modelId: string (length 5-122)
author: string (length 2-42)
last_modified: unknown
downloads: int64 (0-434M)
likes: int64 (0-6.53k)
library_name: string (352 classes)
tags: sequence (length 1-4.05k)
pipeline_tag: string (51 classes)
createdAt: unknown
card: string (length 1-913k)
cleanrl/Walker2d-v2-ddpg_continuous_action-seed1
cleanrl
"2023-06-29T19:36:28Z"
0
0
cleanrl
[ "cleanrl", "tensorboard", "Walker2d-v2", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-06-29T19:36:20Z"
--- tags: - Walker2d-v2 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Walker2d-v2 type: Walker2d-v2 metrics: - type: mean_reward value: 993.74 +/- 1095.19 name: mean_reward verified: false --- # (CleanRL) **DDPG** Agent Playing **Walker2d-v2** This is a trained model of a DDPG agent playing Walker2d-v2. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ddpg_continuous_action]" python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action --env-id Walker2d-v2 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Walker2d-v2-ddpg_continuous_action-seed1/raw/main/ddpg_continuous_action.py curl -OL https://huggingface.co/cleanrl/Walker2d-v2-ddpg_continuous_action-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Walker2d-v2-ddpg_continuous_action-seed1/raw/main/poetry.lock poetry install --all-extras python ddpg_continuous_action.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Walker2d-v2 --seed 1 ``` # Hyperparameters ```python {'batch_size': 256, 'buffer_size': 1000000, 'capture_video': True, 'cuda': True, 'env_id': 'Walker2d-v2', 'exp_name': 'ddpg_continuous_action', 'exploration_noise': 0.1, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.0003, 'learning_starts': 25000.0, 'noise_clip': 0.5, 'policy_frequency': 2, 'save_model': True, 'seed': 1, 'tau': 0.005, 'torch_deterministic': True, 'total_timesteps': 1000000, 'track': True, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
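A note on the `tau` entry in the hyperparameters above: it controls DDPG's soft (Polyak) target-network update. The sketch below is illustrative rather than CleanRL's verbatim code, though it computes the same standard update used by DDPG implementations such as `ddpg_continuous_action.py`.

```python
# Illustrative sketch of the soft target-network update that `tau` controls.
import torch

def soft_update(target_net: torch.nn.Module, online_net: torch.nn.Module, tau: float = 0.005) -> None:
    """Polyak-average the online parameters into the target network."""
    with torch.no_grad():
        for target_param, param in zip(target_net.parameters(), online_net.parameters()):
            target_param.data.copy_(tau * param.data + (1.0 - tau) * target_param.data)
```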
josh-sematic/flan-small-cnn-summary
josh-sematic
"2023-06-29T19:42:47Z"
0
0
null
[ "dataset:cnn_dailymail", "region:us" ]
null
"2023-06-29T19:37:29Z"
--- datasets: - cnn_dailymail --- Just a small example fine tuning of [https://huggingface.co/google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the cnn_dailymail dataset for the purpose of summarization.
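A minimal usage sketch (not part of the original card): assuming the repository `josh-sematic/flan-small-cnn-summary` contains standard seq2seq (T5-style) weights and tokenizer files, it could be loaded with the generic transformers summarization pipeline.

```python
# Hedged example; the repo id comes from this listing, everything else is illustrative.
from transformers import pipeline

summarizer = pipeline("summarization", model="josh-sematic/flan-small-cnn-summary")

article = "..."  # placeholder: paste a CNN/DailyMail-style news article here
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```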
Flanimations/donut
Flanimations
"2023-07-01T16:24:58Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T19:41:11Z"
--- license: openrail ---
nicegame/Demoman
nicegame
"2023-06-29T19:41:57Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T19:41:11Z"
--- license: openrail ---
jzmsft/codeparrot-small
jzmsft
"2023-07-19T04:26:46Z"
0
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-06-29T19:42:31Z"
Entry not found
gio69/piccolo
gio69
"2023-06-29T19:44:55Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T19:44:54Z"
Entry not found
nicegame/Engineer
nicegame
"2023-06-29T19:47:55Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T19:46:57Z"
--- license: openrail ---
TheBloke/GPlatty-30B-SuperHOT-8K-GGML
TheBloke
"2023-06-29T21:02:38Z"
0
2
null
[ "arxiv:2302.13971", "license:other", "region:us" ]
null
"2023-06-29T19:47:40Z"
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Lilloukas' GPlatty 30B GGML These files are GGML format model files for [Lilloukas' GPlatty 30B](https://huggingface.co/lilloukas/GPlatty-30B). These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev). In order to use the increased context length, you can presently use: * [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later. Support is also expected to come to llama.cpp, however it is still being worked on and there is currently no ETA for that. To use the increased context with KoboldCpp and (when supported) llama.cpp, simply use `--contextsize` to set the desired context, e.g. `--contextsize 4096` or `--contextsize 8192`. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/GPlatty-30B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/GPlatty-30B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/GPlatty-30B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lilloukas/GPlatty-30B) <!-- compatibility_ggml start --> ## Compatibility These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However, the increased context length won't work without specific support. See the note in the introduction for details on using increased context. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256.
All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | gplatty-30b-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | gplatty-30b-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | gplatty-30b-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | gplatty-30b-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | gplatty-30b-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | gplatty-30b-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | gplatty-30b-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | gplatty-30b-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | gplatty-30b-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `koboldcpp` On Linux I use the following command line to launch the KoboldCpp UI with GPU acceleration and a context size of 4096: ``` python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 gplatty-30b-superhot-8k.ggmlv3.q5_0.bin ``` Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration. For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: Lilloukas' GPlatty 30B # Information GPlatty-30B is a merge of [lilloukas/Platypus-30B](https://huggingface.co/lilloukas/Platypus-30B) and [chansung/gpt4-alpaca-lora-30b](https://huggingface.co/chansung/gpt4-alpaca-lora-30b) | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 63.6 | | ARC (25-shot) | 66.0 | | HellaSwag (10-shot) | 84.8 | | TruthfulQA (0-shot) | 53.8 | | Avg. | 67.0 | We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above. ## Model Details * **Trained by**: Platypus-30B trained by Cole Hunter & Ariel Lee; gpt4-alpaca-lora-30b by chansung. * **Model type:** **GPlatty-30B** is an auto-regressive language model based on the LLaMA transformer architecture. * **Language(s)**: English * **License for base weights**: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md). 
| Hyperparameter | Value | |---------------------------|-------| | \\(n_\text{parameters}\\) | 33B | | \\(d_\text{model}\\) | 6656 | | \\(n_\text{layers}\\) | 60 | | \\(n_\text{heads}\\) | 52 | ## Reproducing Evaluation Results Install LM Evaluation Harness: ``` git clone https://github.com/EleutherAI/lm-evaluation-harness cd lm-evaluation-harness pip install -e . ``` Each task was evaluated on a single A100 80GB GPU. ARC: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/arc_challenge_25shot.json --device cuda --num_fewshot 25 ``` HellaSwag: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/hellaswag_10shot.json --device cuda --num_fewshot 10 ``` MMLU: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/mmlu_5shot.json --device cuda --num_fewshot 5 ``` TruthfulQA: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/truthfulqa_0shot.json --device cuda ``` ## Limitations and bias The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA paper. We have not performed any studies to determine how fine-tuning on the aforementioned datasets affect the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly. ## Citations ```bibtex @article{touvron2023llama, title={LLaMA: Open and Efficient Foundation Language Models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } @article{hu2021lora, title={LoRA: Low-Rank Adaptation of Large Language Models}, author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu}, journal={CoRR}, year={2021} } ```
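As a rough back-of-the-envelope check (not part of the original card), the hyperparameter table above is self-consistent: the per-head dimension and approximate parameter count follow from d_model, n_heads and n_layers.

```python
# Illustrative sanity check of the GPlatty-30B hyperparameter table.
# A LLaMA-style decoder layer holds roughly 12 * d_model^2 weights
# (about 4*d_model^2 in attention projections plus ~8*d_model^2 in the gated MLP).
d_model, n_layers, n_heads = 6656, 60, 52

head_dim = d_model // n_heads           # 128, the usual LLaMA head size
approx_params = 12 * n_layers * d_model ** 2

print(head_dim)                         # 128
print(f"{approx_params / 1e9:.1f}B")    # ~31.9B, consistent with the ~33B figure
```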
LIF15/tis-1
LIF15
"2023-06-29T19:49:32Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T19:49:32Z"
Entry not found
nicegame/Medic
nicegame
"2023-06-29T19:51:41Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T19:51:19Z"
--- license: openrail ---
nicegame/Pyro
nicegame
"2023-06-29T19:52:44Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T19:52:27Z"
--- license: openrail ---
nicegame/Sniper
nicegame
"2023-06-29T19:53:44Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T19:53:13Z"
--- license: openrail ---
YasG/maskimages
YasG
"2023-07-03T19:31:59Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T19:53:53Z"
--- license: openrail ---
MrM0dZ/sansudertale
MrM0dZ
"2023-06-29T19:56:08Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T19:55:46Z"
Entry not found
nicegame/Soldier
nicegame
"2023-06-29T19:56:39Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T19:56:02Z"
--- license: openrail ---
nicegame/Spy
nicegame
"2023-06-29T19:56:52Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T19:56:30Z"
--- license: openrail ---
BlitzKrieg/chicken-v2
BlitzKrieg
"2023-06-29T20:27:26Z"
0
1
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T19:56:33Z"
--- license: openrail ---
nolanaatama/plnktnspngbbrvcv21kpchs69kstpsllr
nolanaatama
"2023-06-29T20:08:46Z"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2023-06-29T19:59:19Z"
--- license: creativeml-openrail-m ---
leeloli/J-STAYC
leeloli
"2023-06-29T20:03:54Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T20:02:02Z"
--- license: openrail ---
adhamelarabawy/human_presence_classifier
adhamelarabawy
"2023-06-29T21:37:18Z"
0
2
null
[ "image-classification", "en", "dataset:adhamelarabawy/fashion_human_classification", "region:us" ]
image-classification
"2023-06-29T20:10:49Z"
--- datasets: - adhamelarabawy/fashion_human_classification language: - en pipeline_tag: image-classification --- # Human Presence Classification #### CLIP-Based Linear Probe Logistic Regression classification model to detect the presence of humans in fashion-domain images. @author: Adham Elarabawy (www.adhamelarabawy.com) ## Overview I needed a human presence classification model to help with structuring a very large scraped dataset of fashion imagery. CLIP-based similarity scoring was not sufficient, since the desired precision would result in a substantial drop rate. I trained a logistic model on top of CLIP image features as a linear probe for classification, using DeepFashion paired images. Achieved 100% accuracy on the test set (20% = ~2k imgs). Definitely overfit to fashion imagery, but that's fine since that's the downstream use case. This is extremely low latency, especially if you've already encoded your images using the ViT-B/32 CLIP variant. On an A10, it takes ~23 milliseconds to encode the image, and ~0.28 milliseconds to classify the features. ## Dataset I used a subset of DeepFashion v1 in order to curate a dataset of paired images of a garment and then the garment on a person. I then used this structuring to create the final dataset with binary labels of human presence. Some notes: - The images seem to be predominantly women. - The human models seem to have good coverage on most ethnicities/body types. Early analysis also shows that there is not any ethnicity/body type bias. - Most/all the images have a white background. From my testing, the model generalizes quite well to other domains (with natural/diverse backgrounds/poses). - My hypothesis is that the paired nature of the data allowed the model to pick up on the correct features, which has made it very robust. |Presence Case|Absence Case| |---|---| |<img src="https://datasets-server.huggingface.co/cached-assets/forgeml/viton_hd/--/forgeml--viton_hd/train/226/image/image.jpg" width="100px">|<img src="https://datasets-server.huggingface.co/cached-assets/forgeml/viton_hd/--/forgeml--viton_hd/train/226/cloth/image.jpg" width="100px">| ## Usage: ```python import clip import torch import pickle import sklearn import time from PIL import Image from huggingface_hub import hf_hub_download device = "cuda" if torch.cuda.is_available() else "cpu" clip_model, clip_preprocess = clip.load("ViT-B/32", device) repo_id = "adhamelarabawy/fashion_human_classifier" model_path = hf_hub_download(repo_id=repo_id, filename="model.pkl") with open(model_path, 'rb') as file: human_classifier = pickle.load(file) # load the image to classify ("example.jpg" is a placeholder path) img = Image.open("example.jpg") # time the prediction start = time.time() features = clip_model.encode_image(clip_preprocess(img).unsqueeze(0).to(device)).detach().cpu().numpy() encode_time = time.time() - start pred = human_classifier.predict(features) # True = has human, False = no human pred_time = time.time() - encode_time - start print(f"Encode time: {encode_time*1000:.3f} milliseconds") print(f"Prediction time: {pred_time*1000:.3f} milliseconds") print(f"Prediction (has_human): {pred}") ```
cleanrl/Hopper-v2-ddpg_continuous_action-seed1
cleanrl
"2023-06-29T20:13:05Z"
0
0
cleanrl
[ "cleanrl", "tensorboard", "Hopper-v2", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-06-29T20:12:59Z"
--- tags: - Hopper-v2 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Hopper-v2 type: Hopper-v2 metrics: - type: mean_reward value: 2392.40 +/- 742.56 name: mean_reward verified: false --- # (CleanRL) **DDPG** Agent Playing **Hopper-v2** This is a trained model of a DDPG agent playing Hopper-v2. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ddpg_continuous_action]" python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action --env-id Hopper-v2 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Hopper-v2-ddpg_continuous_action-seed1/raw/main/ddpg_continuous_action.py curl -OL https://huggingface.co/cleanrl/Hopper-v2-ddpg_continuous_action-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Hopper-v2-ddpg_continuous_action-seed1/raw/main/poetry.lock poetry install --all-extras python ddpg_continuous_action.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Hopper-v2 --seed 1 ``` # Hyperparameters ```python {'batch_size': 256, 'buffer_size': 1000000, 'capture_video': True, 'cuda': True, 'env_id': 'Hopper-v2', 'exp_name': 'ddpg_continuous_action', 'exploration_noise': 0.1, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.0003, 'learning_starts': 25000.0, 'noise_clip': 0.5, 'policy_frequency': 2, 'save_model': True, 'seed': 1, 'tau': 0.005, 'torch_deterministic': True, 'total_timesteps': 1000000, 'track': True, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
gilzamir/ChatCommand
gilzamir
"2023-06-29T20:16:55Z"
0
0
null
[ "license:mit", "region:us" ]
null
"2023-06-29T20:14:07Z"
--- license: mit ---
Javierg05/Jv
Javierg05
"2023-06-29T20:19:29Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T20:19:29Z"
Entry not found
tranquangchung/whisper-small-hi
tranquangchung
"2023-06-29T20:20:16Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T20:20:16Z"
Entry not found
therajmaurya/Falcon-40B-Instruct-QLoRA-PersonalFinance
therajmaurya
"2023-06-29T20:20:25Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2023-06-29T20:20:25Z"
--- license: apache-2.0 ---
Hazza1/Odin
Hazza1
"2023-06-29T20:29:14Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T20:24:41Z"
Entry not found
camenduru/PTI
camenduru
"2023-06-29T20:34:31Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T20:26:39Z"
Entry not found
Flyleaf/1979EltonJohn
Flyleaf
"2023-06-29T20:29:31Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T20:28:14Z"
--- license: openrail ---
or90/results16
or90
"2023-06-29T20:30:29Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T20:30:27Z"
Entry not found
platzi/clasificacion-DBpedia-juan-arevalo
platzi
"2023-06-29T20:35:57Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T20:32:58Z"
Entry not found
TheBloke/CAMEL-13B-Role-Playing-Data-SuperHOT-8K-GGML
TheBloke
"2023-06-29T21:24:32Z"
0
6
null
[ "arxiv:2303.17760", "license:other", "region:us" ]
null
"2023-06-29T20:36:06Z"
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Camel AI's CAMEL 13B Role Playing Data GGML These files are GGML format model files for [Camel AI's CAMEL 13B Role Playing Data](https://huggingface.co/camel-ai/CAMEL-13B-Role-Playing-Data). These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev). In order to use the increased context length, you can presently use: * [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later. Support is also expected to come to llama.cpp, however it is still being worked on and there is currently no ETA for that. To use the increased context with KoboldCpp and (when supported) llama.cpp, simply use `--contextsize` to set the desired context, e.g. `--contextsize 4096` or `--contextsize 8192`. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/CAMEL-13B-Role-Playing-Data-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/CAMEL-13B-Role-Playing-Data-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/CAMEL-13B-Role-Playing-Data-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/camel-ai/CAMEL-13B-Role-Playing-Data) <!-- compatibility_ggml start --> ## Compatibility These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However, the increased context length won't work without specific support. See the note in the introduction for details on using increased context. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization.
Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | camel-13b-role-playing-data-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | camel-13b-role-playing-data-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | camel-13b-role-playing-data-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | camel-13b-role-playing-data-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | camel-13b-role-playing-data-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | camel-13b-role-playing-data-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | camel-13b-role-playing-data-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | camel-13b-role-playing-data-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | camel-13b-role-playing-data-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `koboldcpp` On Linux I use the following command line to launch the KoboldCpp UI with GPU acceleration and a context size of 4096: ``` python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 camel-13b-role-playing-data-superhot-8k.ggmlv3.q5_0.bin ``` Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration. For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: Camel AI's CAMEL 13B Role Playing Data CAMEL-13B-Role-Playing-Data is a chat large language model obtained by finetuning LLaMA-13B model on a total of 229K conversations created through our role-playing framework proposed in [CAMEL](https://arxiv.org/abs/2303.17760). We evaluate our model offline using EleutherAI's language model evaluation harness used by Huggingface's Open LLM Benchmark. CAMEL-13B scores an average of 57.2. --- license: cc-by-nc-4.0 ---
rajanchaturvedi/text-sql
rajanchaturvedi
"2023-06-29T20:39:10Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T20:36:38Z"
Entry not found
William0609/sb
William0609
"2023-06-29T20:37:46Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T20:37:46Z"
Entry not found
ZoeyM/wav2vec2-large-mms-1b-turkish-colab
ZoeyM
"2023-07-09T14:34:52Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T20:44:07Z"
Entry not found
jdawnduan/Reinforce-Cartpole1
jdawnduan
"2023-06-29T20:44:19Z"
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-06-29T20:44:07Z"
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Cartpole1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
cleanrl/InvertedPendulum-v2-ddpg_continuous_action-seed1
cleanrl
"2023-06-29T20:47:09Z"
0
0
cleanrl
[ "cleanrl", "tensorboard", "InvertedPendulum-v2", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-06-29T20:46:59Z"
--- tags: - InvertedPendulum-v2 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: InvertedPendulum-v2 type: InvertedPendulum-v2 metrics: - type: mean_reward value: 403.30 +/- 376.83 name: mean_reward verified: false --- # (CleanRL) **DDPG** Agent Playing **InvertedPendulum-v2** This is a trained model of a DDPG agent playing InvertedPendulum-v2. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ddpg_continuous_action]" python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action --env-id InvertedPendulum-v2 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/InvertedPendulum-v2-ddpg_continuous_action-seed1/raw/main/ddpg_continuous_action.py curl -OL https://huggingface.co/cleanrl/InvertedPendulum-v2-ddpg_continuous_action-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/InvertedPendulum-v2-ddpg_continuous_action-seed1/raw/main/poetry.lock poetry install --all-extras python ddpg_continuous_action.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id InvertedPendulum-v2 --seed 1 ``` # Hyperparameters ```python {'batch_size': 256, 'buffer_size': 1000000, 'capture_video': True, 'cuda': True, 'env_id': 'InvertedPendulum-v2', 'exp_name': 'ddpg_continuous_action', 'exploration_noise': 0.1, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.0003, 'learning_starts': 25000.0, 'noise_clip': 0.5, 'policy_frequency': 2, 'save_model': True, 'seed': 1, 'tau': 0.005, 'torch_deterministic': True, 'total_timesteps': 1000000, 'track': True, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
andytsuii/FasonModel
andytsuii
"2023-06-29T20:51:20Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T20:51:19Z"
Entry not found
jdawnduan/Reinforce-pixelcopter1
jdawnduan
"2023-06-29T20:52:09Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T20:52:09Z"
Entry not found
William0609/trained_model
William0609
"2023-06-29T21:46:17Z"
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "detr", "object-detection", "generated_from_trainer", "dataset:imagefolder", "endpoints_compatible", "region:us" ]
object-detection
"2023-06-29T20:52:47Z"
--- tags: - generated_from_trainer datasets: - imagefolder model-index: - name: trained_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trained_model This model was trained from scratch on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 200 ### Training results ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
Leomorales/test
Leomorales
"2023-06-29T20:57:52Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T20:57:52Z"
Entry not found
wearnocap/bvb
wearnocap
"2023-06-29T21:00:23Z"
0
0
null
[ "license:cc", "region:us" ]
null
"2023-06-29T21:00:23Z"
--- license: cc ---
alitair/dqn-SpaceInvadersNoFrameskip-v4
alitair
"2023-06-29T21:02:18Z"
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-06-29T21:01:33Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 818.50 +/- 321.08 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alitair -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alitair -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alitair ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
gdemerald/testbot
gdemerald
"2023-06-29T21:02:22Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T21:02:22Z"
Entry not found
slayyxxlol/2PacRVC2
slayyxxlol
"2023-06-29T21:09:02Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T21:02:36Z"
--- license: openrail ---
miki-kawa/vit-base-patch16-224-in21k-finetuned-lora2-food101
miki-kawa
"2023-06-29T21:03:18Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T21:03:18Z"
Entry not found
nesuter/qa_model_output
nesuter
"2023-06-29T21:06:27Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T21:06:27Z"
Entry not found
anti10194/zibidicozmik
anti10194
"2023-06-29T21:07:43Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T21:07:08Z"
--- license: openrail ---
pedrommmmmmmmm/p
pedrommmmmmmmm
"2023-06-29T21:11:13Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T21:11:13Z"
Entry not found
cattleherd/healthyornnot
cattleherd
"2023-06-29T21:21:13Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2023-06-29T21:18:21Z"
--- license: apache-2.0 ---
Gurumoorthy/q-FrozenLake-v1-4x4-noSlippery
Gurumoorthy
"2023-06-29T21:18:27Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-06-29T21:18:25Z"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Gurumoorthy/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
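The usage snippet above relies on a `load_from_hub` helper defined in the Deep Reinforcement Learning Course notebooks; a minimal stand-in (an illustration under that assumption, not this repository's code) could look like the following.

```python
# Hypothetical minimal replacement for the course's `load_from_hub` helper:
# download the pickled Q-table dictionary from the Hub and deserialize it.
import pickle
import gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="Gurumoorthy/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # the card notes is_slippery=False may be needed
```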
Gurumoorthy/q-Taxi-v3
Gurumoorthy
"2023-06-29T21:22:42Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-06-29T21:21:05Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Gurumoorthy/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
TheBloke/UltraLM-13B-GGML
TheBloke
"2023-06-29T22:08:10Z"
0
14
null
[ "dataset:stingning/ultrachat", "arxiv:2305.14233", "license:other", "region:us" ]
null
"2023-06-29T21:21:17Z"
--- inference: false license: other datasets: - stingning/ultrachat --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Open BMB's UltraLM 13B GGML These files are GGML format model files for [Open BMB's UltraLM 13B](https://huggingface.co/openbmb/UltraLM-13b). **Note**: I cannot make GGML k-quants for this model due to its vocab size of 32,001. Please see Compatibility below for more detail. GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [text-generation-webui](https://github.com/oobabooga/text-generation-webui) * [KoboldCpp](https://github.com/LostRuins/koboldcpp) * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) * [ctransformers](https://github.com/marella/ctransformers) ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/UltraLM-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/UltraLM-13B-GGML) * [Merged, unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/UltraLM-13B-fp16) ## Prompt template: Vicuna 1.1 ``` USER: prompt ASSISTANT: ``` <!-- compatibility_ggml start --> ## Compatibility ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`. These are guaranteed to be compatible with any UIs, tools and libraries released since late May. ### New k-quant methods not compatible with this model at this time Unfortunately, this model has a vocab size of 32,001. This breaks compatibility with the new GGML k-quant method. I cannot make k-quants for this reason. For further explanation, please see: - https://huggingface.co/openbmb/UltraLM-13b/discussions/1 - https://github.com/ggerganov/llama.cpp/issues/1919 <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | ultralm-13b.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. | | ultralm-13b.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. | | ultralm-13b.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | ultralm-13b.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference.
| | ultralm-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `llama.cpp` I use the following command line; adjust for your tastes and needs: ``` ./main -t 10 -ngl 32 -m ultralm-13b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:" ``` If you're able to use full GPU offloading, you should use `-t 1` to get best performance. If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance. Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md). <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Open BMB's UltraLM 13B # UltraLM-13b <!-- Provide a quick summary of what the model is/does. 
--> This is UltraLM-13b delta weights, a chat language model trained upon [UltraChat](https://github.com/thunlp/UltraChat) ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> The model is fine-tuned based on LLaMA-13b with a multi-turn chat-format template as below ``` User: instruction 1<eos_token> Assistant: response 1<eos_token> User: instruction 2<eos_token> Assistant: response 2<eos_token> ... ``` - **License:** UltraLM is based on LLaMA and should be used under LLaMA's [model license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md). - **Finetuned from model:** LLaMA-13b - **Finetuned on data:** [UltraChat](https://github.com/thunlp/UltraChat) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** [UltraChat](https://github.com/thunlp/UltraChat) - **Paper:** [arxiv](https://arxiv.org/abs/2305.14233) - **Demo:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> To use this model, you need to [recover](https://github.com/thunlp/UltraChat/tree/main/UltraLM) the full model from the delta weights and perform inference following the template below: ``` [Optional]User: system prompt<eos_token> User: user input<eos_token> Assistant: ```
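An illustrative Python loading sketch (not from either card above): the GGML files can also be driven through llama-cpp-python, which the card lists as a compatible library, provided a 2023-era release that still reads GGMLv3 files is used (current releases expect GGUF). The file name and generation settings below are placeholders; the prompt follows the Vicuna 1.1 template given earlier.

```python
# Illustrative only: requires an older llama-cpp-python build with GGMLv3 support
# and a local copy of one of the quantised files, e.g. ultralm-13b.ggmlv3.q4_0.bin.
from llama_cpp import Llama

llm = Llama(model_path="ultralm-13b.ggmlv3.q4_0.bin", n_ctx=2048, n_gpu_layers=32)

prompt = "USER: Write a short story about llamas.\nASSISTANT:"
out = llm(prompt, max_tokens=256, temperature=0.7, stop=["USER:"])
print(out["choices"][0]["text"])
```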
anti10194/crae
anti10194
"2023-06-29T21:34:34Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T21:21:27Z"
--- license: openrail ---
byrex/byrex
byrex
"2023-06-29T21:24:09Z"
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
"2023-06-29T21:24:09Z"
--- license: bigcode-openrail-m ---
Jackmin108/rb_model
Jackmin108
"2023-06-29T21:24:47Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T21:24:47Z"
Entry not found
TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GGML
TheBloke
"2023-06-29T21:47:27Z"
0
11
null
[ "license:other", "region:us" ]
null
"2023-06-29T21:25:19Z"
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Austism's Chronos Hermes 13B GGML These files are GGML format model files for [Austism's Chronos Hermes 13B](https://huggingface.co/Austism/chronos-hermes-13b). These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev). In order to use the increased context length, you can presently use: * [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later. Support is also expected to come to llama.cpp, however it is still being worked on and there is currently no ETA for that. To use the increased context with KoboldCpp and (when supported) llama.cpp, simply use `--contextsize` to set the desired context, e.g. `--contextsize 4096` or `--contextsize 8192`. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Chronos-Hermes-13B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Austism/chronos-hermes-13b) <!-- compatibility_ggml start --> ## Compatibility These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However, the increased context length won't work without specific support. See the note in the introduction for details on using increased context. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results.
The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | chronos-hermes-13b-superhot-8k.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB | 9.82 GB | Original llama.cpp quant method, 4-bit. | | chronos-hermes-13b-superhot-8k.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB | 10.64 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However it has quicker inference than q5 models. | | chronos-hermes-13b-superhot-8k.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB | 11.45 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. | | chronos-hermes-13b-superhot-8k.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB | 12.26 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. | | chronos-hermes-13b-superhot-8k.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `koboldcpp` On Linux I use the following command line to launch the KoboldCpp UI with CUDA acceleration and a context size of 4096: ``` python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas --gpulayers 100 --contextsize 4096 chronos-hermes-13b-superhot-8k.ggmlv3.q5_0.bin ``` Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration. For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: Austism's Chronos Hermes 13B ([chronos-13b](https://huggingface.co/elinas/chronos-13b) + [Nous-Hermes-13b](https://huggingface.co/NousResearch/Nous-Hermes-13b)) 75/25 merge This has the aspects of chronos's nature to produce long, descriptive outputs. But with additional coherency and an ability to better obey instructions. Resulting in this model having a great ability to produce proactive storywriting and follow a narrative. This mix contains alot of chronos's writing style and 'flavour' with far less tendency of going AWOL and spouting nonsensical babble. This result was much more successful than my [first chronos merge](https://huggingface.co/Austism/chronos-wizardlm-uc-scot-st-13b).
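The SuperHOT section above says to change the RoPE scaling factor to 0.25 and the maximum sequence length to 8192 when not using the monkeypatch. A minimal sketch of that position-interpolation idea in plain PyTorch (the function name and shapes are illustrative, not the actual monkeypatch code):

```python
import torch

def scaled_rope_angles(seq_len: int, head_dim: int, base: float = 10000.0,
                       scale: float = 0.25) -> torch.Tensor:
    """Rotary-embedding angles with positions compressed by `scale`, so an
    8192-token window maps onto the 2048 positions the base model was trained on."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(seq_len).float() * scale   # 8192 * 0.25 = 2048
    return torch.outer(positions, inv_freq)              # (seq_len, head_dim // 2)

angles = scaled_rope_angles(seq_len=8192, head_dim=128)
print(angles.shape)  # torch.Size([8192, 64])
```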
BibEBobberson/BibRVC
BibEBobberson
"2024-07-20T01:59:30Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T21:25:37Z"
--- license: openrail ---
miki-kawa/vit-base-patch16-224-in21k-finetuned-lora4-food101
miki-kawa
"2023-06-29T21:35:58Z"
0
0
null
[ "pytorch", "tensorboard", "region:us" ]
null
"2023-06-29T21:26:36Z"
Entry not found
vikas85/whisper-combined
vikas85
"2023-06-29T21:40:36Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T21:31:36Z"
Entry not found
cleanrl/Humanoid-v2-ddpg_continuous_action-seed1
cleanrl
"2023-06-29T21:33:32Z"
0
0
cleanrl
[ "cleanrl", "tensorboard", "Humanoid-v2", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-06-29T21:33:26Z"
--- tags: - Humanoid-v2 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Humanoid-v2 type: Humanoid-v2 metrics: - type: mean_reward value: 672.28 +/- 374.96 name: mean_reward verified: false --- # (CleanRL) **DDPG** Agent Playing **Humanoid-v2** This is a trained model of a DDPG agent playing Humanoid-v2. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ddpg_continuous_action]" python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action --env-id Humanoid-v2 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Humanoid-v2-ddpg_continuous_action-seed1/raw/main/ddpg_continuous_action.py curl -OL https://huggingface.co/cleanrl/Humanoid-v2-ddpg_continuous_action-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Humanoid-v2-ddpg_continuous_action-seed1/raw/main/poetry.lock poetry install --all-extras python ddpg_continuous_action.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Humanoid-v2 --seed 1 ``` # Hyperparameters ```python {'batch_size': 256, 'buffer_size': 1000000, 'capture_video': True, 'cuda': True, 'env_id': 'Humanoid-v2', 'exp_name': 'ddpg_continuous_action', 'exploration_noise': 0.1, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.0003, 'learning_starts': 25000.0, 'noise_clip': 0.5, 'policy_frequency': 2, 'save_model': True, 'seed': 1, 'tau': 0.005, 'torch_deterministic': True, 'total_timesteps': 1000000, 'track': True, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
jamsinit/Test
jamsinit
"2023-06-29T21:34:19Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T21:34:19Z"
Entry not found
soulnull/rvcmodels
soulnull
"2023-08-08T00:28:24Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T21:37:38Z"
--- license: openrail ---
or90/results17
or90
"2023-06-29T21:41:18Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T21:41:16Z"
Entry not found
akbrh/Fghj
akbrh
"2023-06-29T21:43:22Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T21:43:22Z"
Entry not found
MusicBox27/CodeLyokoRVC
MusicBox27
"2023-06-29T21:49:02Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T21:45:51Z"
--- license: openrail ---
TheBloke/GPT4All-13B-Snoozy-SuperHOT-8K-GGML
TheBloke
"2023-06-29T22:05:14Z"
0
3
null
[ "license:other", "region:us" ]
null
"2023-06-29T21:47:37Z"
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Nomic.ai's GPT4All Snoozy 13B GGML These files are GGML format model files for [Nomic.ai's GPT4All Snoozy 13B](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy). These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev). In order to use the increased context length, you can presently use: * [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later. Support is also expected to come to llama.cpp, however it is still being worked on and there is currently no ETA for that. To use the increased context with KoboldCpp and (when supported) llama.cpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/GPT4All-13B-Snoozy-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/GPT4All-13B-Snoozy-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/GPT4All-13B-Snoozy-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy) <!-- compatibility_ggml start --> ## Compatibility These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However the increased context length won't work without specific support. See the note in the introduction for details on using increased context. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. 
The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | gpt4all-snoozy-13b-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | gpt4all-snoozy-13b-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | gpt4all-snoozy-13b-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | gpt4all-snoozy-13b-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | gpt4all-snoozy-13b-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | gpt4all-snoozy-13b-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | gpt4all-snoozy-13b-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | gpt4all-snoozy-13b-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | gpt4all-snoozy-13b-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `koboldcpp` On Linux I use the following command line to launch the KoboldCpp UI with OpenCL aceleration and a context size of 4096: ``` python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas 100 gpt4all-snoozy-13b-superhot-8k.ggmlv3.q5_0.bin ``` Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration. For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. 
* Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: Nomic.ai's GPT4All Snoozy 13B # Model Card for GPT4All-13b-snoozy A GPL licensed chatbot trained over a massive curated corpus of assistant interactions including word problems, multi-turn dialogue, code, poems, songs, and stories. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This model has been finetuned from LLama 13B - **Developed by:** [Nomic AI](https://home.nomic.ai) - **Model Type:** A finetuned LLama 13B model on assistant style interaction data - **Language(s) (NLP):** English - **License:** GPL - **Finetuned from model [optional]:** LLama 13B This model was trained on `nomic-ai/gpt4all-j-prompt-generations` using `revision=v1.3-groovy` ### Model Sources [optional] <!-- Provide the basic links for the model. 
--> - **Repository:** [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all) - **Base Model Repository:** [https://github.com/facebookresearch/llama](https://github.com/facebookresearch/llama) - **Demo [optional]:** [https://gpt4all.io/](https://gpt4all.io/) ### Results Results on common sense reasoning benchmarks ``` | Model | BoolQ | PIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | Avg. | |:--------------------------|:--------:|:--------:|:---------:|:----------:|:--------:|:--------:|:--------:|:--------:| | GPT4All-J 6B v1.0 | 73.4 | 74.8 | 63.4 | 64.7 | 54.9 | 36.0 | 40.2 | 58.2 | | GPT4All-J v1.1-breezy | 74.0 | 75.1 | 63.2 | 63.6 | 55.4 | 34.9 | 38.4 | 57.8 | | GPT4All-J v1.2-jazzy | 74.8 | 74.9 | 63.6 | 63.8 | 56.6 | 35.3 | 41.0 | 58.6 | | GPT4All-J v1.3-groovy | 73.6 | 74.3 | 63.8 | 63.5 | 57.7 | 35.0 | 38.8 | 58.1 | | GPT4All-J Lora 6B | 68.6 | 75.8 | 66.2 | 63.5 | 56.4 | 35.7 | 40.2 | 58.1 | | GPT4All LLaMa Lora 7B | 73.1 | 77.6 | 72.1 | 67.8 | 51.1 | 40.4 | 40.2 | 60.3 | | GPT4All 13B snoozy | **83.3** | 79.2 | 75.0 | **71.3** | 60.9 | 44.2 | 43.4 | **65.3** | | Dolly 6B | 68.8 | 77.3 | 67.6 | 63.9 | 62.9 | 38.7 | 41.2 | 60.1 | | Dolly 12B | 56.7 | 75.4 | 71.0 | 62.2 | 64.6 | 38.5 | 40.4 | 58.4 | | Alpaca 7B | 73.9 | 77.2 | 73.9 | 66.1 | 59.8 | 43.3 | 43.4 | 62.4 | | Alpaca Lora 7B | 74.3 | **79.3** | 74.0 | 68.8 | 56.6 | 43.9 | 42.6 | 62.8 | | GPT-J 6.7B | 65.4 | 76.2 | 66.2 | 64.1 | 62.2 | 36.6 | 38.2 | 58.4 | | LLama 7B | 73.1 | 77.4 | 73.0 | 66.9 | 52.5 | 41.4 | 42.4 | 61.0 | | LLama 13B | 68.5 | 79.1 | 76.2 | 70.1 | 60.0 | **44.6** | 42.2 | 63.0 | | Pythia 6.7B | 63.5 | 76.3 | 64.0 | 61.1 | 61.3 | 35.2 | 37.2 | 57.0 | | Pythia 12B | 67.7 | 76.6 | 67.3 | 63.8 | 63.9 | 34.8 | 38 | 58.9 | | Fastchat T5 | 81.5 | 64.6 | 46.3 | 61.8 | 49.3 | 33.3 | 39.4 | 53.7 | | Fastchat Vicuña 7B | 76.6 | 77.2 | 70.7 | 67.3 | 53.5 | 41.2 | 40.8 | 61.0 | | Fastchat Vicuña 13B | 81.5 | 76.8 | 73.3 | 66.7 | 57.4 | 42.7 | 43.6 | 63.1 | | StableVicuña RLHF | 82.3 | 78.6 | 74.1 | 70.9 | 61.0 | 43.5 | **44.4** | 65.0 | | StableLM Tuned | 62.5 | 71.2 | 53.6 | 54.8 | 52.4 | 31.1 | 33.4 | 51.3 | | StableLM Base | 60.1 | 67.4 | 41.2 | 50.1 | 44.9 | 27.0 | 32.0 | 42.2 | | Koala 13B | 76.5 | 77.9 | 72.6 | 68.8 | 54.3 | 41.0 | 42.8 | 62.0 | | Open Assistant Pythia 12B | 67.9 | 78.0 | 68.1 | 65.0 | 64.2 | 40.4 | 43.2 | 61.0 | | Mosaic mpt-7B | 74.8 | **79.3** | **76.3** | 68.6 | **70.0** | 42.2 | 42.6 | 64.8 | | text-davinci-003 | 88.1 | 83.8 | 83.4 | 75.8 | 83.9 | 63.9 | 51.0 | 75.7 | ```
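The effective bits-per-weight figures quoted for the k-quant methods translate fairly directly into the file sizes in the Provided Files table. A quick back-of-the-envelope check (a sketch; it assumes roughly 13B parameters and ignores metadata and the tensors kept at higher-precision quant types):

```python
def estimate_ggml_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGML file-size estimate: parameter count times effective bits per weight."""
    return n_params * bits_per_weight / 8 / 1e9

# q6_K is listed above at ~6.5625 effective bits per weight.
print(estimate_ggml_size_gb(13.0e9, 6.5625))  # ~10.7 GB, close to the 10.68 GB in the table
# q2_K (~2.5625 bpw) comes out smaller than the listed 5.51 GB because several tensors
# (attention.vw, feed_forward.w2) are stored with GGML_TYPE_Q4_K instead.
print(estimate_ggml_size_gb(13.0e9, 2.5625))  # ~4.2 GB
```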
Gfn729/Teste
Gfn729
"2023-07-03T00:50:54Z"
0
0
null
[ "license:other", "region:us" ]
null
"2023-06-29T21:53:12Z"
--- license: other ---
KingKazma/xsum_gpt2_lora_500_10_20000_8_e-1_s444_v1
KingKazma
"2023-06-29T21:53:18Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T21:53:17Z"
Entry not found
Gus457/Primero1
Gus457
"2023-06-29T21:58:46Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T21:56:48Z"
Entry not found
Juancarlosjake/Analisis_de_sentimient
Juancarlosjake
"2023-06-29T21:58:50Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T21:58:50Z"
Entry not found
TheBloke/llama-30b-supercot-SuperHOT-8K-GGML
TheBloke
"2023-06-29T22:52:36Z"
0
2
null
[ "license:other", "region:us" ]
null
"2023-06-29T22:05:26Z"
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Ausboss' Llama 30B SuperCOT GGML These files are GGML format model files for [Ausboss' Llama 30B SuperCOT](https://huggingface.co/ausboss/llama-30b-supercot). These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev). In order to use the increased context length, you can presently use: * [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later. Support is also expected to come to llama.cpp, however it is still being worked on and there is currently no ETA for that. To use the increased context with KoboldCpp and (when supported) llama.cpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/llama-30b-supercot-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/llama-30b-supercot-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/llama-30b-supercot-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ausboss/llama-30b-supercot) <!-- compatibility_ggml start --> ## Compatibility These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However the increased context length won't work without specific support. See the note in the introduction for details on using increased context. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. 
The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | llama-30b-supercot-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 13.71 GB | 16.21 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | llama-30b-supercot-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.28 GB | 19.78 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | llama-30b-supercot-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.72 GB | 18.22 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | llama-30b-supercot-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 14.06 GB | 16.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | llama-30b-supercot-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.62 GB | 22.12 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | llama-30b-supercot-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.36 GB | 20.86 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | llama-30b-supercot-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.05 GB | 25.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | llama-30b-supercot-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.40 GB | 24.90 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | llama-30b-supercot-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `koboldcpp` On Linux I use the following command line to launch the KoboldCpp UI with OpenCL aceleration and a context size of 4096: ``` python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas 100 llama-30b-supercot-superhot-8k.ggmlv3.q5_0.bin ``` Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration. For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. 
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: Ausboss' Llama 30B SuperCOT Merge of [huggyllama/llama-30b](https://huggingface.co/huggyllama/llama-30b) + [kaiokendev/SuperCOT-LoRA](https://huggingface.co/kaiokendev/SuperCOT-LoRA/edit/main/README.md) Supercot was trained to work with langchain prompting. Load up locally in my custom LLM notebook that uses the Oobabooga modules to load up models: https://github.com/ausboss/Local-LLM-Langchain Then you can add cells from of these other notebooks for testing: https://github.com/gkamradt/langchain-tutorials # From Koikendev Lora page ### Compatibility This LoRA is compatible with any 7B, 13B or 30B 4-bit quantized LLaMa model, including ggml quantized converted bins ### Prompting You should prompt the LoRA the same way you would prompt Alpaca or Alpacino: ``` Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. 
### Instruction: <instruction> ### Input: <any additional context. Remove this if it's not necessary> ### Response: <make sure to leave a single new-line here for optimal results> ``` Remember that with lower parameter sizes, the structure of the prompt becomes more important. The same prompt worded differently can give wildly different answers. Consider using the following suggestion suffixes to improve output quality: - "Think through this step by step" - "Let's think about this logically" - "Explain your reasoning" - "Provide details to support your answer" - "Compare and contrast your answer with alternatives" ### Coming Soon - Tweet fix for 13B and 7B - lower model sizes seem to be extremely sensitive to hashtags at the end of training data responses, especially at longer cutoffs
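A small helper that assembles the prompt format described above (a sketch; the wording comes from the template and the function name is illustrative):

```python
def build_prompt(instruction: str, context: str = "") -> str:
    """Assemble the Alpaca-style prompt shown above; the Input block is dropped
    when no extra context is supplied, as the template's note suggests."""
    prompt = (
        "Below is an instruction that describes a task, paired with an input that provides "
        "further context. Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
    )
    if context:
        prompt += f"### Input:\n{context}\n\n"
    prompt += "### Response:\n"   # leave a single new-line here, per the template
    return prompt

print(build_prompt("Think through this step by step: how many days are in a leap year?"))
```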
Selflabs/LocalGPT
Selflabs
"2023-06-29T22:36:19Z"
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
"2023-06-29T22:06:32Z"
--- license: bigscience-openrail-m ---
Forbu14/falcon_7B_OA_SFT_LORA_DPO
Forbu14
"2023-06-29T22:10:52Z"
0
0
transformers
[ "transformers", "text-generation", "en", "fr", "dataset:Anthropic/hh-rlhf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
"2023-06-29T22:09:42Z"
--- license: apache-2.0 datasets: - Anthropic/hh-rlhf language: - en - fr library_name: transformers pipeline_tag: text-generation ---
cleanrl/Pusher-v2-ddpg_continuous_action-seed1
cleanrl
"2023-06-29T22:10:05Z"
0
0
cleanrl
[ "cleanrl", "tensorboard", "Pusher-v2", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-06-29T22:09:59Z"
--- tags: - Pusher-v2 - deep-reinforcement-learning - reinforcement-learning - custom-implementation library_name: cleanrl model-index: - name: DDPG results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pusher-v2 type: Pusher-v2 metrics: - type: mean_reward value: -45.06 +/- 3.88 name: mean_reward verified: false --- # (CleanRL) **DDPG** Agent Playing **Pusher-v2** This is a trained model of a DDPG agent playing Pusher-v2. The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ddpg_continuous_action.py). ## Get Started To use this model, please install the `cleanrl` package with the following command: ``` pip install "cleanrl[ddpg_continuous_action]" python -m cleanrl_utils.enjoy --exp-name ddpg_continuous_action --env-id Pusher-v2 ``` Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail. ## Command to reproduce the training ```bash curl -OL https://huggingface.co/cleanrl/Pusher-v2-ddpg_continuous_action-seed1/raw/main/ddpg_continuous_action.py curl -OL https://huggingface.co/cleanrl/Pusher-v2-ddpg_continuous_action-seed1/raw/main/pyproject.toml curl -OL https://huggingface.co/cleanrl/Pusher-v2-ddpg_continuous_action-seed1/raw/main/poetry.lock poetry install --all-extras python ddpg_continuous_action.py --track --capture-video --save-model --hf-entity cleanrl --upload-model --env-id Pusher-v2 --seed 1 ``` # Hyperparameters ```python {'batch_size': 256, 'buffer_size': 1000000, 'capture_video': True, 'cuda': True, 'env_id': 'Pusher-v2', 'exp_name': 'ddpg_continuous_action', 'exploration_noise': 0.1, 'gamma': 0.99, 'hf_entity': 'cleanrl', 'learning_rate': 0.0003, 'learning_starts': 25000.0, 'noise_clip': 0.5, 'policy_frequency': 2, 'save_model': True, 'seed': 1, 'tau': 0.005, 'torch_deterministic': True, 'total_timesteps': 1000000, 'track': True, 'upload_model': True, 'wandb_entity': None, 'wandb_project_name': 'cleanRL'} ```
Matius54/WALL-E
Matius54
"2023-06-29T22:21:57Z"
0
0
null
[ "tensorboard", "license:openrail", "region:us" ]
null
"2023-06-29T22:10:05Z"
--- license: openrail --- WALL·E (RVC v2) 840 Epochs 5K Steps\ Dataset + Training done by me\ Dataset length: 58 seconds\ Dub Voice Actor: n/a\ Mangio Crepe hop length: 10\ Batch Size: 6\ Epochs: 300\ Steps: 3.6k
FreeLove/uzunRVC
FreeLove
"2023-06-29T22:14:04Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T22:11:09Z"
--- license: openrail ---
Scarlap/combine_s
Scarlap
"2023-06-29T22:15:42Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T22:13:45Z"
Entry not found
xPXXX/Alpaca_FineTuning_Model
xPXXX
"2023-06-29T22:31:30Z"
0
0
peft
[ "peft", "region:us" ]
null
"2023-06-29T22:27:14Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.4.0.dev0
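The quantization settings listed above correspond to an 8-bit bitsandbytes load. A sketch of how they would typically be expressed when loading the base model with transformers (the base-model name is a placeholder, since the card does not state which model was fine-tuned):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the config above: 8-bit load, int8 threshold 6.0, no fp16 master weights.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)

# "base-model-name" is a placeholder for whichever model was fine-tuned here.
model = AutoModelForCausalLM.from_pretrained(
    "base-model-name",
    quantization_config=bnb_config,
    device_map="auto",
)
```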
WoysfuL/Destiny-Guardians-Ghost-Voice
WoysfuL
"2023-06-29T22:56:15Z"
0
0
null
[ "Destiny", "Ghost", "Voice", "en", "license:openrail", "region:us" ]
null
"2023-06-29T22:28:18Z"
--- license: openrail language: - en tags: - Destiny - Ghost - Voice ---
xiaozaa/lora_model
xiaozaa
"2023-06-29T22:32:41Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T22:29:17Z"
Entry not found
Propeanutbutter/Mikiman
Propeanutbutter
"2023-06-29T22:30:48Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T22:30:48Z"
Entry not found
ouchyouchuu/test
ouchyouchuu
"2023-06-29T22:32:05Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T22:32:05Z"
Entry not found
Dr-Ivo-Robotnik/Stephen_Fry
Dr-Ivo-Robotnik
"2023-06-29T22:53:48Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T22:35:30Z"
Entry not found
Khushnur/t5-base-end2end-questions-generation_aug
Khushnur
"2023-06-29T22:37:31Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T22:37:31Z"
Entry not found
clydemystik325/ClydesRVCModels
clydemystik325
"2023-11-04T17:03:23Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T22:37:42Z"
--- license: openrail ---
KingKazma/xsum_gpt2_lora_500_6_20000_8_e0_s444_v1
KingKazma
"2023-06-29T22:39:49Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T22:39:48Z"
Entry not found
HyperDeath/Kobeni
HyperDeath
"2023-06-29T22:57:03Z"
0
0
null
[ "license:gpl-3.0", "region:us" ]
null
"2023-06-29T22:44:53Z"
--- license: gpl-3.0 ---
DineshMeka/PICTEST
DineshMeka
"2023-06-29T22:45:06Z"
0
0
null
[ "region:us" ]
null
"2023-06-29T22:45:06Z"
Entry not found
lucar1126/stable-cpt
lucar1126
"2023-06-30T00:42:17Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T22:47:42Z"
--- license: openrail ---
circulus/canvers-anime-v3.5
circulus
"2023-06-29T23:32:14Z"
0
0
diffusers
[ "diffusers", "safetensors", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-06-29T22:48:10Z"
--- license: gpl-3.0 ---
circulus/canvers-cartoon-v3.5
circulus
"2023-06-29T23:36:26Z"
0
0
diffusers
[ "diffusers", "safetensors", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-06-29T22:48:29Z"
--- license: gpl-3.0 ---
circulus/canvers-story-v3.5
circulus
"2023-06-29T23:27:06Z"
0
0
diffusers
[ "diffusers", "safetensors", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-06-29T22:48:59Z"
--- license: gpl-3.0 ---
BaconZz/Zibidi
BaconZz
"2023-07-06T02:18:11Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T22:50:35Z"
--- license: openrail ---
jlpan/starcoder-finetuned-stackExchange
jlpan
"2023-06-29T22:51:33Z"
0
1
null
[ "region:us" ]
null
"2023-06-29T22:51:33Z"
Entry not found
Khushnur/t5-base-end2end-questions-generation_eli_squad_aug_v1
Khushnur
"2023-06-30T00:32:35Z"
0
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:eli5_cleaned_datav3_60k", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-06-29T22:53:41Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - eli5_cleaned_datav3_60k model-index: - name: t5-base-end2end-questions-generation_eli_squad_aug_v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-end2end-questions-generation_eli_squad_aug_v1 This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the eli5_cleaned_datav3_60k dataset. It achieves the following results on the evaluation set: - Loss: 2.3264 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.2646 | 0.25 | 100 | 2.4396 | | 2.1168 | 0.49 | 200 | 2.4056 | | 2.0807 | 0.74 | 300 | 2.3825 | | 2.0393 | 0.98 | 400 | 2.3653 | | 1.9759 | 1.23 | 500 | 2.3547 | | 1.9567 | 1.47 | 600 | 2.3453 | | 1.9397 | 1.72 | 700 | 2.3405 | | 1.9497 | 1.96 | 800 | 2.3349 | | 1.8961 | 2.21 | 900 | 2.3318 | | 1.9075 | 2.46 | 1000 | 2.3290 | | 1.9056 | 2.7 | 1100 | 2.3269 | | 1.8959 | 2.95 | 1200 | 2.3264 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
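A minimal usage sketch for question generation with this checkpoint (the `generate questions:` prefix is an assumption; the exact prompt format depends on how the training data was preprocessed):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Khushnur/t5-base-end2end-questions-generation_eli_squad_aug_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The input prefix below is an assumption, not documented in this card.
text = "generate questions: The Eiffel Tower was completed in 1889 and stands in Paris."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```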
TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GGML
TheBloke
"2023-06-29T23:14:46Z"
0
6
null
[ "license:other", "region:us" ]
null
"2023-06-29T22:57:00Z"
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Manticore 13B Chat Pyg Guanaco GGML These files are GGML format model files for [Manticore 13B Chat Pyg Guanaco](https://huggingface.co/Monero/Manticore-13b-Chat-Pyg-Guanaco). These are SuperHOT GGMLs with an increased context length. SuperHOT is a new system that employs RoPE to expand context beyond what was originally possible for a model. It was discovered and developed by [kaiokendev](https://huggingface.co/kaiokendev). In order to use the increased context length, you can presently use: * [KoboldCpp](https://github.com/LostRuins/koboldcpp) - [release 1.33](https://github.com/LostRuins/koboldcpp/releases/tag/v1.33) or later. Support is also expected to come to llama.cpp, however it is still being worked on and there is currently no ETA for that. To use the increased context with KoboldCpp and (when supported) llama.cpp, simply use `--contextsize` to set the desired context, eg `--contextsize 4096` or `--contextsize 8192`. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Manticore-13B-Chat-Pyg-Guanaco-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Monero/Manticore-13b-Chat-Pyg-Guanaco) <!-- compatibility_ggml start --> ## Compatibility These GGMLs will work with any llama.cpp-compatible GGML client that supports k-quants. However the increased context length won't work without specific support. See the note in the introduction for details on using increased context. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. 
Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type. Refer to the Provided Files table below to see what files use which methods, and how. <!-- compatibility_ggml end --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | manticore-13b-chat-pyg-guanaco-superhot-8k.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. | | manticore-13b-chat-pyg-guanaco-superhot-8k.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | manticore-13b-chat-pyg-guanaco-superhot-8k.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K | | manticore-13b-chat-pyg-guanaco-superhot-8k.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors | | manticore-13b-chat-pyg-guanaco-superhot-8k.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K | | manticore-13b-chat-pyg-guanaco-superhot-8k.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors | | manticore-13b-chat-pyg-guanaco-superhot-8k.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K | | manticore-13b-chat-pyg-guanaco-superhot-8k.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors | | manticore-13b-chat-pyg-guanaco-superhot-8k.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ## How to run in `koboldcpp` On Linux I use the following command line to launch the KoboldCpp UI with OpenCL aceleration and a context size of 4096: ``` python ./koboldcpp.py --stream --unbantokens --threads 8 --usecublas 100 manticore-13b-chat-pyg-guanaco-superhot-8k.ggmlv3.q5_0.bin ``` Change `--gpulayers 100` to the number of layers you want/are able to offload to the GPU. Remove it if you don't have GPU acceleration. For OpenCL acceleration, change `--usecublas` to `--useclblast 0 0`. You may need to change the second `0` to `1` if you have both an iGPU and a discrete GPU. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. 
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: Manticore 13B Chat Pyg Guanaco Manticore-13b-Chat-Pyg with the Guanaco 13b qLoRa from TimDettmers applied
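The SuperHOT training details above (rank 4, alpha 8, no dropout, no bias, q/k/v/o projection modules) map naturally onto a `peft` LoRA configuration. A sketch of roughly equivalent settings (illustrative, not the training script that was actually used):

```python
from peft import LoraConfig

# Roughly mirrors the LoRA hyperparameters listed in the SuperHOT training details.
lora_config = LoraConfig(
    r=4,
    lora_alpha=8,
    lora_dropout=0.0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
print(lora_config)
```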
zeerowiibu/WiibuRVCCollection
zeerowiibu
"2023-09-16T16:09:17Z"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-06-29T22:57:30Z"
--- license: openrail ---
hseokool/vicuna-13b-1.1-230623-02
hseokool
"2023-06-29T23:00:35Z"
0
0
peft
[ "peft", "region:us" ]
null
"2023-06-29T23:00:32Z"
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0
TheBloke/h2ogpt-research-oasst1-llama-65B-GGML
TheBloke
"2023-06-30T09:17:18Z"
0
11
null
[ "license:other", "region:us" ]
null
"2023-06-29T23:02:43Z"
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # H2O's H2OGPT Research OASST1 LLaMa 65B GGML These files are GGML format model files for [H2O's H2OGPT Research OASST1 LLaMa 65B](https://huggingface.co/h2oai/h2ogpt-research-oasst1-llama-65b). GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as: * [text-generation-webui](https://github.com/oobabooga/text-generation-webui) * [KoboldCpp](https://github.com/LostRuins/koboldcpp) * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) * [ctransformers](https://github.com/marella/ctransformers) ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/h2ogpt-research-oasst1-llama-65B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/h2ogpt-research-oasst1-llama-65B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/h2oai/h2ogpt-research-oasst1-llama-65b) ## Prompt template ``` <human>: prompt <bot>: ``` <!-- compatibility_ggml start --> ## Compatibility ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0` I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`. These are guaranteed to be compatbile with any UIs, tools and libraries released since late May. ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K` These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`. They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt. ## Explanation of the new k-quant methods The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. 
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q2_K.bin | q2_K | 2 | 27.45 GB | 29.95 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 34.65 GB | 37.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 31.50 GB | 34.00 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 28.16 GB | 30.66 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q4_0.bin | q4_0 | 4 | 36.73 GB | 39.23 GB | Original llama.cpp quant method, 4-bit. |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q4_1.bin | q4_1 | 4 | 40.81 GB | 43.31 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 39.35 GB | 41.85 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 36.80 GB | 39.30 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q5_0.bin | q5_0 | 5 | 44.89 GB | 47.39 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB | 51.47 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.24 GB | 48.74 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.92 GB | 47.42 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q6_K.bin | q6_K | 6 | 53.56 GB | 56.06 GB | New k-quant method. Uses GGML_TYPE_Q6_K - 6-bit quantization - for all tensors |
| h2ogpt-research-oasst1-llama-65b.ggmlv3.q8_0.bin | q8_0 | 8 | 69.37 GB | 71.87 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
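As a rough guide to how offloading shifts that footprint, the sketch below splits the file size between RAM and VRAM in proportion to the number of layers offloaded (the model has 80 transformer layers, per the architecture listing further down). The proportional split and the ~2.5 GB overhead are simplifying assumptions for illustration, not measurements.

```python
# Back-of-the-envelope RAM/VRAM split when offloading layers with -ngl.
# Assumptions (not measurements): usage scales linearly with offloaded layers,
# and the "Max RAM required" column is roughly file size + 2.5 GB of overhead.

def estimate_split(file_size_gb, n_gpu_layers, total_layers=80, overhead_gb=2.5):
    gpu_fraction = min(n_gpu_layers, total_layers) / total_layers
    vram_gb = file_size_gb * gpu_fraction
    ram_gb = file_size_gb * (1 - gpu_fraction) + overhead_gb
    return round(ram_gb, 2), round(vram_gb, 2)

# e.g. the q3_K_S file (28.16 GB) with 40 of 80 layers offloaded:
print(estimate_split(28.16, n_gpu_layers=40))   # (16.58, 14.08)
```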
### q6_K and q8_0 files require expansion from archive

**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q6_K and q8_0 files as multi-part ZIP files. They are not compressed, they are just for storing a .bin file in two parts.

### q6_K
Please download:
* `h2ogpt-research-oasst1-llama-65b.ggmlv3.q6_K.zip`
* `h2ogpt-research-oasst1-llama-65b.ggmlv3.q6_K.z01`

### q8_0
Please download:
* `h2ogpt-research-oasst1-llama-65b.ggmlv3.q8_0.zip`
* `h2ogpt-research-oasst1-llama-65b.ggmlv3.q8_0.z01`

Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:
```
sudo apt update -y && sudo apt install 7zip
7zz x h2ogpt-research-oasst1-llama-65b.ggmlv3.q6_K.zip
```

Once the `.bin` is extracted you can delete the `.zip` and `.z01` files.

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m h2ogpt-research-oasst1-llama-65b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<human>: write a story about llamas\n<bot>:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.

If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
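## How to run from Python using `llama-cpp-python`

The example below is a rough sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), one of the libraries listed above. Argument names have changed between releases, so treat this as a starting point and check the library's documentation for the version you have installed.

```python
from llama_cpp import Llama

# Paths, thread counts and layer counts here are illustrative - adjust for your system.
llm = Llama(
    model_path="./h2ogpt-research-oasst1-llama-65b.ggmlv3.q5_0.bin",
    n_ctx=2048,       # the model's maximum context length
    n_threads=10,     # number of physical CPU cores to use
    n_gpu_layers=32,  # layers to offload to GPU; set to 0 if you have no GPU acceleration
)

prompt = "<human>: Write a story about llamas\n<bot>:"
output = llm(
    prompt,
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
    stop=["<human>:"],  # stop generating when the model starts a new human turn
)
print(output["choices"][0]["text"])
```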
<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost, Nathan LeClaire, Iucharbius, Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex, terasurfer, Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.

Thank you to all my generous patrons and donaters!
<!-- footer end -->

# Original model card: H2O's H2OGPT Research OASST1 LLaMa 65B

# h2oGPT Model Card
## Summary

H2O.ai's `h2ogpt-research-oasst1-llama-65b` is a 65 billion parameter instruction-following large language model (NOT licensed for commercial use).

- Base model: [decapoda-research/llama-65b-hf](https://huggingface.co/decapoda-research/llama-65b-hf)
- Fine-tuning dataset: [h2oai/openassistant_oasst1_h2ogpt_graded](https://huggingface.co/datasets/h2oai/openassistant_oasst1_h2ogpt_graded)
- Data-prep and fine-tuning code: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
- Training logs: [zip](https://huggingface.co/h2oai/h2ogpt-research-oasst1-llama-65b/blob/main/llama-65b-hf.h2oaiopenassistant_oasst1_h2ogpt_graded.1_epochs.113510499324f0f007cbec9d9f1f8091441f2469.3.zip)

## Chatbot

- Run your own chatbot: [H2O.ai GitHub](https://github.com/h2oai/h2ogpt)
[![H2O.ai GitHub](https://user-images.githubusercontent.com/6147661/232930822-e7170e4d-8aa1-4f7a-ad70-ece9cdd8b0cb.png)](https://github.com/h2oai/h2ogpt)

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the following libraries installed.

```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.1
pip install einops==0.6.1
```

```python
import torch
from transformers import pipeline, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-research-oasst1-llama-65b", padding_side="left")
generate_text = pipeline(model="h2oai/h2ogpt-research-oasst1-llama-65b", tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", prompt_type="human_bot")

res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```

Alternatively, if you prefer to not use `trust_remote_code=True` you can download [h2oai_pipeline.py](https://huggingface.co/h2oai/h2ogpt-research-oasst1-llama-65b/blob/main/h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:

```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-research-oasst1-llama-65b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("h2oai/h2ogpt-research-oasst1-llama-65b", torch_dtype=torch.bfloat16, device_map="auto")
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer, prompt_type="human_bot")

res = generate_text("Why is drinking water so healthy?", max_new_tokens=100)
print(res[0]["generated_text"])
```
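If you would rather call `model.generate` directly instead of going through the custom pipeline, the snippet below applies the same `<human>:`/`<bot>:` prompt format by hand. It is a sketch that reuses the `model` and `tokenizer` objects from the previous example; the stopping behaviour of the real `H2OTextGenerationPipeline` may differ.

```python
# Manual prompting with the <human>:/<bot>: format used by this model
# (a sketch; the custom pipeline above handles this formatting for you).
prompt = "<human>: Why is drinking water so healthy?\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)

# Strip the prompt tokens so only the model's answer is printed.
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```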
## Model Architecture

```
LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(32000, 8192, padding_idx=31999)
    (layers): ModuleList(
      (0-79): 80 x LlamaDecoderLayer(
        (self_attn): LlamaAttention(
          (q_proj): Linear(in_features=8192, out_features=8192, bias=False)
          (k_proj): Linear(in_features=8192, out_features=8192, bias=False)
          (v_proj): Linear(in_features=8192, out_features=8192, bias=False)
          (o_proj): Linear(in_features=8192, out_features=8192, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): Linear(in_features=8192, out_features=22016, bias=False)
          (down_proj): Linear(in_features=22016, out_features=8192, bias=False)
          (up_proj): Linear(in_features=8192, out_features=22016, bias=False)
          (act_fn): SiLUActivation()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
  )
  (lm_head): Linear(in_features=8192, out_features=32000, bias=False)
)
```

## Model Configuration

```json
LlamaConfig {
  "_name_or_path": "h2oai/h2ogpt-research-oasst1-llama-65b",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 0,
  "custom_pipelines": {
    "text-generation": {
      "impl": "h2oai_pipeline.H2OTextGenerationPipeline",
      "pt": "AutoModelForCausalLM"
    }
  },
  "eos_token_id": 1,
  "hidden_act": "silu",
  "hidden_size": 8192,
  "initializer_range": 0.02,
  "intermediate_size": 22016,
  "max_position_embeddings": 2048,
  "max_sequence_length": 2048,
  "model_type": "llama",
  "num_attention_heads": 64,
  "num_hidden_layers": 80,
  "pad_token_id": -1,
  "rms_norm_eps": 1e-05,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.30.1",
  "use_cache": true,
  "vocab_size": 32000
}
```

## Model Validation

Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).

TBD

## Disclaimer

Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
jdawnduan/Reinforce-Pixelcopter-PLE-v0
jdawnduan
"2023-06-30T00:33:45Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-06-29T23:06:23Z"
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 22.60 +/- 15.62
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction