---
base_model: macadeliccc/Samantha-Qwen-2-7B
datasets:
- macadeliccc/opus_samantha
- HuggingfaceH4/ultrachat_200k
- teknium/OpenHermes-2.5
- Sao10K/Claude-3-Opus-Instruct-15K
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
---

# Samantha Qwen2 7B-GGUF

This is a quantized version of [macadeliccc/Samantha-Qwen-2-7B](https://huggingface.co/macadeliccc/Samantha-Qwen-2-7B), created using llama.cpp.

# Model Description

Trained on 2x RTX 4090 using QLoRA and FSDP. The LoRA adapter is available separately: [macadeliccc/Samantha-Qwen2-7B-LoRa](https://huggingface.co/macadeliccc/Samantha-Qwen2-7B-LoRa)

## Launch Using vLLM

```bash
python -m vllm.entrypoints.openai.api_server \
    --model macadeliccc/Samantha-Qwen-2-7B \
    --chat-template ./examples/template_chatml.jinja
```

```python
from openai import OpenAI

# Set OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

chat_response = client.chat.completions.create(
    model="macadeliccc/Samantha-Qwen-2-7B",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."},
    ],
)
print("Chat response:", chat_response)
```

## Prompt Template

```
<|im_start|>system
You are a friendly assistant.<|im_end|>
<|im_start|>user
What is the capital of France?<|im_end|>
<|im_start|>assistant
The capital of France is Paris.
```
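## Running the GGUF Quants

Since this repository hosts the GGUF files themselves, they can also be run locally. Below is a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the filename is a placeholder, so substitute whichever quant you download, and `chat_format="chatml"` matches the prompt template above.

```python
from llama_cpp import Llama

# Placeholder filename: substitute the GGUF quant downloaded from this repo.
llm = Llama(
    model_path="./samantha-qwen-2-7b.Q4_K_M.gguf",
    n_ctx=2048,            # matches the sequence_len used in training
    chat_format="chatml",  # matches the prompt template above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a friendly assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```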
[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`

```yaml
base_model: Qwen/Qwen-7B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: macadeliccc/opus_samantha
    type: sharegpt
    field: conversations
    conversation: chatml
  - path: uncensored-ultrachat.json
    type: sharegpt
    field: conversations
    conversation: chatml
  - path: openhermes_200k.json
    type: sharegpt
    field: conversations
    conversation: chatml
  - path: opus_instruct.json
    type: sharegpt
    field: conversations
    conversation: chatml

chat_template: chatml
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/lora-out

sequence_len: 2048
sample_packing: false
pad_to_sequence_len:

adapter: qlora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention:

warmup_steps: 250
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```

</details>
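The config above trains a QLoRA adapter rather than full weights; the adapter is published separately (linked in the model description). Below is a minimal sketch, assuming the standard `transformers` + `peft` loading path, of applying that adapter to the base model named in the config:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model as named in the axolotl config above.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # the config sets trust_remote_code: true
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B", trust_remote_code=True)

# Apply the published LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base, "macadeliccc/Samantha-Qwen2-7B-LoRa")

# Optionally fold the adapter into the base weights for standalone inference.
model = model.merge_and_unload()
```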