# Deleuze-Qwen-1.5B

A fine-tuned language model specialized in the philosophy of Gilles Deleuze, based on DeepSeek-R1-Distill-Qwen-1.5B.
## Model Description
This model was fine-tuned on a corpus of Gilles Deleuze's philosophical works using LoRA (Low-Rank Adaptation) to specialize it in understanding and generating content related to Deleuzian concepts and philosophy.
## Base Model
- Name: DeepSeek-R1-Distill-Qwen-1.5B
- Type: Causal Language Model
- Size: 1.5 billion parameters
## Training Data
The model was trained on a dataset compiled from various books and texts by Gilles Deleuze, including:
- *A Thousand Plateaus*
- *Difference and Repetition*
- *Logic of Sense*
- *Anti-Oedipus*
- *Cinema 1* and *Cinema 2*
- Other philosophical works
## Training Procedure

- Method: LoRA (Low-Rank Adaptation) fine-tuning
- LoRA parameters:
  - Rank: 64
  - Alpha: 128
  - Dropout: 0.05
  - Target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- Training parameters:
  - Learning rate: 5.0e-5
  - Epochs: 3
  - Batch size: 2 (with gradient accumulation steps: 4)
  - Sequence length: 2048
  - Optimizer: AdamW
  - LR scheduler: cosine
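Expressed with `peft`/`transformers`-style field names (an assumption — the actual training script is not published with this card), the hyperparameters above correspond to a configuration like the following sketch, which also derives two quantities implied by the numbers:

```python
# Illustrative restatement of the listed hyperparameters; field names follow
# common peft/transformers conventions, not a published training script.
lora_config = {
    "r": 64,                # LoRA rank
    "lora_alpha": 128,      # LoRA alpha
    "lora_dropout": 0.05,
    "target_modules": [
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
}

training_config = {
    "learning_rate": 5.0e-5,
    "num_train_epochs": 3,
    "per_device_train_batch_size": 2,
    "gradient_accumulation_steps": 4,
    "max_seq_length": 2048,
    "optim": "adamw_torch",
    "lr_scheduler_type": "cosine",
}

# LoRA updates are scaled by alpha / rank, so the adapters here contribute
# with a scaling factor of 2.0.
scaling = lora_config["lora_alpha"] / lora_config["r"]

# Gradient accumulation multiplies the per-device batch size, so each
# optimizer step sees an effective batch of 8 sequences.
effective_batch = (training_config["per_device_train_batch_size"]
                   * training_config["gradient_accumulation_steps"])
print(scaling, effective_batch)
```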
## Intended Use
This model is intended for:
- Research on Deleuze's philosophy
- Generating explanations of Deleuzian concepts
- Exploring philosophical ideas through the lens of Deleuze's work
- Educational purposes related to continental philosophy
## Limitations
- The model may occasionally generate content that sounds plausible but is philosophically inaccurate
- It has limited knowledge of philosophical works published after its training data cutoff
- The model may struggle with very specific or obscure references in Deleuze's work
- As with all language models, it may exhibit biases present in the training data
## Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("wisdomfunction/deleuze-qwen-1.5b")
tokenizer = AutoTokenizer.from_pretrained("wisdomfunction/deleuze-qwen-1.5b")

# Example prompt
prompt = "What are the key concepts in Deleuze's philosophy?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a response (sampling keeps output varied; lower the temperature
# for more deterministic answers)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
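Because the base model is a DeepSeek-R1 distillation, generations may begin with a chain-of-thought segment wrapped in `<think>...</think>` tags (an assumption — the fine-tune may or may not preserve this behavior). If it does, a small post-processing helper can strip the reasoning trace and keep only the final answer:

```python
import re

def strip_reasoning(text: str) -> str:
    """Remove a DeepSeek-R1-style <think>...</think> block, if present."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# Hypothetical model output used only to demonstrate the helper
sample = ("<think>The user asks about key concepts.</think>"
          "Key concepts include the rhizome, becoming, and difference.")
print(strip_reasoning(sample))
# → Key concepts include the rhizome, becoming, and difference.
```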
## Citation

If you use this model in your research, please cite:

```bibtex
@misc{deleuze-qwen-1.5b,
  author       = {wisdomfunction},
  title        = {Deleuze-Qwen-1.5B: A Fine-tuned Language Model for Deleuzian Philosophy},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/wisdomfunction/deleuze-qwen-1.5b}}
}
```