Update README.md
README.md
CHANGED
@@ -16,6 +16,46 @@ tags:
This model was converted to GGUF format from [`DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B`](https://huggingface.co/DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B) for more details on the model.
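
If you prefer to reproduce the conversion locally instead of using the GGUF-my-repo space, a minimal sketch with llama.cpp's own tooling is shown below. The output filenames and the Q4_K_M quantization type are illustrative assumptions, not values taken from this repository.

```bash
# Sketch: local GGUF conversion with llama.cpp (filenames and quant type are assumptions).
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt

# Fetch the original model from the Hub.
huggingface-cli download DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B --local-dir OneLLM-Doey-V1-Llama-3.2-3B

# Convert to an F16 GGUF.
python llama.cpp/convert_hf_to_gguf.py OneLLM-Doey-V1-Llama-3.2-3B \
  --outfile onellm-doey-v1-llama-3.2-3b-f16.gguf --outtype f16

# Optionally quantize to a smaller type (requires a built llama.cpp; binary path is illustrative).
llama.cpp/build/bin/llama-quantize \
  onellm-doey-v1-llama-3.2-3b-f16.gguf onellm-doey-v1-llama-3.2-3b-q4_k_m.gguf Q4_K_M
```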

# **How to Use DoeyLLM / OneLLM-Doey-V1-Llama-3.2-3B-Instruct**

This guide explains how to use the **DoeyLLM** model on both app (iOS) and PC platforms.

---

## **App (iOS): Use with OneLLM**

OneLLM brings versatile large language models (LLMs) to your device, including Llama, Gemma, Qwen, Mistral, and more. Enjoy private, offline GPT and AI tools tailored to your needs.

With OneLLM, you can experience the capabilities of leading-edge language models directly on your device, all without an internet connection. Get fast, reliable, and intelligent responses while keeping your data secure through local processing.

### **Quick Start for iOS**

Follow these steps to run the **DoeyLLM** model in the OneLLM app:

1. **Download OneLLM**
   Get the app from the [App Store](https://apps.apple.com/us/app/onellm-private-ai-gpt-llm/id6737907910) and install it on your iOS device.

2. **Load the DoeyLLM Model**
   Use the OneLLM interface to load the DoeyLLM model directly into the app:
   - Navigate to the **Model Library**.
   - Search for `DoeyLLM`.
   - Select the model and tap **Download** to store it locally on your device.

3. **Start Conversing**
   Once the model is loaded, you can begin interacting with it through the app's chat interface. For example:
   - Tap the **Chat** tab.
   - Type your question or prompt, such as:
     > "Explain the significance of AI in education."
   - Receive real-time, intelligent responses generated locally.

### **Key Features of OneLLM**

- **Versatile Models**: Supports various LLMs, including Llama, Gemma, and Qwen.
- **Private & Secure**: All processing occurs locally on your device, ensuring data privacy.
- **Offline Capability**: Use the app without requiring an internet connection.
- **Fast Performance**: Optimized for mobile devices, delivering low-latency responses.

For more details or support, visit the [OneLLM App Store page](https://apps.apple.com/us/app/onellm-private-ai-gpt-llm/id6737907910).

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)
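
Then download and run the model with the llama.cpp CLI or server. This is a minimal sketch: the `--hf-repo` value and the GGUF filename below are placeholders, since they depend on the name of this repository and the quantization it contains.

```bash
# Install llama.cpp (provides the llama-cli and llama-server binaries).
brew install llama.cpp

# CLI: fetch the GGUF from the Hub and run a one-off prompt.
# The repo name and filename below are placeholders, not values from this model card.
llama-cli --hf-repo <your-username>/OneLLM-Doey-V1-Llama-3.2-3B-GGUF \
  --hf-file onellm-doey-v1-llama-3.2-3b-q4_k_m.gguf \
  -p "Explain the significance of AI in education."

# Server: expose an OpenAI-compatible HTTP endpoint (default port 8080).
llama-server --hf-repo <your-username>/OneLLM-Doey-V1-Llama-3.2-3B-GGUF \
  --hf-file onellm-doey-v1-llama-3.2-3b-q4_k_m.gguf -c 2048
```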