Update README.md
This model was converted to GGUF format from [`DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B`](https://huggingface.co/DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoeyLLM/OneLLM-Doey-V1-Llama-3.2-3B) for more details on the model.
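The hosted GGUF-my-repo space handles the conversion automatically. For reference, a minimal sketch of the equivalent local workflow with llama.cpp's own tooling is shown below; the local paths and output filenames are illustrative and are not files shipped in this repo.

```bash
# Illustrative local equivalent of the GGUF-my-repo conversion (paths and filenames are placeholders)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the original Hugging Face checkpoint to a GGUF file (FP16)
python convert_hf_to_gguf.py /path/to/OneLLM-Doey-V1-Llama-3.2-3B \
    --outfile onellm-doey-v1-llama-3.2-3b-f16.gguf

# Optionally quantize it (requires building llama.cpp first, e.g. `cmake -B build && cmake --build build`)
./build/bin/llama-quantize onellm-doey-v1-llama-3.2-3b-f16.gguf \
    onellm-doey-v1-llama-3.2-3b-q4_k_m.gguf Q4_K_M
```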
# **How to Use DoeyLLM / OneLLM-Doey-V1-Llama-3.2-3B-Instruct**

This guide explains how to use the **DoeyLLM** model on both the iOS app and PC platforms.

---

## **App (iOS): Use with OneLLM**

OneLLM brings versatile large language models (LLMs) to your device: Llama, Gemma, Qwen, Mistral, and more. Enjoy private, offline GPT and AI tools tailored to your needs.

With OneLLM, experience the capabilities of leading-edge language models directly on your device, all without an internet connection. Get fast, reliable, and intelligent responses, while keeping your data secure with local processing.

### **Quick Start for iOS**

Follow these steps to load and use the **DoeyLLM** model with the OneLLM app:

1. **Download OneLLM**
   Get the app from the [App Store](https://apps.apple.com/us/app/onellm-private-ai-gpt-llm/id6737907910) and install it on your iOS device.

2. **Load the DoeyLLM Model**
   Use the OneLLM interface to load the DoeyLLM model directly into the app:
   - Navigate to the **Model Library**.
   - Search for `DoeyLLM`.
   - Select the model and tap **Download** to store it locally on your device.

3. **Start Conversing**
   Once the model is loaded, you can begin interacting with it through the app's chat interface. For example:
   - Tap the **Chat** tab.
   - Type your question or prompt, such as:
     > "Explain the significance of AI in education."
   - Receive real-time, intelligent responses generated locally.

### **Key Features of OneLLM**

- **Versatile Models**: Supports various LLMs, including Llama, Gemma, and Qwen.
- **Private & Secure**: All processing occurs locally on your device, ensuring data privacy.
- **Offline Capability**: Use the app without requiring an internet connection.
- **Fast Performance**: Optimized for mobile devices, delivering low-latency responses.

For more details or support, visit the [OneLLM App Store page](https://apps.apple.com/us/app/onellm-private-ai-gpt-llm/id6737907910).
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
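A minimal sketch of installing llama.cpp and running the model directly from the Hugging Face Hub; the `--hf-repo` and `--hf-file` values below are placeholders and should be replaced with this GGUF repo's actual name and quant filename.

```bash
# Install the llama.cpp CLI and server via Homebrew (macOS and Linux)
brew install llama.cpp

# Chat with the model straight from the Hugging Face Hub.
# NOTE: the repo name and .gguf filename below are placeholders for this repo's real values.
llama-cli --hf-repo <username>/OneLLM-Doey-V1-Llama-3.2-3B-GGUF \
  --hf-file onellm-doey-v1-llama-3.2-3b-q4_k_m.gguf \
  -p "Explain the significance of AI in education."

# Or serve an OpenAI-compatible HTTP API (listens on port 8080 by default)
llama-server --hf-repo <username>/OneLLM-Doey-V1-Llama-3.2-3B-GGUF \
  --hf-file onellm-doey-v1-llama-3.2-3b-q4_k_m.gguf \
  -c 2048
```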