Update README.md
README.md
---

# ArtyLLaMA: Empowering AI Creativity in the Open Source Community 🦙🎨

ArtyLLaMA is an innovative chat interface for Open Source Large Language Models, leveraging the power of Ollama, OpenAI, and Anthropic. It features dynamic content generation and display through an "Artifacts-like" system, making AI-assisted creativity more accessible and interactive.
## Project Description

ArtyLLaMA is not a model itself, but a framework that allows users to interact with various language models. It provides a user-friendly interface for generating creative content, code, and visualizations using state-of-the-art language models.
### Key Features

- 🦙 **Multi-Provider Integration**: Seamless support for Ollama, OpenAI, and Anthropic models
- 🎨 **Dynamic Artifact Generation**: Create and display content artifacts during chat interactions
- 🖥️ **Real-time HTML Preview**: Instantly visualize HTML artifacts with interactive canvas
- 🔄 **Multi-Model Support**: Choose from multiple language models across providers
- 📱 **Responsive Design**: Mobile-friendly interface built with Tailwind CSS
- 🌙 **Dark Mode**: Easy on the eyes with a default dark theme
- 🔒 **Local Inference**: Run models locally for privacy and customization
- 🖋️ **Code Syntax Highlighting**: Enhanced readability for various programming languages
- 📐 **SVG Rendering Support**: Display AI-created vector graphics
- 🌐 **3D Visualization**: Utilize Three.js for 3D visualizations and simulations
- 🔐 **User Authentication**: JWT-based system for user registration and login
- 📜 **Personalized Chat History**: Store and retrieve messages based on user ID
- 🔍 **Semantic Search**: Cross-model semantic search capabilities in chat history
- 📚 **Dynamic Embedding Collections**: Support for multiple embedding models with automatic collection creation
## Intended Use
ArtyLLaMA is designed for developers, researchers, and creative professionals who want to:

- Explore the capabilities of various language models
- Generate and iterate on creative content, including code, designs, and written text
- Prototype AI-assisted applications and workflows
- Experiment with local and cloud-based AI inference
## Limitations
- Local setup requires installation of Ollama for certain features
- Performance depends on the user's hardware capabilities or chosen cloud provider
- Does not include built-in content moderation (users should implement their own safeguards)
## Ethical Considerations
Users of ArtyLLaMA should be aware of:

- Potential biases present in the underlying language models
- The need for responsible use and content generation
- Privacy implications of using AI-generated content and storing chat history
## Technical Specifications
- **Frontend**: React with Tailwind CSS
- **Backend**: Node.js with Express.js
- **Required Libraries**: React, Express.js, Tailwind CSS, Three.js, and others (see package.json)
- **Supported Model Formats**: Those supported by Ollama, OpenAI, and Anthropic
- **Hardware Requirements**: Varies based on the chosen model and deployment method
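The semantic-search and embedding features listed earlier boil down to comparing embedding vectors. A minimal sketch of ranking stored messages by cosine similarity (illustrative only — the actual embedding models, dimensions, and storage in ArtyLLaMA are configurable, and the names below are made up for the example):

```javascript
// Rank stored chat messages by cosine similarity to a query embedding.
// Illustrative sketch — in practice the vectors come from an embedding model.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function search(queryEmbedding, messages, topK = 3) {
  return messages
    .map((m) => ({ ...m, score: cosineSimilarity(queryEmbedding, m.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}

// Toy 3-dimensional embeddings; real models produce hundreds of dimensions.
const messages = [
  { text: 'How do I render SVG?', embedding: [0.9, 0.1, 0.0] },
  { text: 'Dark mode is great',   embedding: [0.0, 0.2, 0.9] },
];
console.log(search([1, 0, 0], messages, 1)[0].text); // 'How do I render SVG?'
```

Because each embedding model produces vectors in its own space, vectors from different models cannot be compared directly — which is why per-model embedding collections (as in the feature list above) are needed.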
## Getting Started
1. Clone the repository: `git clone https://github.com/kroonen/ArtyLLaMA.git`
2. Install dependencies: `npm install`
3. Set up environment variables (see README for details on API keys)
4. Run the application: `npm run dev`
5. Access the interface at `http://localhost:3000`
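Step 3 above requires provider credentials. A hypothetical `.env` sketch — the variable names here are assumptions for illustration, not ArtyLLaMA's documented keys, so check the repository README for the exact names; only the providers you actually use need keys:

```shell
# Hypothetical .env sketch — variable names are assumptions, not documented keys
OPENAI_API_KEY=sk-...                  # needed only for OpenAI models
ANTHROPIC_API_KEY=sk-ant-...           # needed only for Anthropic models
OLLAMA_API_URL=http://localhost:11434  # default local Ollama endpoint
```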

For more detailed instructions, including Docker setup, visit our [GitHub repository](https://github.com/kroonen/ArtyLLaMA).
## License

For questions, feedback, or collaborations, please reach out to:

- Email: robin@kroonen.ai
- Twitter: [@rob_x_ai](https://x.com/rob_x_ai)

We welcome contributions and feedback from the community, subject to the terms of our license!