---
title: LLMServer
emoji: 👹
colorFrom: indigo
colorTo: purple
sdk: docker
pinned: false
---

# LLM Server

This repository contains a FastAPI-based server that serves open-source Large Language Models from Hugging Face.

## Getting Started

These instructions will help you set up and run the project on your local machine.

### Prerequisites

- Python 3.10 or higher
- Git

### Cloning the Repository

Choose one of the following methods to clone the repository:

#### HTTPS

```bash
git clone https://huggingface.co/spaces/TeamGenKI/LLMServer
cd LLMServer
```

#### SSH

```bash
git clone git@hf.co:spaces/TeamGenKI/LLMServer
cd LLMServer
```

### Setting Up the Virtual Environment

#### Windows

```
# Create virtual environment
python -m venv myenv

# Activate virtual environment
myenv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

#### Linux

```bash
# Create virtual environment
python -m venv myenv

# Activate virtual environment
source myenv/bin/activate

# Install dependencies
pip install -r requirements.txt
```

#### macOS

```bash
# Create virtual environment
python3 -m venv myenv

# Activate virtual environment
source myenv/bin/activate

# Install dependencies
pip3 install -r requirements.txt
```
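
On any platform, you can sanity-check the install before launching the server. This one-liner assumes `fastapi` and `uvicorn` are among the dependencies in `requirements.txt`, which is a guess based on the run command below rather than a confirmed list:

```bash
# Should print two version strings; an ImportError means the install failed
python -c "import fastapi, uvicorn; print(fastapi.__version__, uvicorn.__version__)"
```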

## Running the Application

Once you have set up your environment and installed the dependencies, you can start the FastAPI application:

```bash
uvicorn main.app:app --reload
```

The API will be available at http://localhost:8000 (uvicorn's default port, matching the documentation links below).
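
With the server up, you can exercise it from a second terminal. The route and payload below are hypothetical placeholders (the real endpoints live in `main/routes.py`); use the interactive docs described in the next section to find the actual schema:

```bash
# Hypothetical endpoint and payload; replace with the real route
# and request body from main/routes.py or the /docs page
curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, world!"}'
```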

## API Documentation

Once the application is running, you can access:

- Interactive API documentation (Swagger UI) at http://localhost:8000/docs
- Alternative API documentation (ReDoc) at http://localhost:8000/redoc
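
FastAPI also exposes the raw OpenAPI schema at http://localhost:8000/openapi.json, which is handy for listing every registered route from the command line (the `jq` filter is optional and requires `jq` to be installed):

```bash
# Print all registered paths from the auto-generated OpenAPI schema
curl -s http://localhost:8000/openapi.json | jq '.paths | keys'
```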

## Deactivating the Virtual Environment

When you're done working on the project, you can deactivate the virtual environment:

```bash
deactivate
```

## Contributing

[Add contributing guidelines here]

## License

[Add license information here]

## Project Structure

```text
.
├── Dockerfile
├── main
│   ├── api.py
│   ├── app.py
│   ├── config.yaml
│   ├── env_template
│   ├── __init__.py
│   ├── logs
│   │   └── llm_api.log
│   ├── models
│   ├── __pycache__
│   │   ├── api.cpython-39.pyc
│   │   ├── app.cpython-39.pyc
│   │   ├── __init__.cpython-39.pyc
│   │   └── routes.cpython-39.pyc
│   ├── routes.py
│   ├── test_locally.py
│   └── utils
│       ├── errors.py
│       ├── helpers.py
│       ├── __init__.py
│       ├── logging.py
│       ├── __pycache__
│       │   ├── helpers.cpython-39.pyc
│       │   ├── __init__.cpython-39.pyc
│       │   ├── logging.cpython-39.pyc
│       │   └── validation.cpython-39.pyc
│       └── validation.py
├── README.md
└── requirements.txt
```