---
title: Smart Retrieval API
emoji: 📝
colorFrom: blue
colorTo: indigo
sdk: docker
python_version: 3.11.4
app_port: 8000
pinned: false
---


# Smart Retrieval

[![Status](https://img.shields.io/badge/status-active-success.svg)]() [![GitHub Issues](https://img.shields.io/github/issues/digitalbuiltenvironment/Smart-Retrieval.svg)](https://github.com/digitalbuiltenvironment/Smart-Retrieval/issues) [![GitHub Pull Requests](https://img.shields.io/github/issues-pr/digitalbuiltenvironment/Smart-Retrieval.svg)](https://github.com/digitalbuiltenvironment/Smart-Retrieval/pulls) [![License](https://img.shields.io/badge/license-MIT-blue.svg)](/LICENSE) [![Test Build and Deploy](https://github.com/digitalbuiltenvironment/Smart-Retrieval/actions/workflows/pipeline.yml/badge.svg)](https://github.com/digitalbuiltenvironment/Smart-Retrieval/actions/workflows/pipeline.yml)
---


A Large Language Model (LLM) powered platform for information retrieval.

## 📝 Table of Contents

- [About](#about)
- [Getting Started](#getting_started)
- [Deployment](#deployment)
- [Built Using](#built_using)
- [Contributing](#contributing)
- [Authors](#authors)
- [Acknowledgments](#acknowledgement)

## 🧐 About

Smart Retrieval is a platform for efficient and streamlined information retrieval, especially in the realm of legal and compliance documents. Powered by open-source Large Language Models (LLMs) and Retrieval Augmented Generation (RAG), it aims to enhance user experiences at JTC by addressing key challenges such as manual search inefficiencies and rigid file naming conventions, revolutionizing the way JTC employees access and comprehend crucial documents.

Project files were bootstrapped with [`create-llama`](https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama).

## 🏁 Getting Started

These instructions will get a copy of the project up and running on your local machine for development and testing purposes. See [deployment](#deployment) for notes on how to deploy the project on a live system.

1. Start the backend as described in the [backend README](./backend/README.md).
2. Run the frontend development server as described in the [frontend README](./frontend/README.md).
3. Open [http://localhost:3000](http://localhost:3000) in your browser to see the result.

## 🚀 Deployment

To deploy the project on a live system, see [DEPLOYMENT.md](./DEPLOYMENT.md).

## ⛏️ Built Using

- [NextJs](https://nextjs.org/) - Frontend web framework
- [Vercel AI](https://vercel.com/ai) - AI SDK for building AI-powered streaming text and chat UIs
- [NodeJs](https://nodejs.org/en/) - Frontend server environment
- [Python](https://python.org/) - Backend server environment
- [FastAPI](https://fastapi.tiangolo.com/) - Backend API web framework
- [LlamaIndex](https://www.llamaindex.ai/) - Data framework for LLMs

## 📑 Contributing

Contributions, issues and feature requests are welcome!
Read [CONTRIBUTING.md](./CONTRIBUTING.md) for details and the process for submitting pull requests.

## ✍️ Authors

- [@xkhronoz](https://github.com/xkhronoz)

See also the list of [contributors](https://github.com/digitalbuiltenvironment/Smart-Retrieval/contributors) who participated in this project.

## 🎉 Acknowledgements

- Hat tip to anyone whose code was used
- Inspiration
- References