---
license: mit
---

# EEG Forecasting with Llama 3.1-8B and Time-LLM

This repository contains the code and model for forecasting EEG signals by combining the quantized Llama 3.1-8B model from [Hugging Face](https://huggingface.co/akshathmangudi/llama3.1-8b-quantized) with a modified version of the [Time-LLM](https://github.com/KimMeen/Time-LLM) framework.

## Overview

This project leverages large language models (LLMs) for time-series forecasting, focusing on EEG data. Integrating Llama 3.1-8B lets us apply powerful sequence-modeling capabilities to predict future EEG signal patterns accurately and efficiently.

### Key Features

- **Quantized Llama 3.1-8B Model**: Uses a quantized version of Llama 3.1-8B to reduce computational requirements while maintaining performance.
- **Modified Time-LLM Framework**: Adapts the Time-LLM framework to EEG signal forecasting, enabling efficient processing of EEG time-series data.
- **Scalable and Flexible**: The model can be easily adapted to time-series forecasting tasks beyond EEG data.

## Getting Started

### Prerequisites

Before you begin, ensure you have the following installed:

- Python 3.8+
- PyTorch
- Transformers (Hugging Face)
- Time-LLM dependencies (see the original [Time-LLM repository](https://github.com/KimMeen/Time-LLM))

Download the quantized Llama 3.1-8B model from Hugging Face:

```bash
git lfs install
git clone https://huggingface.co/akshathmangudi/llama3.1-8b-quantized
```

### EEG Datasets

EEG datasets are listed in [this survey](https://github.com/ChiShengChen/EEG-Datasets); choose the dataset you want to try.

## Acknowledgments

- Hugging Face for the quantized Llama 3.1-8B model.
- The original Time-LLM repository for the time-series framework.
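To make the forecasting setup concrete, here is a minimal sketch of how an EEG channel might be split into input/target pairs for a Time-LLM-style model. The function name, window lengths, and synthetic signal below are illustrative assumptions, not code from this repository:

```python
import numpy as np

def make_windows(series, seq_len=96, pred_len=24, stride=1):
    """Split a 1-D signal into (input, target) forecasting pairs.

    Each input window of length `seq_len` is paired with the
    `pred_len` samples that immediately follow it.
    """
    inputs, targets = [], []
    for start in range(0, len(series) - seq_len - pred_len + 1, stride):
        inputs.append(series[start : start + seq_len])
        targets.append(series[start + seq_len : start + seq_len + pred_len])
    return np.array(inputs), np.array(targets)

# A synthetic sinusoid stands in for a real single-channel EEG recording.
signal = np.sin(np.linspace(0, 20 * np.pi, 1000))
X, y = make_windows(signal, seq_len=96, pred_len=24, stride=8)
print(X.shape, y.shape)  # (111, 96) (111, 24)
```

In practice each input window would be tokenized and fed to the LLM backbone, with the target window used as the forecasting label during training.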