---
license: mit
---
# EEG Forecasting with Llama 3.1-8B and Time-LLM
This repository contains the code and model for forecasting EEG signals by combining the quantized Llama 3.1-8B model from [Hugging Face](https://huggingface.co/akshathmangudi/llama3.1-8b-quantized) and a modified version of the [Time-LLM](https://github.com/KimMeen/Time-LLM) framework.
## Overview
This project leverages large language models (LLMs) for time-series forecasting, specifically EEG data. The quantized Llama 3.1-8B backbone provides strong sequence modeling capacity for predicting future EEG signal patterns while keeping memory and compute requirements manageable.
### Key Features
- **Quantized Llama 3.1-8B Model**: Utilizes a quantized version of Llama 3.1-8B to reduce computational requirements while maintaining performance.
- **Modified Time-LLM Framework**: Adapted the Time-LLM framework for EEG signal forecasting, allowing for efficient processing of EEG time-series data.
- **Scalable and Flexible**: The model can be easily adapted to other time-series forecasting tasks beyond EEG data.
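As a rough illustration of the Time-LLM approach, the sketch below shows how a raw EEG window can be segmented into overlapping patches before being linearly projected into the LLM's embedding space. This is a minimal example; the patch length and stride shown are illustrative defaults, not this repository's actual hyperparameters.

```python
import numpy as np

def patchify(series: np.ndarray, patch_len: int = 16, stride: int = 8) -> np.ndarray:
    """Split a 1-D signal into overlapping patches (Time-LLM style).

    Returns an array of shape (num_patches, patch_len); each patch is
    later projected into the LLM's token-embedding space.
    """
    num_patches = (len(series) - patch_len) // stride + 1
    return np.stack([series[i * stride : i * stride + patch_len]
                     for i in range(num_patches)])

# Example: a 128-sample EEG window
eeg = np.random.randn(128)
patches = patchify(eeg)
print(patches.shape)  # (15, 16)
```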
## Getting Started
### Prerequisites
Before you begin, ensure you have the following installed:
- Python 3.8+
- PyTorch
- Transformers (Hugging Face)
- Time-LLM dependencies (see the original [Time-LLM repository](https://github.com/KimMeen/Time-LLM))
- Download the Llama 3.1-8B quantized model from Hugging Face:

```bash
git lfs install
git clone https://huggingface.co/akshathmangudi/llama3.1-8b-quantized
```
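Once cloned, the backbone can be loaded with the standard `transformers` API. The snippet below is a hedged sketch, not the repository's actual loading code: the local path is an assumption based on the clone command above, and `device_map="auto"` simply lets `accelerate` place the weights.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def load_backbone(model_path: str = "./llama3.1-8b-quantized"):
    """Load the quantized Llama 3.1-8B backbone from a local clone.

    `model_path` is the directory produced by the `git clone` step above;
    adjust it if you cloned elsewhere.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_backbone()
```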
### EEG datasets
EEG datasets can be obtained from [this survey](https://github.com/ChiShengChen/EEG-Datasets); choose whichever dataset you want to experiment with.
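Whichever dataset you pick, the recordings are typically sliced into (input window, forecast horizon) pairs for training. The helper below is a minimal, framework-agnostic sketch; the window sizes are illustrative and the repository's actual loaders follow Time-LLM's data pipeline.

```python
import numpy as np

def make_forecast_pairs(signal, lookback=96, horizon=24, stride=24):
    """Slice a 1-D EEG channel into (input, target) windows for forecasting."""
    xs, ys = [], []
    for start in range(0, len(signal) - lookback - horizon + 1, stride):
        xs.append(signal[start : start + lookback])
        ys.append(signal[start + lookback : start + lookback + horizon])
    return np.array(xs), np.array(ys)

# Stand-in for a real EEG channel
signal = np.sin(np.linspace(0, 20 * np.pi, 1000))
X, y = make_forecast_pairs(signal)
print(X.shape, y.shape)  # (37, 96) (37, 24)
```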
## Acknowledgments
- Hugging Face for hosting the quantized Llama 3.1-8B model.
- The original Time-LLM repository for the time-series framework.