---
license: other
---

# Overview
This project aims to support visually impaired individuals in their daily navigation. 

It combines the [YOLO](https://ultralytics.com/yolov8) object detector with [LLaMa 2 7b](https://huggingface.co/meta-llama/Llama-2-7b) to provide navigation guidance.

YOLO is trained on bounding-box data from [AI Hub](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=189).
The YOLO output (bounding boxes) is converted into a list of the form `[[class_of_obj_1, xmin, xmax, ymin, ymax, size], [class_of...] ...]` and appended to the user's question, as sketched below.
The LLM is trained to navigate using the [LearnItAnyway/Visual-Navigation-21k](https://huggingface.co/datasets/LearnItAnyway/Visual-Navigation-21k) multi-turn dataset.
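A minimal sketch of this conversion step, assuming the ultralytics YOLOv8 API; the weight/image paths, the definition of `size` (here, bounding-box area), and the way the list is joined to the question are illustrative assumptions, not the exact code used in this repository:

```python
# Sketch: convert YOLO detections into the bbox-list format described above.
from ultralytics import YOLO

detector = YOLO("yolo_visnav.pt")          # hypothetical fine-tuned weights
results = detector("street_scene.jpg")[0]  # placeholder input image

bbox_list = []
for box in results.boxes:
    cls_id = int(box.cls.item())                   # detected object class index
    xmin, ymin, xmax, ymax = box.xyxy[0].tolist()  # pixel coordinates
    size = (xmax - xmin) * (ymax - ymin)           # assumed: "size" = bbox area
    bbox_list.append([cls_id, xmin, xmax, ymin, ymax, size])

# The bbox list is then combined with the user's question before being passed to the LLM.
question = "What obstacles are ahead of me?"
llm_input = f"{bbox_list} {question}"
```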


## Usage
We show how to use the model in [yolo_llama_visnav_test.ipynb](https://huggingface.co/LearnItAnyway/YOLO_LLaMa_7B_VisNav/blob/main/yolo_llama_visnav_test.ipynb)
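Below is a minimal sketch of querying the LLM part of the pipeline with the `transformers` library; the prompt (a bbox list plus a question) is illustrative, and the notebook above remains the authoritative end-to-end example covering both YOLO and the LLM:

```python
# Sketch: load the LLM from this repository and generate a navigation response.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "LearnItAnyway/YOLO_LLaMa_7B_VisNav"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Assumed prompt format: serialized bbox list followed by the user's question.
prompt = "[[0, 120.0, 340.0, 200.0, 480.0, 61600.0]] Is the path ahead clear?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```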