---
license: mit
---
# Model Card for lego-technic-sorting-model
Classification of lego technic pieces under basic room lighting conditions
## Model Details
### Model Description
A CNN designed from the ground up, without using a pre-trained model, to classify images of Lego Technic pieces into 7 categories. <br>
It achieved 93% validation accuracy.
- **Developed by:** Aveek Goswami, Amos Koh
- **Funded by:** Nullspace Robotics Singapore
- **Model type:** Convolutional Neural Network (CNN)
### Model Sources
- **Repository:** https://github.com/magichampz/lego-sorting-machine-ag-ak
## Uses
The files in the create-model folder are meant to be run on your own computer.
You can train your own deep learning model on your own data, and you can test this model on a single image with testing-tflite-model.py.
The model was trained on Google Colab, so create_training_data_array.py was used to package the images into a numpy (.npy) array file for upload to Google Colab.
After transferring the tflite model to your Raspberry Pi, you can run the image classification script in the raspberry-pi folder to detect and classify Lego pieces in real time.
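For reference, here is a minimal sketch of classifying a single image with the exported TFLite model. The file paths are placeholders and the input shape is read from the model itself; see testing-tflite-model.py for the exact script used in the repository.

```python
# Minimal single-image classification sketch (not the repository script itself).
# MODEL_PATH and IMAGE_PATH are placeholders; pixel scaling to [0, 1] is assumed
# to match the training-time normalization.
import numpy as np
import cv2
import tensorflow as tf

MODEL_PATH = "model.tflite"      # placeholder: path to the converted TFLite model
IMAGE_PATH = "test_piece.jpg"    # placeholder: path to a single test image

interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Read the expected input shape [1, H, W, C] from the model and resize the image to match.
shape = input_details[0]["shape"]
height, width, channels = int(shape[1]), int(shape[2]), int(shape[3])
flag = cv2.IMREAD_GRAYSCALE if channels == 1 else cv2.IMREAD_COLOR
image = cv2.imread(IMAGE_PATH, flag)
image = cv2.resize(image, (width, height)).astype(np.float32) / 255.0

interpreter.set_tensor(input_details[0]["index"], image.reshape(shape))
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])[0]
print("Predicted category index:", int(np.argmax(scores)))
```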
Example of real time object detection and classification:
![image/gif](https://cdn-uploads.huggingface.co/production/uploads/652dc3dab86e108d0fea458c/E7UZXLWPvU_39cxrF49jD.gif)
## Bias, Limitations and Recommendations
The images of the Lego pieces used to train the model were taken under room lighting, with the pieces illuminated by a torchlight. <br>
To use the model as-is, we recommend recreating these conditions so that your photographs have similar lighting. <br>
Otherwise, it may be better to retrain the model on a new dataset of images taken under your own lighting conditions.
## Training Details
### Training Data
- **Data:** https://huggingface.co/datasets/magichampz/lego-technic-pieces <br>
More images can be taken by editing the motion_detection_and_image_classification.py script.
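The repository's capture script is the source of truth; the sketch below only illustrates the general frame-differencing idea behind motion-triggered capture with OpenCV. The camera index, threshold values and output directory are assumptions for illustration.

```python
# Illustrative motion-triggered capture loop (not the repository script itself).
# OUTPUT_DIR, MOTION_PIXELS, the binarization threshold and the camera index are assumptions.
import os
import time
import cv2

OUTPUT_DIR = "captures"          # assumed output directory for new training images
MOTION_PIXELS = 5000             # assumed threshold on changed pixels per frame

os.makedirs(OUTPUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(0)        # assumed camera index
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Count how many pixels changed between consecutive frames.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > MOTION_PIXELS:
        filename = os.path.join(OUTPUT_DIR, f"piece_{int(time.time() * 1000)}.jpg")
        cv2.imwrite(filename, frame)   # save the frame that triggered the motion check
    prev_gray = gray

cap.release()
```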
### Training Procedure
The model was trained using the GPUs available on Google Colab. The Jupyter notebook loaded the data from an .npy file (linked in the dataset card), which contained all the images together with their category labels.
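A minimal sketch of this training setup is shown below. The .npy filename, the 64x64 grayscale image size, the layer sizes and the number of epochs are assumptions for illustration; the exact architecture and hyperparameters are defined in the notebook.

```python
# Minimal training sketch, assuming the .npy file stores (image, label) pairs
# and that images are 64x64 grayscale. The real architecture lives in the notebook.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

data = np.load("training_data.npy", allow_pickle=True)   # assumed filename and layout
images = np.array([item[0] for item in data], dtype=np.float32).reshape(-1, 64, 64, 1) / 255.0
labels = np.array([item[1] for item in data])

x_train, x_test, y_train, y_test = train_test_split(images, labels, test_size=0.2)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),   # 7 Lego Technic categories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
```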
#### Preprocessing
Images were normalized before being fed into the model, and their contrast was increased using the increase_contrast_more function defined in the attached notebook.
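The sketch below captures the general idea of this preprocessing step. The actual increase_contrast_more function may differ; cv2.convertScaleAbs with an assumed gain is used here only as an illustrative stand-in.

```python
# Preprocessing sketch: boost contrast, then scale pixel values to [0, 1].
# The contrast gain (alpha) is an assumption, not the notebook's exact setting.
import numpy as np
import cv2

def preprocess(image: np.ndarray) -> np.ndarray:
    contrasted = cv2.convertScaleAbs(image, alpha=1.5, beta=0)  # assumed contrast gain
    return contrasted.astype(np.float32) / 255.0                # normalize to [0, 1]
```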
## Evaluation
### Results
Our model was trained on 6000 images across 7 categories of Lego Technic pieces, using an 80/20 train/test split. <br>
It achieved 93% testing accuracy; graphs of the accuracy and loss are shown below. <br>
A confusion matrix was also plotted to visualize the performance of the classification algorithm: it shows the counts of true versus false predictions for each category. A short plotting sketch follows the figure.
![Unknown-5](https://user-images.githubusercontent.com/91732309/190358182-58fa5671-263d-490b-8f54-616cb2daf764.png)
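A confusion matrix like the one above can be produced with scikit-learn as sketched below. This continues from the training sketch in the Training Procedure section, so model, x_test and y_test refer to the variables defined there.

```python
# Confusion matrix plotting sketch; model, x_test and y_test come from the
# training sketch above and stand in for the repository's actual test split.
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

y_pred = model.predict(x_test).argmax(axis=1)   # predicted class index per test image
ConfusionMatrixDisplay.from_predictions(y_test, y_pred)
plt.show()
```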