LiteCoder Experiment Reproduction Package
Pre-training
To run the pre-training objectives, use the following scripts.

Reproduce LiteCoder with all objectives:
- Navigate to the folder containing the LiteCoder.py file. Then, run:
  python LiteCoder.py --train-tt --train-cs --train-pd
- The pretrained model is released on Hugging Face, so it is loaded automatically.
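Since the pretrained weights live on Hugging Face, they can also be loaded directly with the transformers library. Below is a minimal sketch; the repository ID "LiteCoder/LiteCoder" is an assumption, so check the Hub for the actual model name:

```python
# Sketch: loading the released pretrained model from the Hugging Face Hub.
# The repository ID below is hypothetical; replace it with the published one.
from transformers import AutoModel, AutoTokenizer

model_id = "LiteCoder/LiteCoder"  # assumed ID, not confirmed by this README
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
```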
To run the ablation studies:
- Ablation 1:
  python LiteCoder.py --train-tt
- Ablation 2:
  python LiteCoder.py --train-tt --train-cs
- Ablation 3:
  python LiteCoder.py --train-tt --train-cs --train-pd
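For reference, the three flags act as independent switches over the pre-training objectives. The sketch below shows one way they could be parsed; the flag-to-objective mapping is inferred from the commands above and the feature names later in this README, not taken from LiteCoder.py itself:

```python
# Sketch of the objective switches, assuming argparse; the real
# LiteCoder.py may wire these differently.
import argparse

parser = argparse.ArgumentParser(description="LiteCoder pre-training")
parser.add_argument("--train-tt", action="store_true", help="token type objective")
parser.add_argument("--train-cs", action="store_true", help="code sememe objective")
parser.add_argument("--train-pd", action="store_true", help="code dependency objective (assumed mapping)")
args = parser.parse_args()

# argparse converts dashes to underscores: args.train_tt, args.train_cs, args.train_pd
```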
Fine-tuning
To fine-tune LiteCoder on downstream tasks, navigate to the Fine-tuning folder and then to the Downstream task folder.

Code Clone Detection:
- Follow the instructions in the readme.md file.
Code Translation:
- Run the setup.sh file.
- Navigate to the scripts/finetune folder and run the translate.sh file.
To extract the programming language features (i.e., token type, code sememe, and code dependencies):
We used open-source datasets to extract the language features and released the extracted datasets on Hugging Face:
- LT_Java: LiteCoder/LT_Java
- LT_Python: LiteCoder/LT_Python
- LT_Java_Dependency: LiteCoder/LT_Java_Dependency
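These datasets can be pulled directly with the datasets library, using the repository IDs listed above (split and column names are not documented here, so inspect the returned object):

```python
# Load the released feature-extraction datasets from the Hugging Face Hub.
# Repository IDs are taken from the list above.
from datasets import load_dataset

lt_java = load_dataset("LiteCoder/LT_Java")
lt_python = load_dataset("LiteCoder/LT_Python")
lt_java_dep = load_dataset("LiteCoder/LT_Java_Dependency")

print(lt_java)  # inspect the available splits and columns
```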
Navigate to the utils directory:
- Use either the Java or Python notebook file to run over your dataset (see the sketch after this list).
- Run the cells for the features you want to extract.
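As an illustration of what token-type extraction looks like, here is a minimal sketch using javalang, one of the dependencies listed below; the actual notebooks may implement this differently (e.g., with tree-sitter):

```python
# Sketch: extracting token types from Java source with javalang.
# The notebooks in the utils directory may use a different approach.
import javalang

code = "public class Hello { int x = 1; }"
for token in javalang.tokenizer.tokenize(code):
    # javalang exposes the token type as the token's class name,
    # e.g. Modifier, Identifier, Separator, DecimalInteger.
    print(type(token).__name__, token.value)
```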
Dependencies:
Feature extraction dependencies:
- pip install ast-comments
- pip install javalang
- pip install tree-sitter

(The ast module ships with the Python standard library, so it does not need to be installed separately.)

Model training dependencies:
- pip install transformers
- pip install datasets
- pip install pytorch_lightning
- pip install torch
Or
pip install -r requirements.txt
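If requirements.txt is unavailable, a minimal version can be reconstructed from the two lists above (unpinned; exact versions are not stated in this README):

```
# Feature extraction
ast-comments
javalang
tree-sitter
# Model training
transformers
datasets
pytorch_lightning
torch
```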