
LiteCoder Experiment Reproduction Package

  • To run the pre-training objectives, use the following scripts:

    • Reproduce LiteCoder with all objectives:

      • Navigate to the Pre-training folder, which contains the LiteCoder.py file.

      • Then run python LiteCoder.py --train-tt --train-cs --train-pd

        • The pre-trained model is released on Hugging Face, so it is loaded automatically.
    • To run the ablation studies (a sketch of how these flags might be parsed follows this list):

      • Ablation 1: python LiteCoder.py --train-tt
      • Ablation 2: python LiteCoder.py --train-tt --train-cs
      • Ablation 3: python LiteCoder.py --train-tt --train-cs --train-pd
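
For orientation, here is a minimal argparse sketch of how LiteCoder.py could wire these flags to the pre-training objectives. This is not the repository's actual implementation, and the flag-to-objective mapping (tt = token type, cs = code sememe, pd = code dependencies) is an assumption based on the feature names listed later in this README:

```python
# Hypothetical sketch only; LiteCoder.py's real argument handling may differ.
import argparse

parser = argparse.ArgumentParser(description="LiteCoder pre-training")
parser.add_argument("--train-tt", action="store_true",
                    help="enable the token-type objective (assumed mapping)")
parser.add_argument("--train-cs", action="store_true",
                    help="enable the code-sememe objective (assumed mapping)")
parser.add_argument("--train-pd", action="store_true",
                    help="enable the code-dependency objective (assumed mapping)")
args = parser.parse_args()

# argparse converts dashes to underscores: --train-tt becomes args.train_tt.
objectives = [name for name, enabled in [("token type", args.train_tt),
                                         ("code sememe", args.train_cs),
                                         ("code dependency", args.train_pd)]
              if enabled]
print(f"Pre-training with objectives: {objectives}")
```

Running the Ablation 1 command against this sketch would print only the token-type objective; the full command enables all three.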
  • To fine-tune LiteCoder on downstream tasks:

    • Navigate to the Fine-tuning folder and then to the relevant downstream-task folder (a model-loading sketch follows this list):

      • Code Clone Detection:

        • Follow the instructions in the readme.md file.
      • Code Translation:

        • Run the setup.sh file.
        • Navigate to scripts/finetune and run the translate.sh file.
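
Since the pre-trained model is released on Hugging Face, fine-tuning typically starts by loading it with the transformers library. The sketch below is illustrative only: the model ID is a placeholder (the README does not name the checkpoint), the label order is assumed, and the repository's own fine-tuning scripts should be preferred:

```python
# Minimal sketch, not the repository's fine-tuning code: load a pre-trained
# encoder from Hugging Face and attach a classification head for clone detection.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "your-org/litecoder"  # PLACEHOLDER: substitute the released checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

# Clone detection is a pair-classification task: encode two snippets together.
inputs = tokenizer("int add(int a, int b) { return a + b; }",
                   "int sum(int x, int y) { return x + y; }",
                   return_tensors="pt", truncation=True)
logits = model(**inputs).logits  # [not-clone, clone] scores (assumed label order)
```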
  • To extract the programming-language features (i.e., token types, code sememes, and code dependencies):

    • We used open-source datasets to extract the language features; the extracted datasets are released on Hugging Face.

    • Navigate to the utils directory:

      • Use either the Java or the Python notebook file to run over your dataset.
      • Run the cells for the features you want to extract (a standalone extraction sketch follows this list).
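
To illustrate the kind of extraction the notebooks perform, here is a self-contained sketch using only Python's standard ast module (Python 3.9+); the notebooks themselves may instead rely on ast-comments, javalang, or tree-sitter:

```python
# Illustrative sketch: extract node types and simple dependency information
# from a Python snippet. Not the notebooks' actual extraction pipeline.
import ast

code = """
import math

def circumference(r):
    return 2 * math.pi * r

def hypotenuse(a, b):
    return math.sqrt(a * a + b * b)
"""

tree = ast.parse(code)

# Syntactic node types of every construct in the snippet.
node_types = sorted({type(node).__name__ for node in ast.walk(tree)})

# Simple dependency information: imported modules and call targets.
imports = [alias.name for node in ast.walk(tree)
           if isinstance(node, ast.Import) for alias in node.names]
calls = [ast.unparse(node.func) for node in ast.walk(tree)
         if isinstance(node, ast.Call)]

print("node types:", node_types)  # e.g. ['Attribute', 'BinOp', 'Call', ...]
print("imports:", imports)        # ['math']
print("call targets:", calls)     # ['math.sqrt']
```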
  • Dependencies:

    • Feature extraction dependencies:

      - pip install ast-comments
      - pip install javalang
      - pip install tree-sitter

      Note: the ast module itself ships with the Python standard library and needs no separate install.

    • Model training dependencies:

      - pip install transformers
      - pip install datasets
      - pip install pytorch-lightning
      - pip install torch
      
    • Alternatively, install everything at once with pip install -r requirements.txt
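
Assuming requirements.txt aggregates exactly the packages listed above (the file's actual contents and version pins are not shown in this README), it would look roughly like:

```
ast-comments
javalang
tree-sitter
transformers
datasets
pytorch-lightning
torch
```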