# LiteCoder Experiment Reproduction Package
- To run the pre-training objectives, use the following scripts:
  - Reproduce LiteCoder with all objectives:
    - Navigate to the `Pre-training` folder containing the `LiteCoder.py` file.
    - Then run `python LiteCoder.py --train-tt --train-cs --train-pd`.
    - The pretrained model is released on [Hugging Face](https://huggingface.co/LiteCoder/LiteCoder_pretrained), so it is loaded automatically (see the loading sketch after the ablation list below).
- To run the ablation studies:
  - Ablation 1: `python LiteCoder.py --train-tt`
  - Ablation 2: `python LiteCoder.py --train-tt --train-cs`
  - Ablation 3: `python LiteCoder.py --train-tt --train-cs --train-pd`
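  Since the pretrained checkpoint is hosted on the Hub, it can also be loaded directly. A minimal sketch, assuming the checkpoint is compatible with the `transformers` `Auto*` classes:

  ```python
  # Minimal sketch: load the released LiteCoder checkpoint from the Hugging
  # Face Hub. Assumes compatibility with the transformers Auto* classes.
  from transformers import AutoModel, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("LiteCoder/LiteCoder_pretrained")
  model = AutoModel.from_pretrained("LiteCoder/LiteCoder_pretrained")

  # Encode a small code snippet and inspect the hidden states.
  inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
  outputs = model(**inputs)
  print(outputs.last_hidden_state.shape)
  ```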
- To fine-tune LiteCoder on downstream tasks:
  - Navigate to the `Fine-tuning` folder and then to the `Downstream task` folder:
    - Code Clone Detection:
      - Follow the instructions in the `readme.md` file (an illustrative similarity sketch follows this section).
    - Code Translation:
      - Run the `setup.sh` file.
      - Navigate to `scripts/finetune` and run the `translate.sh` file.
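  For orientation only, here is a hedged sketch of clone detection framed as embedding similarity; the actual procedure is whatever the `readme.md` in the Code Clone Detection folder specifies:

  ```python
  # Illustrative only: score two snippets by cosine similarity of mean-pooled
  # embeddings from the pretrained model. Not the repository's fine-tuning code.
  import torch
  from transformers import AutoModel, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("LiteCoder/LiteCoder_pretrained")
  model = AutoModel.from_pretrained("LiteCoder/LiteCoder_pretrained")

  def embed(code: str) -> torch.Tensor:
      inputs = tokenizer(code, return_tensors="pt", truncation=True)
      with torch.no_grad():
          # Mean-pool the last hidden state into one vector per snippet.
          return model(**inputs).last_hidden_state.mean(dim=1).squeeze(0)

  a = embed("def add(a, b): return a + b")
  b = embed("def sum_two(x, y): return x + y")
  print(torch.cosine_similarity(a, b, dim=0).item())  # near 1.0 for clones
  ```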
- To extract the programming language features (i.e., `token type`, `code sememe`, and `code dependencies`):
  - We used open-source datasets to extract the language features and released the extracted datasets on Hugging Face:
    - `LT_Java`: [LiteCoder/LT_Java](https://huggingface.co/datasets/LiteCoder/LT_Java)
    - `LT_Python`: [LiteCoder/LT_Python](https://huggingface.co/datasets/LiteCoder/LT_Python)
    - `LT_Java_Dependency`: [LiteCoder/LT_Java_Dependency](https://huggingface.co/datasets/LiteCoder/LT_Java_Dependency)
  - Navigate to the `utils` directory:
    - Use either the `Java` or the `Python` notebook file to run over your dataset.
    - Run the cells for the features you want to extract (a minimal extraction sketch follows this list).
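  As a concrete starting point, the released datasets load with the `datasets` library, and `javalang` exposes typed tokens directly. A minimal sketch; the `"train"` split name and the per-token handling are assumptions, not taken from the notebooks:

  ```python
  # Minimal sketch: load a released feature dataset and extract Java token
  # types with javalang. The notebooks in `utils` define the actual logic.
  import javalang
  from datasets import load_dataset

  ds = load_dataset("LiteCoder/LT_Java", split="train")  # split name assumed

  code = "public class Hello { int add(int a, int b) { return a + b; } }"
  # javalang's tokenizer yields typed tokens (Identifier, Modifier, Separator, ...).
  for token in javalang.tokenizer.tokenize(code):
      print(type(token).__name__, token.value)  # e.g. "Identifier Hello"
  ```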
- Dependencies:
  - Feature extraction dependencies (the `ast` module ships with the Python standard library, so it needs no separate install):

    ```bash
    pip install ast-comments
    pip install javalang
    pip install tree-sitter
    ```

  - Model training dependencies (a minimal training skeleton follows the requirements list below):

    ```bash
    pip install transformers
    pip install datasets
    pip install pytorch_lightning
    pip install torch
    ```

  - Or install all required packages at once:

    ```bash
    pip install -r requirements.txt
    ```
    The pinned entries in `requirements.txt` include:

    ```
    absl-py==1.4.0
    accelerate==0.20.3
    aiohttp==3.8.4
    aiosignal==1.3.1
    antlr4-python3-runtime==4.9.3
    anyio==3.7.1
    argon2-cffi==21.3.0
    argon2-cffi-bindings==21.2.0
    array-record==0.4.0
    arrow==1.2.3
    asgiref==3.6.0
    astor==0.8.1
    astroid==2.6.6
    asttokens==2.2.1
    astunparse==1.6.3
    async-lru==2.0.3
    async-timeout==4.0.2
    attrs==23.1.0
    Babel==2.12.1
    backcall==0.2.0
    beautifulsoup4==4.12.2
    bitsandbytes==0.39.1
    bitsandbytes-cuda117==0.26.0.post2
    bleach==6.0.0
    boltons @ file:///croot/boltons_1677628692245/work
    brotlipy==0.7.0
    cachetools==5.3.1
    certifi @ file:///croot/certifi_1690232220950/work/certifi
    cffi @ file:///croot/cffi_1670423208954/work
    chardet==4.0.0
    charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
    chex==0.1.7
    click==8.1.6
    clu==0.0.9
    cmake==3.27.0
    cohesion==1.0.0
    colorama==0.4.6
    comm==0.1.3
    conda @ file:///croot/conda_1690381753336/work
    conda-content-trust @ file:///tmp/abs_5952f1c8-355c-4855-ad2e-538535021ba5h26t22e5/croots/recipe/conda-content-trust_1658126371814/work
    conda-package-handling @ file:///croot/conda-package-handling_1672865015732/work
    conda_package_streaming @ file:///croot/conda-package-streaming_1670508151586/work
    contextlib2==21.6.0
    ```
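For context on how the training dependencies fit together, here is a hedged skeleton. It is not the LiteCoder training loop, and the base model name is a placeholder; the actual objectives (`--train-tt`, `--train-cs`, `--train-pd`) live in `Pre-training/LiteCoder.py`:

```python
# Illustrative skeleton only: a pytorch_lightning module wrapping a
# transformers model. All names here are placeholders, not LiteCoder's code.
import pytorch_lightning as pl
import torch
from transformers import AutoModelForMaskedLM

class MinimalPretrainModule(pl.LightningModule):
    def __init__(self, model_name: str = "microsoft/codebert-base"):  # placeholder base model
        super().__init__()
        self.model = AutoModelForMaskedLM.from_pretrained(model_name)

    def training_step(self, batch, batch_idx):
        # batch is expected to carry input_ids, attention_mask, and labels.
        outputs = self.model(**batch)
        self.log("train_loss", outputs.loss)
        return outputs.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=5e-5)
```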