
Training procedure

Framework versions

  • PEFT 0.4.0

This checkpoint is meant to be used with ChatGLM-Efficient-Tuning: https://github.com/hiyouga/ChatGLM-Efficient-Tuning/tree/main

For example, to launch the web demo with this checkpoint:

CUDA_VISIBLE_DEVICES=3 nohup python src/web_demo.py \
    --model_name_or_path /HOME/jack/model/chatglm-6b \
    --checkpoint_dir paper_meta \
    > log_web_demo.txt 2>&1 & tail -f log_web_demo.txt
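
Alternatively, the adapter can be loaded directly from Python with 🤗 Transformers and PEFT. The following is a minimal sketch, assuming this repository is a standard PEFT adapter (per the PEFT 0.4.0 entry above) on top of ChatGLM-6B; the base-model identifier, device placement, and prompt text are illustrative assumptions, not part of the original card.

from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

# Base ChatGLM-6B model (assumption: hub ID; a local path such as the one used
# in the command above works as well).
base = "THUDM/chatglm-6b"
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModel.from_pretrained(base, trust_remote_code=True).half().cuda()

# Apply this repository's PEFT checkpoint on top of the base model.
model = PeftModel.from_pretrained(model, "jackkuo/ChatPaperGPT_32k")
model.eval()

# ChatGLM-6B exposes a chat() helper via its remote code; the prompt below is
# only a placeholder for a meta-information extraction query.
response, history = model.chat(tokenizer, "Extract the title and authors of this paper: ...", history=[])
print(response)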

🚩 Citation

Please cite the following paper if you use jackkuo/ChatPaperGPT_32k in your work.

@INPROCEEDINGS{10412837,
  author={Guo, Menghao and Wu, Fan and Jiang, Jinling and Yan, Xiaoran and Chen, Guangyong and Li, Wenhui and Zhao, Yunhong and Sun, Zeyi},
  booktitle={2023 IEEE International Conference on Knowledge Graph (ICKG)}, 
  title={Investigations on Scientific Literature Meta Information Extraction Using Large Language Models}, 
  year={2023},
  volume={},
  number={},
  pages={249-254},
  keywords={Measurement;Knowledge graphs;Information retrieval;Data mining;Task analysis;information extraction;large language model;scientific literature},
  doi={10.1109/ICKG59574.2023.00036}}