
# Varco Arena

Varco Arena runs a tournament among the models under comparison for every instruction in your test set, which yields an accurate ranking of the models. This is more accurate, and somewhat cheaper, than scoring win rates against reference outputs.

๋” ์ž์„ธํ•œ ๋‚ด์šฉ์— ๋Œ€ํ•ด์„œ๋Š” ์•„๋ž˜์˜ ๋งํฌ๋ฅผ ์ฐธ์กฐํ•˜์‹œ๋ฉด ๋ฉ๋‹ˆ๋‹ค.

## Quickstart

### Getting started with the local Streamlit app (recommended!)

```bash
git clone [THIS_REPO]
# install requirements below. we recommend miniforge to manage the environment
cd streamlit_app_local
bash run.sh
```

๋” ์ž์„ธํ•œ ๋‚ด์šฉ์€ [THIS_REPO]/streamlit_app_local/README.md ์„ ์ฐธ์กฐํ•˜์„ธ์š”!

### Using the CLI

- The CLI and the web app share the same code, which lives in the directory below.
  - `varco_arena/`
- Test commands for each preset prompt, for debugging in VS Code, are listed in the following file.
  - `varco_arena/.vscode/launch.json`
```bash
## gpt-4o-mini as a judge
python main.py -i "./some/dirpath/to/jsonl/files" -o SOME_REL_PATH_TO_CREATE -m tournament -e "gpt-4o-mini"
## vllm-openai served LLM as a judge
python main.py -i "./some/dirpath/to/jsonl/files" -o SOME_REL_PATH_TO_CREATE -e SOME_MODEL_NAME_SERVED -m tournament -u "http://url_to/your/vllm_openai_server:someport"

# dbg lines
## openai api judge dbg
python main.py -i "rsc/inputs_for_dbg/dbg_400_error_inputs/" -o SOME_WANTED_TARGET_DIR -e gpt-4o-mini
## other testing lines
python main.py -i "rsc/inputs_for_dbg/[SOME_DIRECTORY]/" -o SOME_WANTED_TARGET_DIR -e gpt-4o-mini
## dummy judge dbg (checking errors without api requests)
python main.py -i "rsc/inputs_for_dbg/dbg_400_error_inputs/" -o SOME_WANTED_TARGET_DIR -e debug
```

## Requirements

Tested on Python 3.11.9. `requirements.txt`:

```
openai>=1.17.0
munch
pandas
numpy
tqdm>=4.48.0
plotly
scikit-learn
kaleido
tiktoken>=0.7.0
pyyaml
transformers
streamlit>=1.40.2
openpyxl
git+https://github.com/shobrook/openlimit.git#egg=openlimit # do not install this by pypi

# on Linux
uvloop
# on Windows
winloop
```
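
A minimal environment setup sketch, assuming the miniforge (conda-style) workflow recommended in the Quickstart; the environment name is arbitrary and `requirements.txt` is assumed to sit at the repository root:

```bash
# Hypothetical setup: the environment name and the requirements.txt location are assumptions.
conda create -n varco-arena python=3.11.9 -y
conda activate varco-arena
pip install -r requirements.txt  # pulls openlimit from GitHub, not from PyPI
# install the event-loop package for your OS if it is not pulled in automatically:
# uvloop (Linux) or winloop (Windows)
```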

## Arguments

- `-i, --input` : input file, input directory, or a regex pattern over file names
- `-o, --output_dir` : directory where the output files are written
- `-e, --evaluation` : judge model (e.g. "gpt-4o-2024-05-13", "gpt-4o-mini", the name of a model served via vLLM, etc.)
- `-m, --matching_method` : matching method (default: "tournament"; "league" is not recommended)
- `-k, --openai_api_key` : OpenAI API key
- `-u, --openai_url` : URL (IP address + port) when using a local vLLM OpenAI-compatible server

### Advanced

- `-j, --n_jobs` : value passed to `asyncio.Semaphore()`; if the Arena does not progress, try lowering it below the default of 32
- `-p, --evalprompt` : see the corresponding prompts directory
- `-lr, --limit_requests` : request limit for the vLLM OpenAI server (default: 7,680)
- `-lt, --limit_tokens` : token limit for the vLLM OpenAI server (default: 15,728,640)
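
As a hedged illustration of how these flags fit together, the sketch below combines the documented options with placeholder paths, a placeholder concurrency value, and an OpenAI judge:

```bash
# Illustrative only: the input/output paths and the -j value are placeholders.
python main.py \
    -i "./my_testsets/" \
    -o ./arena_results \
    -e gpt-4o-mini \
    -m tournament \
    -k "$OPENAI_API_KEY" \
    -j 16
```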

## Input Data Format

Input JSONL guide link
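
The linked guide is the authoritative reference. As a rough sketch, each line of an input JSONL file is a JSON object carrying at least the `instruction`, `source`, and `generated` fields mentioned in the customization notes below; the values here are invented:

```json
{"instruction": "Summarize the article in one sentence.", "source": "...article text...", "generated": "The model's one-sentence summary goes here."}
```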

## Contributing & Customizing

### After `git clone` and installing the dependencies

```bash
pip install pre-commit
pre-commit install
```

### Before committing

```bash
bash precommit.sh # this will reformat all the code
```

For inquiries: Seonil Son

- I want to use a prompt that I wrote myself
  - `./varco_arena/prompts/` defines the various prompt classes and loads prompts defined as yaml files. Write yours by referring to the presets.
- I want to use a different evaluation prompt for each test set (e.g. a different prompt per task)
  - Prompts are set up to be loaded via `load_prompt` at the link above, in the form `promptname` + `task`, inside `./varco_arena_core/manager.py:async_run`.
- I want to add fields other than `instruction`, `source`, and `generated` to my input file.
  - This gets a bit more involved; please modify the following parts:
    - You may need to modify `async_eval_w_prompt` in `varco_arena/eval_utils.py` (this is where `PROMPT_OBJ.complete_prompt()` is called).
    - Other related parts will need to be traced through and fixed along the way...

## Special Thanks to (contributors)

- Minho Lee (@Dialogue Model Team, NCSOFT) github
  - query wrapper
  - rag prompt
- Ju-Min Oh (@Generation Model Team, NCSOFT)
  - overall prototyping of the system in haste

## Citation

If our work has helped you, could we get a little help in return? 😉

```bibtex
@misc{son2024varcoarenatournamentapproach,
      title={Varco Arena: A Tournament Approach to Reference-Free Benchmarking Large Language Models},
      author={Seonil Son and Ju-Min Oh and Heegon Jin and Cheolhun Jang and Jeongbeom Jeong and Kuntae Kim},
      year={2024},
      eprint={2411.01281},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.01281},
}
```