---
title: Leaderboard Gradio
emoji: 🦀
colorFrom: pink
colorTo: blue
sdk: gradio
python_version: 3.11
sdk_version: 4.39.0
app_file: app.py
pinned: false
license: apache-2.0
---
# About this space
This HF Space is a Gradio-based Space with the configuration above.
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# Cloning the space repo
`git clone https://huggingface.co/spaces/valory/olas-prediction-leaderboard`
# Updating the space repo
Update the space like any GitHub repo.
Make sure you have `git-lfs` installed, since the CSVs are big and need LFS to push.
Use the usual `git` commands to commit and push.
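If you prefer not to drive `git` and `git-lfs` by hand, a single file can also be pushed with the `huggingface_hub` Python client. A minimal sketch, where the file name and commit message are illustrative and you need write access plus a configured token:
```python
# Minimal sketch: upload one file to the Space with huggingface_hub instead of raw git.
# Assumes `huggingface-cli login` has been run (or pass token=...) and you have write access.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="formatted_data.csv",          # local file to upload (illustrative)
    path_in_repo="formatted_data.csv",             # destination path inside the Space repo
    repo_id="valory/olas-prediction-leaderboard",
    repo_type="space",
    commit_message="Update leaderboard data",      # illustrative commit message
)
```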
# Restarting the space
There are three ways:
1. Push a small commit
2. Use `Restart this space` from the [settings](https://huggingface.co/spaces/valory/olas-prediction-leaderboard/settings) page (or do it programmatically, as sketched below)
3. Use `Factory rebuild` from the same settings page
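For automation, option 2 can also be triggered from Python via `huggingface_hub`. A minimal sketch, assuming write access to the Space and a configured token:
```python
# Minimal sketch: restart the Space programmatically with huggingface_hub.
# Requires write access to the Space and a configured token (`huggingface-cli login`).
from huggingface_hub import HfApi

HfApi().restart_space(repo_id="valory/olas-prediction-leaderboard")
```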
# Running the benchmark to contribute new data
1. Run the benchmark locally using this [repo](https://github.com/valory-xyz/olas-predict-benchmark); see its README for how to run it
2. Copy the relevant row/columns from `summary.csv` in the results folder
3. Add them to `formatted_data.csv` in the root of the `olas-prediction-leaderboard` HF space repo (see the sketch below)
4. Stage, commit, and push the changes with the usual `git add`, `git commit`, and `git push` commands

Note: you only need to add the new data as new rows in the CSV file, one row per model/tool.
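A minimal sketch of the CSV step, assuming the relevant columns of `summary.csv` line up with the leaderboard columns; the file paths are illustrative and depend on where you ran the benchmark:
```python
# Sketch: append the new benchmark rows from summary.csv to formatted_data.csv.
# Paths and column handling are illustrative; check the actual column names before pushing.
import pandas as pd

summary = pd.read_csv("results/summary.csv")      # output of olas-predict-benchmark
leaderboard = pd.read_csv("formatted_data.csv")   # existing leaderboard data

# Keep only the columns the leaderboard already uses, then add one row per model/tool.
shared_columns = [c for c in leaderboard.columns if c in summary.columns]
updated = pd.concat([leaderboard, summary[shared_columns]], ignore_index=True)
updated.to_csv("formatted_data.csv", index=False)
```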
# Scripts of the repository
## app.py
Starts the Gradio app and also kicks off `start.py`.
There are four tabs (see the sketch below):
1. Benchmark Leaderboard: shows the benchmark data
2. About: some FAQs
3. Contribute: details on how to contribute
4. Run the benchmark: run the benchmark with any tools; you will have to provide your API keys
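The tab layout corresponds roughly to a Gradio `Blocks`/`Tab` structure. A simplified sketch with placeholder contents, not the actual implementation:
```python
# Simplified sketch of the tab structure in app.py; contents are placeholders.
import gradio as gr

with gr.Blocks() as demo:
    with gr.Tab("Benchmark Leaderboard"):
        gr.Dataframe()                                # benchmark results table
    with gr.Tab("About"):
        gr.Markdown("FAQs about the benchmark")
    with gr.Tab("Contribute"):
        gr.Markdown("How to contribute new results")
    with gr.Tab("Run the benchmark"):
        gr.Textbox(label="API key", type="password")  # users supply their own keys

demo.launch()
```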
## start.py
Sets up everything needed to run the benchmark, including the `olas-predict-benchmark` repo, the `mech` repo, and the required datasets.
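A rough sketch of what this kind of setup looks like; the repository URLs and paths are illustrative, and the dataset download step is only indicated (see `start.py` for the actual logic):
```python
# Rough sketch of a start.py-style setup: clone the repos needed for the benchmark
# if they are not already present. URLs and paths are illustrative.
import subprocess
from pathlib import Path

REPOS = {
    "olas-predict-benchmark": "https://github.com/valory-xyz/olas-predict-benchmark.git",
    "mech": "https://github.com/valory-xyz/mech.git",
}

def clone_if_missing(name: str, url: str) -> None:
    """Clone the repository into the working directory unless it already exists."""
    if not Path(name).exists():
        subprocess.run(["git", "clone", url, name], check=True)

for name, url in REPOS.items():
    clone_if_missing(name, url)

# The required benchmark datasets would be fetched here as well (see start.py).
```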