Yotam-Perlitz committed
Commit d3905ad
1 Parent(s): f1c3da2

improve motivation


Signed-off-by: Yotam-Perlitz <y.perlitz@ibm.com>

Files changed (1)
  1. app.py +22 -112
app.py CHANGED
@@ -53,7 +53,7 @@ st.markdown(
53
  """
54
  The BenchBench leaderboard ranks benchmarks based on their agreement with the *Aggregate Benchmark* – a comprehensive, combined measure of existing benchmark results.
55
  \n
56
- To achive it, we scraped results from multiple benchmarks (citations below) to allow for obtaining benchmark agreement results with a wide range of benchmark using a large set of models.
57
  \n
58
  BenchBench is for you if:
59
  """
@@ -68,11 +68,11 @@ st.markdown(
68
 
69
  st.markdown(
70
  """
71
- In our work -- [Benchmark Agreement Testing Done Right](https://arxiv.org/abs/2407.13696),
72
  we standardize BAT and show the importance of its configurations, notably,
73
- the benchmarks we compare to, and the models we use to compare with, check it out int he sidebar.
74
  \n
75
- We show that agreements are best reporesented with the Z Score, the relative agreement of each benchmark to the Aggragate benchmark, as presented below.
76
  """
77
  )
78
 
@@ -325,7 +325,7 @@ reporter = Reporter()
325
  z_scores = reporter.get_all_z_scores(agreements=agreements, aggragate_name="aggregate")
326
  z_scores.drop(columns=["n_models_of_corr_with_agg"], inplace=True)
327
 
328
- corr_name = f"{'Kendall Tau' if corr_type=='kendall' else 'Per.'} Corr."
329
 
330
  z_scores["z_score"] = z_scores["z_score"].round(2)
331
  z_scores["corr_with_agg"] = z_scores["corr_with_agg"].round(2)
@@ -699,6 +699,7 @@ with st.expander(label="Citations"):
699
 
700
  st.subheader("Benchmark Report Card")
701
 
 
702
 
703
  benchmarks = allbench.df["scenario"].unique().tolist()
704
  index_to_use = 1
@@ -742,11 +743,6 @@ fig = px.scatter(
742
  )
743
  st.plotly_chart(fig, use_container_width=True)
744
 
745
- st.markdown(
746
- "BenchBench-Leaderboard complements our study, where we analyzed over 40 prominent benchmarks and introduced standardized practices to enhance the robustness and validity of benchmark evaluations through the [BenchBench Python package](#). "
747
- "The BenchBench-Leaderboard serves as a dynamic platform for benchmark comparison and is an essential tool for researchers and practitioners in the language model field aiming to select and utilize benchmarks effectively. "
748
- )
749
-
750
  st.subheader("How did we get the Z Scores?", divider=True)
751
 
752
  st.write(r"""
@@ -779,151 +775,65 @@ fig.update_layout(
779
  # # Plot!
780
  st.plotly_chart(fig, use_container_width=True)
781
 
 
 
782
  st.subheader("Why should you use the BenchBench Leaderboard?")
783
 
784
  st.markdown(
785
  """
786
-
787
- Current practices in Benchmark Agreement Testing (BAT) often suffer from a lack of standardization and transparency, which can lead to inconsistent results and diminished trust in benchmark evaluations. Several key issues are prevalent in the field:
788
-
 
789
  """
790
  )
791
 
792
  st.markdown(
793
  """
794
- - **Lack of Standard Methodologies:** Unlike other scientific procedures that follow rigorous methodologies, BAT lacks uniform procedures across different studies. Researchers often employ varied criteria for selecting benchmarks and models for comparison, which leads to results that cannot be easily compared or replicated. This variation undermines the reliability of conclusions drawn from BAT and makes it difficult for other researchers to build on existing work.
 
795
  """
796
  )
797
 
798
  st.image(
799
  "images/motivation.png",
800
- caption="Conclusions depend on the models considered. Kendall-tau correlations between the LMSys Arena benchmark and three other benchmarks: BBH, MMLU, and Alpaca v2. Each group of bars represents the correlation for different sets of top models, specifically the top 5, top 10, and top 15 (overlapping) models (according to the Arena). The results indicate that the degree of agreement between benchmarks varies with the number of top models considered, highlighting that different selections of models can lead to varying conclusions about benchmark agreement.",
801
  use_column_width=True,
802
  )
803
 
804
  st.markdown(
805
  """
806
- - **Arbitrary Selection of Reference Benchmarks:** One of the most critical decisions in BAT is the choice of reference benchmarks. Currently, this choice is often arbitrary and lacks a clear rationale, influenced by availability or personal preference rather than strategic alignment with the benchmark’s purpose. This can skew the results significantly, as different benchmarks may not be equally representative or relevant to the models being tested.
807
  """
808
  )
809
  st.markdown(
810
  """
811
- - **Inadequate Model Representation:** BAT frequently relies on a limited subset of models, which may not comprehensively represent the diversity of architectures and training paradigms in modern language models. This selective representation can lead to biased agreement scores that favor certain types of models over others, failing to provide a holistic view of model performance across different benchmarks.
812
  """
813
  )
814
 
815
  st.image(
816
  "images/pointplot_granularity_matters.png",
817
- caption="Correlations increase with number of models. Mean correlation (y) between each benchmark (lines) and the rest, given different numbers of models. The Blue and Orange lines are the average of all benchmark pair correlations with models sampled randomly (orange) or in contiguous sets (blue). The shaded lines represents adjacent sampling for the different benchmarks.",
818
  use_column_width=True,
819
  )
820
 
821
  st.markdown(
822
  """
823
- - **Overemphasis on Correlation Metrics:** Current BAT practices tend to over-rely on correlation metrics without adequately considering their limitations and the context of their application. While these metrics can provide useful insights, they are often treated as definitive evidence of agreement without acknowledging that high correlation does not necessarily imply conceptual alignment between benchmarks.
824
  """
825
  )
826
 
827
  st.markdown(
828
  """
829
- To address these issues, there is a critical need for a more structured approach to BAT that includes clear guidelines for benchmark and model selection, a broader consideration of agreement metrics, and an acknowledgment of the evolving nature of technology in this space. By reforming BAT practices, the research community can improve the reliability and utility of benchmarks as tools for evaluating and advancing language models.
 
830
  """
831
  )
832
 
833
 
834
  st.image(
835
  "images/ablations.png",
836
- caption="Our recommendations substantially reduce the variance of BAT. Ablation analysis for each BAT recommendation separately and their combinations.",
837
  use_column_width=True,
838
  )
839
-
840
-
841
- st.header("The BenchBench package")
842
-
843
- st.markdown("""
844
- ### Overview
845
-
846
- The BAT package is designed to facilitate benchmark agreement testing for NLP models. It allows users to easily compare multiple models against various benchmarks and generate comprehensive reports on their agreement.
847
-
848
- ### Installation
849
-
850
- To install the BAT package, you can use pip:
851
-
852
- ```
853
- pip install bat-package
854
- ```
855
-
856
- ### Usage Example
857
-
858
- Below is a step-by-step example of how to use the BAT package to perform agreement testing.
859
-
860
- #### Step 1: Configuration
861
-
862
- First, set up the configuration for the tests:
863
-
864
- ```python
865
- import pandas as pd
866
- from bat import Tester, Config, Benchmark, Reporter
867
- from bat.utils import get_holistic_benchmark
868
-
869
- cfg = Config(
870
- exp_to_run="example",
871
- n_models_taken_list=[0],
872
- model_select_strategy_list=["random"],
873
- n_exps=10
874
- )
875
- ```
876
-
877
- #### Step 2: Fetch Model Names
878
-
879
- Fetch the names of the reference models to be used for scoring:
880
-
881
- ```python
882
- tester = Tester(cfg=cfg)
883
- models_for_benchmark_scoring = tester.fetch_reference_models_names(
884
- reference_benchmark=get_holistic_benchmark(), n_models=20
885
- )
886
- print(models_for_benchmark_scoring)
887
- ```
888
-
889
- #### Step 3: Load and Prepare Benchmark
890
-
891
- Load a new benchmark and add an aggregate column:
892
-
893
- ```python
894
- newbench_name = "fakebench"
895
- newbench = Benchmark(
896
- pd.read_csv(f"src/bat/assets/{newbench_name}.csv"),
897
- data_source=newbench_name,
898
- )
899
- newbench.add_aggregate(new_col_name=f"{newbench_name}_mwr")
900
- ```
901
-
902
- #### Step 4: Agreement Testing
903
-
904
- Perform all-vs-all agreement testing on the new benchmark:
905
-
906
- ```python
907
- newbench_agreements = tester.all_vs_all_agreement_testing(newbench)
908
- reporter = Reporter()
909
- reporter.draw_agreements(newbench_agreements)
910
- ```
911
-
912
- #### Step 5: Extend and Clean Benchmark
913
-
914
- Extend the new benchmark with holistic data and clear repeated scenarios:
915
-
916
- ```python
917
- allbench = newbench.extend(get_holistic_benchmark())
918
- allbench.clear_repeated_scenarios(source_to_keep=newbench_name)
919
- ```
920
-
921
- #### Step 6: Comprehensive Agreement Testing
922
-
923
- Perform comprehensive agreement testing and visualize:
924
-
925
- ```python
926
- all_agreements = tester.all_vs_all_agreement_testing(allbench)
927
- reporter.draw_agreements(all_agreements)
928
- ```
929
- """)
 
53
  """
54
  The BenchBench leaderboard ranks benchmarks based on their agreement with the *Aggregate Benchmark* – a comprehensive, combined measure of existing benchmark results.
55
  \n
56
+ To achieve this, we scraped results from multiple benchmarks (citations below), enabling benchmark agreement testing across a wide range of benchmarks with a large set of models.
57
  \n
58
  BenchBench is for you if:
59
  """
 
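The hunk above introduces the *Aggregate Benchmark* only informally. As a rough, hypothetical illustration of a combined measure in the mean-win-rate spirit suggested by the `_mwr` column name in the removed package example further up this page (the leaderboard's actual aggregation recipe is not shown in this diff), consider the following sketch:

```python
import pandas as pd

# Hypothetical model-by-benchmark score table; the names and numbers are
# illustrative only and not taken from the leaderboard's data.
scores = pd.DataFrame(
    {
        "arena": [1250, 1180, 1100],
        "mmlu": [0.86, 0.79, 0.71],
        "bbh": [0.72, 0.66, 0.58],
    },
    index=["model_a", "model_b", "model_c"],
)

# Rank models within each benchmark (1 = best) so different scales become comparable.
ranks = scores.rank(axis=0, ascending=False)

# Approximate each model's win rate per benchmark from its within-benchmark rank,
# then average across benchmarks to obtain a single combined "aggregate" score.
n_models = len(scores)
win_rates = (n_models - ranks) / (n_models - 1)
aggregate = win_rates.mean(axis=1).rename("aggregate")

print(aggregate.sort_values(ascending=False))
```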
68
 
69
  st.markdown(
70
  """
71
+ In our work, [Benchmark Agreement Testing Done Right](https://arxiv.org/abs/2407.13696), and the accompanying [open-source repo](https://github.com/IBM/benchbench),
72
  we standardize BAT and show the importance of its configurations, notably,
73
+ the benchmarks we compare against and the models we use for the comparison (see the sidebar).
74
  \n
75
+ We also show that agreement is best represented by the Z Score, the relative agreement of each benchmark with the Aggregate Benchmark, as presented in the leaderboard below.
76
  """
77
  )
78
 
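One way to read the Z Score referred to above, assuming (consistently with the `z_score` and `corr_with_agg` columns used later in this diff) that it standardizes each benchmark's correlation with the Aggregate Benchmark against the distribution of those correlations across all benchmarks, is the following sketch:

```latex
% Sketch only: r_b is benchmark b's correlation (Kendall tau or Pearson)
% with the Aggregate Benchmark, and N is the number of benchmarks considered.
\[
z_b = \frac{r_b - \mu}{\sigma}, \qquad
\mu = \frac{1}{N}\sum_{b'=1}^{N} r_{b'}, \qquad
\sigma = \sqrt{\frac{1}{N}\sum_{b'=1}^{N} \bigl(r_{b'} - \mu\bigr)^{2}}
\]
```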
 
325
  z_scores = reporter.get_all_z_scores(agreements=agreements, aggragate_name="aggregate")
326
  z_scores.drop(columns=["n_models_of_corr_with_agg"], inplace=True)
327
 
328
+ corr_name = f"{'Kendall Tau' if corr_type=='kendall' else 'Per.'} Corr. w/ Agg"
329
 
330
  z_scores["z_score"] = z_scores["z_score"].round(2)
331
  z_scores["corr_with_agg"] = z_scores["corr_with_agg"].round(2)
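For context, the `agreements` object consumed by `get_all_z_scores` above is not constructed in this hunk. The sketch below is assembled from the `bat` package calls in the usage example this commit removes (shown earlier on this page); treat it as illustrative rather than the app's exact wiring, and note that the column name passed to `add_aggregate` is an assumption chosen to match `aggragate_name`:

```python
from bat import Tester, Config, Reporter
from bat.utils import get_holistic_benchmark

# Configuration mirroring the removed package usage example above.
cfg = Config(
    exp_to_run="example",
    n_models_taken_list=[0],
    model_select_strategy_list=["random"],
    n_exps=10,
)

# Holistic benchmark collection plus a combined column; "aggregate" is assumed.
allbench = get_holistic_benchmark()
allbench.add_aggregate(new_col_name="aggregate")

# All-vs-all agreement testing, then per-benchmark Z Scores relative to the aggregate.
tester = Tester(cfg=cfg)
agreements = tester.all_vs_all_agreement_testing(allbench)

reporter = Reporter()
z_scores = reporter.get_all_z_scores(agreements=agreements, aggragate_name="aggregate")
print(z_scores.head())
```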
 
699
 
700
  st.subheader("Benchmark Report Card")
701
 
702
+ st.markdown("Choose the benchmark for which you want a report.")
703
 
704
  benchmarks = allbench.df["scenario"].unique().tolist()
705
  index_to_use = 1
 
743
  )
744
  st.plotly_chart(fig, use_container_width=True)
745
 
746
  st.subheader("How did we get the Z Scores?", divider=True)
747
 
748
  st.write(r"""
 
775
  # # Plot!
776
  st.plotly_chart(fig, use_container_width=True)
777
 
778
+ import streamlit as st
779
+
780
  st.subheader("Why should you use the BenchBench Leaderboard?")
781
 
782
  st.markdown(
783
  """
784
+ Benchmark Agreement Testing (BAT) is crucial for validating new benchmarks and understanding the relationships between existing ones.
785
+ However, current BAT practices often lack standardization and transparency, leading to inconsistent results and hindering reliable comparisons.
786
+ The BenchBench Leaderboard addresses these challenges by offering a **principled and data-driven approach to benchmark evaluation**.
787
+ Let's explore some of the key issues with current BAT practices:
788
  """
789
  )
790
 
791
  st.markdown(
792
  """
793
+ - **Lack of Standard Methodologies:** BAT lacks standardized procedures for benchmark and model selection, hindering reproducibility and comparability across studies.
794
+ Researchers often make arbitrary choices, leading to results that are difficult to interpret and build upon.
795
  """
796
  )
797
 
798
  st.image(
799
  "images/motivation.png",
800
+ caption="**Example: Model Selection Impacts BAT Conclusions.** Kendall-tau correlations between the LMSys Arena benchmark and three others demonstrate how agreement varies significantly depending on the subset of models considered. This highlights the need for standardized model selection in BAT.",
801
  use_column_width=True,
802
  )
803
 
804
  st.markdown(
805
  """
806
+ - **Arbitrary Selection of Reference Benchmarks:** The choice of reference benchmarks in BAT is often subjective and lacks a clear rationale. Using different reference benchmarks can lead to widely varying agreement scores, making it difficult to draw robust conclusions about a target benchmark's validity.
807
  """
808
  )
809
  st.markdown(
810
  """
811
+ - **Inadequate Model Representation:** BAT often relies on a limited set of models that may not adequately represent the diversity of modern language models. This can lead to biased agreement scores that favor certain model types and fail to provide a comprehensive view of benchmark performance.
812
  """
813
  )
814
 
815
  st.image(
816
  "images/pointplot_granularity_matters.png",
817
+ caption="**Example: Agreement Varies with Model Range.** Mean correlation between benchmarks shows that agreement tends to increase with the number of models considered and is generally lower for closely ranked models (blue lines). This highlights the importance of considering multiple granularities in BAT.",
818
  use_column_width=True,
819
  )
820
 
821
  st.markdown(
822
  """
823
+ - **Overemphasis on Correlation Metrics:** BAT often relies heavily on correlation metrics without fully considering their limitations or the context of their application. While correlation can be informative, it's crucial to remember that high correlation doesn't automatically imply that benchmarks measure the same underlying construct.
824
  """
825
  )
826
 
827
  st.markdown(
828
  """
829
+ The BenchBench Leaderboard tackles these challenges by implementing a standardized and transparent approach to BAT, promoting consistency and facilitating meaningful comparisons between benchmarks.
830
+ By adopting the best practices embedded in the leaderboard, the research community can enhance the reliability and utility of benchmarks for evaluating and advancing language models.
831
  """
832
  )
833
 
834
 
835
  st.image(
836
  "images/ablations.png",
837
+ caption="**BenchBench's Standardized Approach Reduces Variance.** This ablation study demonstrates that following the best practices implemented in BenchBench significantly reduces the variance of BAT results, leading to more robust and reliable conclusions.",
838
  use_column_width=True,
839
  )