How do your annotations for FineWeb2 compare to your teammates'?
I started contributing some annotations to the FineWeb2 collaborative annotation sprint and I wanted to know if my labelling trends were similar to those of my teammates.
I did some analysis and I wasn't surprised to see that I'm a bit harsher in my evaluations than my teammates.
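The kind of comparison I ran can be sketched roughly like this: group the annotations by annotator and compare each person's label distribution. This is a minimal illustration on toy data; the `annotator` and `label` column names are hypothetical stand-ins for whatever fields your annotation export actually uses.

```python
import pandas as pd

# Hypothetical annotation records; in practice you would load these
# from the dataset you contributed to on the Hugging Face Hub.
records = pd.DataFrame(
    {
        "annotator": ["me", "me", "me", "teammate", "teammate", "teammate"],
        "label": ["bad", "bad", "good", "good", "good", "bad"],
    }
)

# Share of each label per annotator, to compare labelling tendencies.
distribution = (
    records.groupby("annotator")["label"]
    .value_counts(normalize=True)
    .unstack(fill_value=0)
)
print(distribution)
```

A higher share of negative labels for your row relative to your teammates' rows would suggest you are the stricter annotator.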
Do you want to see how your annotations compare to others'? Go to this Gradio space: nataliaElv/fineweb2_compare_my_annotations and enter the dataset that you've contributed to and your Hugging Face username.