victormiller committed
Commit a9b8a99 · verified · 1 Parent(s): 0f2f612

Update overview.py

Files changed (1)
  1. overview.py +3 -0
overview.py CHANGED
@@ -269,6 +269,8 @@ dedup_text1 = P("Our deduplication process began with 61.8 TB of high-quality, f
 dedup_text2 = P("After the global near-deduplication of all 87 CommonCrawl dumps and other curated data, we removed around 85% of the total documents. This leaves us with approximately 4.24 trillion deduplicated tokens, which aligns with what FineWeb has reported for their iterative global deduplication. Along with the list of duplicated documents to delete, our deduplication code also saves some metadata about the duplicate clusters that we find. We save statistics about every duplicate cluster we find, with the document ID of the document we retain from the cluster as the key and with a value capturing the distribution of the duplicates within the cluster over the CommonCrawl dumps (identified by the first 2 digits of every document ID). This way, we always have information about the duplicates we have deleted, allowing us to upsample any data distribution we want for training.")
 dedup_text3 = P("During deduplication, it is not feasible to store all the duplicate clusters we form, but we do save some samples at every size. Here are some observations we made by examining these sample duplicate clusters:")
 
+list = UL(Li("Smaller components tend to have more overlap in their MinHash bands. The smallest components, which are essentially pairs, consist of exact duplicate documents that local exact deduplication missed."))
+
 def overview():
     return Div(Section(
 
@@ -293,6 +295,7 @@ def overview():
         dedup_text1,
         dedup_text2,
         dedup_text3,
+        list,
         id="inner-text",
         )
     )
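
The dedup_text2 paragraph in this diff describes the metadata written alongside the deletion list: for every duplicate cluster, the ID of the retained document becomes the key, and the value captures how that cluster's duplicates are distributed over the CommonCrawl dumps, read off the first two digits of each document ID. A minimal sketch of that bookkeeping is below; the function name build_cluster_stats and the input shape (a mapping from the retained document ID to the list of duplicate IDs marked for deletion) are hypothetical and not taken from the repository.

from collections import Counter

def build_cluster_stats(clusters):
    # Hypothetical sketch, not the project's actual deduplication code.
    # `clusters`: retained document ID -> list of duplicate document IDs
    # slated for deletion. The source dump is assumed to be encoded in the
    # first 2 digits of every document ID, as described in dedup_text2.
    stats = {}
    for kept_id, duplicate_ids in clusters.items():
        # Count duplicates per CommonCrawl dump (first 2 digits of the ID).
        per_dump = Counter(doc_id[:2] for doc_id in duplicate_ids)
        stats[kept_id] = {
            "cluster_size": len(duplicate_ids) + 1,  # duplicates + retained doc
            "duplicates_per_dump": dict(per_dump),
        }
    return stats

# Example: one retained document whose duplicates span two dumps.
print(build_cluster_stats({"1400000042": ["1400000077", "2300000013"]}))

Recording the per-dump distribution, rather than only a total count, is what makes the upsampling mentioned at the end of dedup_text2 possible: the deleted duplicates can be re-weighted dump by dump when assembling a training mix.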