maidalun1020 committed on
Commit
3b18090
1 Parent(s): 0ca1bc5

Update README.md

Files changed (1)
  1. README.md +16 -16
README.md CHANGED
@@ -12,7 +12,7 @@ license: apache-2.0
 <h1 align="center">BCEmbedding: Bilingual and Crosslingual Embedding for RAG</h1>
 
 <p align="center">
- <a href="https://github.com/netease-youdao/BCEmbedding/LICENSE">
   <img src="https://img.shields.io/badge/license-Apache--2.0-yellow">
 </a>
 <a href="https://twitter.com/YDopensource">
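
The recurring fix in this commit is the same across every hunk: links of the form `https://github.com/netease-youdao/BCEmbedding/<path>` point at nothing on GitHub, because an in-repo file needs `blob/master/` between the repository root and the path. A minimal sketch of that rewrite, as a hypothetical helper (the function name and regex are ours, not part of the commit):

```python
import re

REPO = "https://github.com/netease-youdao/BCEmbedding"

def fix_repo_link(url: str) -> str:
    """Insert 'blob/master/' after the repo root of an in-repo file link.

    Hypothetical helper illustrating the commit's fix; links that already
    continue with 'blob/' are left untouched.
    """
    return re.sub(rf"^{re.escape(REPO)}/(?!blob/)", REPO + "/blob/master/", url)

# Example: the broken LICENSE badge link from the first hunk.
print(fix_repo_link(REPO + "/LICENSE"))
```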
@@ -109,11 +109,11 @@ Existing embedding models often encounter performance challenges in bilingual an
 
 - ***2024-01-03***: **Model Releases** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) and [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) are available.
 - ***2024-01-03***: **Eval Datasets** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - Evaluate the performance of RAG, using [LlamaIndex](https://github.com/run-llama/llama_index).
- - ***2024-01-03***: **Eval Datasets** [[Details](https://github.com/netease-youdao/BCEmbedding/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - Evaluate the performance of crosslingual semantic representation, using [MTEB](https://github.com/embeddings-benchmark/mteb).
 
 - ***2024-01-03***: **模型发布** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)和[bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)已发布.
 - ***2024-01-03***: **RAG评测数据** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - 基于[LlamaIndex](https://github.com/run-llama/llama_index)的RAG评测数据已发布。
- - ***2024-01-03***: **跨语种语义表征评测数据** [[详情](https://github.com/netease-youdao/BCEmbedding/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - 基于[MTEB](https://github.com/embeddings-benchmark/mteb)的跨语种评测数据已发布.
 
 ## 🍎 Model List
 
@@ -146,7 +146,7 @@ pip install -v -e .
 
 ### Quick Start
 
- Use `EmbeddingModel` by `BCEmbedding`, and `cls` [pooler](https://github.com/netease-youdao/BCEmbedding/BCEmbedding/models/embedding.py#L24) is default.
 
 ```python
 from BCEmbedding import EmbeddingModel
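
The quick-start hunk above mentions that the `cls` pooler is the default (the diff cuts the snippet off after the import). As a self-contained aside, here is a minimal sketch of what `cls` pooling does, using plain Python lists as a stand-in for the model's token-level hidden states; the helper names below are illustrative only, not BCEmbedding's API:

```python
def cls_pool(hidden_states):
    # `cls` pooling: the first token's ([CLS]) vector is the sentence embedding.
    return hidden_states[0]

def mean_pool(hidden_states):
    # The common alternative: average every token's vector, dimension by dimension.
    n = len(hidden_states)
    return [sum(tok[i] for tok in hidden_states) / n
            for i in range(len(hidden_states[0]))]

hidden = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 tokens, embedding dim 2
print(cls_pool(hidden))   # first token's vector
print(mean_pool(hidden))  # per-dimension average over all tokens
```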
@@ -234,9 +234,9 @@ The evaluation tasks contain ***12 datasets*** of **"Reranking"**.
 
 #### 3. Metrics Visualization Tool
 
- We provide a one-click script to summarize evaluation results of `embedding` and `reranker` models as [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/Docs/EvaluationSummary/embedding_eval_summary.md) and [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/Docs/EvaluationSummary/reranker_eval_summary.md).
 
- 我们提供了`embedding`和`reranker`模型的指标可视化一键脚本,输出一个markdown文件,详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/Docs/EvaluationSummary/embedding_eval_summary.md)和[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/Docs/EvaluationSummary/reranker_eval_summary.md)。
 
 ```bash
 python BCEmbedding/evaluation/mteb/summarize_eval_results.py --results_dir {your_embedding_results_dir | your_reranker_results_dir}
@@ -287,12 +287,12 @@ Then, summarize the evaluation results by:
 python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_reproduce_results
 ```
 
- Results Reproduced from the LlamaIndex Blog can be checked in ***[Reproduced Summary of RAG Evaluation](https://github.com/netease-youdao/BCEmbedding/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***, with some obvious ***conclusions***:
 - In `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
 - With fixing the embedding model, our `bce-reranker-base_v1` achieves the best performance.
 - ***The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA.***
 
- 输出的指标汇总详见 ***[LlamaIndex RAG评测结果复现](https://github.com/netease-youdao/BCEmbedding/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***。从该复现结果中,可以看出:
 - 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`比其他embedding模型效果都要好。
 - 在固定embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好。
 - ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。***
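
The `summarize_eval_results.py` invocations above collapse per-dataset metric files into one markdown summary table. A hypothetical sketch of that aggregation step (the schema, names, and dummy scores here are ours; the real scripts read metric files from `--results_dir` instead):

```python
def summarize(results):
    """Render {model: {dataset: score}} as a markdown table, one row per model.

    A stand-in for what a summarize_eval_results.py-style script emits;
    missing (model, dataset) cells are shown as '-'.
    """
    datasets = sorted({d for scores in results.values() for d in scores})
    lines = ["| model | " + " | ".join(datasets) + " |",
             "| --- | " + " | ".join("---" for _ in datasets) + " |"]
    for model, scores in sorted(results.items()):
        cells = ["%.2f" % scores[d] if d in scores else "-" for d in datasets]
        lines.append("| " + model + " | " + " | ".join(cells) + " |")
    return "\n".join(lines)

# Dummy model/dataset names and scores, for illustration only.
print(summarize({"model_x": {"dataset_a": 0.88, "dataset_b": 0.79}}))
```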
@@ -337,14 +337,14 @@ The summary of multiple domains evaluations can be seen in <a href=#1-multiple-d
 ***NOTE:***
 - Our ***bce-embedding-base_v1*** outperforms other opensource embedding models with various model size.
 - ***114 datasets*** of **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"** in `["en", "zh", "en-zh", "zh-en"]` setting.
- - The [crosslingual evaluation datasets](https://github.com/netease-youdao/BCEmbedding/BCEmbedding/evaluation/c_mteb/Retrieval.py) we released belong to `Retrieval` task.
- - More evaluation details please check [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/Docs/EvaluationSummary/embedding_eval_summary.md).
 
 ***要点:***
 - 对比所有开源的各种规模的embedding模型,***bce-embedding-base_v1*** 表现最好。
 - 评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的共 ***114个数据集***。
- - 我们开源的[跨语种语义表征评测数据](https://github.com/netease-youdao/BCEmbedding/BCEmbedding/evaluation/c_mteb/Retrieval.py)属于`Retrieval`任务。
- - 更详细的评测结果详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/Docs/EvaluationSummary/embedding_eval_summary.md)。
 
 #### 2. Reranker Models
 
@@ -357,12 +357,12 @@ The summary of multiple domains evaluations can be seen in <a href=#1-multiple-d
 ***NOTE:***
 - Our ***bce-reranker-base_v1*** outperforms other opensource reranker models.
 - ***12 datasets*** of **"Reranking"** in `["en", "zh", "en-zh", "zh-en"]` setting.
- - More evaluation details please check [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/Docs/EvaluationSummary/reranker_eval_summary.md).
 
 ***要点:***
 - ***bce-reranker-base_v1*** 优于其他开源reranker模型。
 - 评测包含 **"Reranking"** 任务的 ***12个数据集***。
- - 更详细的评测结果详见[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/Docs/EvaluationSummary/reranker_eval_summary.md)
 
 ### RAG Evaluations in LlamaIndex
 
@@ -401,7 +401,7 @@ Welcome to scan the QR code below and join the WeChat group.
 
 欢迎大家扫码加入官方微信交流群。
 
- <img src="https://github.com/netease-youdao/BCEmbedding/Docs/assets/Wechat.jpg" width="20%" height="auto">
 
 ## ✏️ Citation
 
@@ -420,7 +420,7 @@ If you use `BCEmbedding` in your research or project, please feel free to cite a
 
 ## 🔐 License
 
- `BCEmbedding` is licensed under [Apache 2.0 License](https://github.com/netease-youdao/BCEmbedding/LICENSE)
 
 ## 🔗 Related Links
 
 
 <h1 align="center">BCEmbedding: Bilingual and Crosslingual Embedding for RAG</h1>
 
 <p align="center">
+ <a href="https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE">
   <img src="https://img.shields.io/badge/license-Apache--2.0-yellow">
 </a>
 <a href="https://twitter.com/YDopensource">
 
 
 - ***2024-01-03***: **Model Releases** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) and [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) are available.
 - ***2024-01-03***: **Eval Datasets** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - Evaluate the performance of RAG, using [LlamaIndex](https://github.com/run-llama/llama_index).
+ - ***2024-01-03***: **Eval Datasets** [[Details](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - Evaluate the performance of crosslingual semantic representation, using [MTEB](https://github.com/embeddings-benchmark/mteb).
 
 - ***2024-01-03***: **模型发布** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)和[bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)已发布.
 - ***2024-01-03***: **RAG评测数据** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - 基于[LlamaIndex](https://github.com/run-llama/llama_index)的RAG评测数据已发布。
+ - ***2024-01-03***: **跨语种语义表征评测数据** [[详情](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - 基于[MTEB](https://github.com/embeddings-benchmark/mteb)的跨语种评测数据已发布.
 
 ## 🍎 Model List
 
 
 
 ### Quick Start
 
+ Use `EmbeddingModel` by `BCEmbedding`, and `cls` [pooler](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/models/embedding.py#L24) is default.
 
 ```python
 from BCEmbedding import EmbeddingModel
 
 
 #### 3. Metrics Visualization Tool
 
+ We provide a one-click script to summarize evaluation results of `embedding` and `reranker` models as [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md) and [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
 
+ 我们提供了`embedding`和`reranker`模型的指标可视化一键脚本,输出一个markdown文件,详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)和[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)。
 
 ```bash
 python BCEmbedding/evaluation/mteb/summarize_eval_results.py --results_dir {your_embedding_results_dir | your_reranker_results_dir}
 
 python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_reproduce_results
 ```
 
+ Results Reproduced from the LlamaIndex Blog can be checked in ***[Reproduced Summary of RAG Evaluation](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***, with some obvious ***conclusions***:
 - In `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
 - With fixing the embedding model, our `bce-reranker-base_v1` achieves the best performance.
 - ***The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA.***
 
+ 输出的指标汇总详见 ***[LlamaIndex RAG评测结果复现](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***。从该复现结果中,可以看出:
 - 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`比其他embedding模型效果都要好。
 - 在固定embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好。
 - ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。***
 
 ***NOTE:***
 - Our ***bce-embedding-base_v1*** outperforms other opensource embedding models with various model size.
 - ***114 datasets*** of **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"** in `["en", "zh", "en-zh", "zh-en"]` setting.
+ - The [crosslingual evaluation datasets](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py) we released belong to `Retrieval` task.
+ - More evaluation details please check [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md).
 
 ***要点:***
 - 对比所有开源的各种规模的embedding模型,***bce-embedding-base_v1*** 表现最好。
 - 评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的共 ***114个数据集***。
+ - 我们开源的[跨语种语义表征评测数据](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)属于`Retrieval`任务。
+ - 更详细的评测结果详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)。
 
 #### 2. Reranker Models
 
 
 ***NOTE:***
 - Our ***bce-reranker-base_v1*** outperforms other opensource reranker models.
 - ***12 datasets*** of **"Reranking"** in `["en", "zh", "en-zh", "zh-en"]` setting.
+ - More evaluation details please check [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
 
 ***要点:***
 - ***bce-reranker-base_v1*** 优于其他开源reranker模型。
 - 评测包含 **"Reranking"** 任务的 ***12个数据集***。
+ - 更详细的评测结果详见[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)
 
 ### RAG Evaluations in LlamaIndex
 
 
 
 欢迎大家扫码加入官方微信交流群。
 
+ <img src="https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/assets/Wechat.jpg" width="20%" height="auto">
 
 ## ✏️ Citation
 
 
 
 
 ## 🔐 License
 
+ `BCEmbedding` is licensed under [Apache 2.0 License](https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE)
 
 ## 🔗 Related Links
 