ldwang committed on
Commit
617ca48
1 Parent(s): 1dafa38
Files changed (1)
  1. README.md +34 -16
README.md CHANGED
@@ -2631,25 +2631,36 @@ FlagEmbedding can map any text to a low-dimensional dense vector which can be us
  And it also can be used in vector databases for LLMs.
 
  ************* 🌟**Updates**🌟 *************
- - 09/15/2023: Release [paper](https://arxiv.org/pdf/2309.07597.pdf) and [dataset](https://data.baai.ac.cn/details/BAAI-MTP).
- - 09/12/2023: New Release:
+ - 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire:
+ - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released.
+ - 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released.
+ - 09/12/2023: New models:
   - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding models. We recommend using/fine-tuning them to re-rank the top-k documents returned by embedding models.
   - **Update embedding model**: release `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance their retrieval ability without instruction.
- - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support adding an instruction during fine-tuning.
+
+
+ <details>
+ <summary>More</summary>
+ <!-- ### More -->
+
+ - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support adding an instruction during fine-tuning.
  - 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- - 08/05/2023: Release base-scale and small-scale models, **best performance among models of the same size 🤗**
- - 08/02/2023: Release `bge-large-*` (short for BAAI General Embedding) models, **ranked 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
- - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
+ - 08/05/2023: Release base-scale and small-scale models, **best performance among models of the same size 🤗**
+ - 08/02/2023: Release `bge-large-*` (short for BAAI General Embedding) models, **ranked 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
+ - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
+
+ </details>
 
 
  ## Model List
 
  `bge` is short for `BAAI general embedding`.
 
- | Model | Language | | Description | query instruction for retrieval\* |
+ | Model | Language | | Description | query instruction for retrieval [1] |
  |:-------------------------------|:--------:| :--------:| :--------:|:--------:|
- | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
- | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
+ | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
+ | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
+ | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
  | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
  | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
  | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
@@ -2664,11 +2675,15 @@ And it also can be used in vector databases for LLMs.
  | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
 
 
- \*: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed: just use the original query directly. In all cases, **no instruction** needs to be added to passages.
+ [1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed: just use the original query directly. In all cases, **no instruction** needs to be added to passages.
 
- \**: Different from an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, cross-encoders are widely used to re-rank the top-k documents retrieved by other, simpler models.
+ [2\]: Different from an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, cross-encoders are widely used to re-rank the top-k documents retrieved by other, simpler models.
  For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results.
 
+ All models have been uploaded to the Huggingface Hub; you can find them at https://huggingface.co/BAAI.
+ If you cannot open the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models .
+
+
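To make the retrieve-then-rerank recipe from [2\] concrete, here is a minimal sketch using the `FlagEmbedding` package; the query, corpus, and candidate count are placeholders, and the model names are taken from the table above:

```python
# Minimal sketch: bi-encoder retrieval first, cross-encoder re-ranking second.
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

query = "what is a panda?"  # placeholder query
corpus = [  # placeholder passages
    "The giant panda is a bear species endemic to China.",
    "Paris is the capital of France.",
    "BGE is short for BAAI general embedding.",
]

# Stage 1: embed the query (with instruction) and the passages (without one),
# then score candidates by inner product.
model = FlagModel(
    "BAAI/bge-large-en-v1.5",
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
)
q_emb = model.encode_queries([query])   # the instruction is prepended to queries only
p_emb = model.encode(corpus)            # passages never get the instruction
scores = (q_emb @ p_emb.T)[0]
candidates = np.argsort(-scores)[:100]  # keep up to the top-100 candidates

# Stage 2: re-rank the candidates with the cross-encoder and keep the top 3.
reranker = FlagReranker("BAAI/bge-reranker-large", use_fp16=True)
rerank_scores = reranker.compute_score([[query, corpus[i]] for i in candidates])
top3 = [corpus[i] for i in candidates[np.argsort(rerank_scores)[::-1][:3]]]
print(top3)
```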
  ## Frequently asked questions
 
  <details>
@@ -2705,7 +2720,11 @@ please select an appropriate similarity threshold based on the similarity distri
  <summary>3. When does the query instruction need to be used</summary>
 
  <!-- ### When does the query instruction need to be used -->
-
+
+ For `bge-*-v1.5`, we improved the retrieval ability when no instruction is used;
+ omitting the instruction causes only a slight degradation in retrieval performance compared with using it.
+ So, for convenience, you can generate embeddings without instruction in all cases.
+
  For a retrieval task that uses short queries to find long related documents,
  it is recommended to add instructions to these short queries.
  **The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
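As a sketch of how to compare the two settings with the `FlagEmbedding` package (the texts are placeholders and the model choice is an assumption):

```python
# Minimal sketch: score the same query/passage pair with and without the
# query instruction, then keep whichever setting works better on your task.
from FlagEmbedding import FlagModel

query = "what do pandas eat?"                                     # placeholder
passages = ["The giant panda feeds almost entirely on bamboo."]   # placeholder

model = FlagModel(
    "BAAI/bge-large-en-v1.5",
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
)
with_instruction = model.encode_queries([query]) @ model.encode(passages).T
without_instruction = model.encode([query]) @ model.encode(passages).T
print(with_instruction, without_instruction)
```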
@@ -2965,7 +2984,7 @@ which is more accurate than embedding model (i.e., bi-encoder) but more time-con
  Therefore, it can be used to re-rank the top-k documents returned by an embedding model.
  We train the cross-encoder on multilingual pair data;
  the data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
- More details pelease refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
+ For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
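Since the reranker shares the embedding model's fine-tuning data format, here is a minimal sketch of one training record, assuming the JSONL `query`/`pos`/`neg` layout used in the linked fine-tuning examples (all texts are placeholders):

```python
# Minimal sketch: one JSON object per line, each holding a query, positive
# passages, and (hard-)negative passages.
import json

sample = {
    "query": "what is a panda?",                                     # placeholder
    "pos": ["The giant panda is a bear species endemic to China."],  # relevant passage(s)
    "neg": ["Paris is the capital of France."],                      # hard negative(s)
}

with open("toy_finetune_data.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```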
 
 
  ## Contact
@@ -2975,7 +2994,8 @@ You also can email Shitao Xiao(stxiao@baai.ac.cn) and Zheng Liu(liuzheng@baai.ac
 
  ## Citation
 
- If you find our work helpful, please cite us:
+ If you find this repository useful, please consider giving it a star :star: and a citation:
+
  ```
  @misc{bge_embedding,
  title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
@@ -2990,5 +3010,3 @@ If you find our work helpful, please cite us:
  ## License
  FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
 
-
-
 