wbi-sg committed
Commit f2e4d8c
Parent: 106d1fb

Upload folder using huggingface_hub

Files changed (2)
  1. README.md +54 -0
  2. pytorch_model.bin +3 -0
README.md ADDED
---
tags:
- flair
- entity-mention-linker
---

## sapbert-ncbi-taxonomy

Biomedical entity mention linking for UMLS.
We use this model for species, since the NCBI Taxonomy is contained in UMLS:

- Model: [cambridgeltl/SapBERT-from-PubMedBERT-fulltext](https://huggingface.co/cambridgeltl/SapBERT-from-PubMedBERT-fulltext)
- Dictionary: [NCBI Taxonomy](https://ftp.ncbi.nih.gov/pub/taxonomy/new_taxdump/new_taxdump.zip)

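In the NCBI Taxonomy dump linked above, names live in `names.dmp`, where fields are separated by `\t|\t` and each record ends with `\t|`. As a minimal, illustrative sketch (Flair's `ncbi-taxonomy` dictionary handles this internally; the helper name and sample lines below are made up), such a file could be parsed into `(tax_id, name)` pairs like this:

```python
# Sketch only: parse NCBI Taxonomy "names.dmp" records into (tax_id, name) pairs.
# Fields are separated by "\t|\t"; each record is terminated by "\t|".
# Flair's built-in "ncbi-taxonomy" dictionary does this for you.

def parse_names_dmp(lines):
    entries = []
    for line in lines:
        line = line.rstrip("\n")
        if line.endswith("\t|"):
            line = line[: -len("\t|")]
        tax_id, name_txt, _unique_name, name_class = line.split("\t|\t")
        # Keep only scientific names for a species dictionary
        if name_class == "scientific name":
            entries.append((tax_id, name_txt))
    return entries

# Hand-made sample records (not from the real dump)
sample = [
    "9606\t|\tHomo sapiens\t|\t\t|\tscientific name\t|\n",
    "9606\t|\thuman\t|\t\t|\tgenbank common name\t|\n",
    "9739\t|\tDelphinidae\t|\t\t|\tscientific name\t|\n",
]
print(parse_names_dmp(sample))
```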
### Demo: How to use in Flair

Requires:

- **[Flair](https://github.com/flairNLP/flair/) >= 0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)

```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
from flair.tokenization import SciSpacyTokenizer

sentence = Sentence(
    "The mutation in the ABCD1 gene causes X-linked adrenoleukodystrophy, "
    "a neurodegenerative disease, which is exacerbated by exposure to high "
    "levels of mercury in dolphin populations.",
    use_tokenizer=SciSpacyTokenizer()
)

# Load HunFlair to detect the entity mentions we want to link
tagger = Classifier.load("hunflair-species")
tagger.predict(sentence)

# Load the linker and its dictionary
linker = EntityMentionLinker.load("species-linker")
linker.predict(sentence)

# Print the linking results for each detected entity mention
for span in sentence.get_spans(tagger.label_type):
    for link in span.get_labels(linker.label_type):
        print(f"{span.text} -> {link.value}")
```

As an alternative to downloading the precomputed model (which requires considerable storage), you can also build the model and compute the embeddings for the dictionary yourself:

```python
linker = EntityMentionLinker.build(
    "cambridgeltl/SapBERT-from-PubMedBERT-fulltext",
    dictionary_name_or_path="ncbi-taxonomy",
    entity_type="species",
    hybrid_search=False,
)
```

This reduces the download size at the cost of computation. Note that `hybrid_search=False` is required here, since SapBERT, unlike BioSyn, is trained only for dense retrieval.
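Dense retrieval here means the linker matches a mention to a dictionary entry purely by nearest-neighbor search over embedding vectors, with no sparse (character n-gram) component. A toy illustration of that idea with hand-made vectors (in the real linker, SapBERT produces the embeddings; the vectors and IDs below are invented for illustration):

```python
import math

# Toy illustration of dense retrieval: link a mention embedding to the
# closest dictionary embedding by cosine similarity. The vectors below
# are hand-made stand-ins for SapBERT embeddings.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented dictionary: entity ID -> embedding vector
dictionary = {
    "9606:Homo sapiens": [0.9, 0.1, 0.0],
    "9739:Delphinidae": [0.1, 0.8, 0.2],
}

def link(mention_vec):
    # Return the dictionary entry with the highest cosine similarity
    return max(dictionary, key=lambda k: cosine(mention_vec, dictionary[k]))

print(link([0.2, 0.7, 0.1]))  # closest to the Delphinidae vector
```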
pytorch_model.bin ADDED

```
version https://git-lfs.github.com/spec/v1
oid sha256:b49b76e1e3f9d632b7af11d9e9938ba229634e0fc96244fa0f3ee26f36ff6282
size 9888167855
```
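The file above is a Git LFS pointer, not the weights themselves: the actual binary (9888167855 bytes, roughly 9.9 GB) is fetched from LFS storage using the `oid` and `size` fields. A small sketch of reading such a pointer (the helper name is made up; real tooling such as `git lfs` does this for you):

```python
# Sketch: parse a Git LFS pointer file into its key/value fields.
# Pointer files contain "version", "oid", and "size" lines, space-separated.

def parse_lfs_pointer(text):
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:b49b76e1e3f9d632b7af11d9e9938ba229634e0fc96244fa0f3ee26f36ff6282
size 9888167855
"""

fields = parse_lfs_pointer(pointer)
print(fields["size"])  # size of pytorch_model.bin in bytes
```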