jzhoubu committed on
Commit 175d1ec
1 Parent(s): ced11e6

Update README.md

Files changed (1): README.md (+3 -2)
README.md CHANGED
@@ -85,7 +85,7 @@ print(results)
 # Output:
 # SearchResults(
 #     ids=tensor([[0, 1, 2]], device='cuda:0'),
-#     scores=tensor([[97.2458, 39.7507, 37.6407]], device='cuda:0')
+#     scores=tensor([[61.5432, 10.3108, 8.6709]], device='cuda:0')
 # )
 
 query_id = 0
@@ -98,6 +98,7 @@ print(top1_psg)
 
 ```
 
+
 ## Building Bag-of-token Index for Search
 
 Our framework supports using tokenization as an index (i.e., a bag-of-token index), which runs on CPU and reduces indexing time and storage requirements by over 90% compared to an embedding-based index.
@@ -122,7 +123,7 @@ print(results)
 # Output:
 # SearchResults(
 #     ids=tensor([0, 2, 1], device='cuda:3'),
-#     scores=tensor([97.2964, 39.7844, 37.6955], device='cuda:0')
+#     scores=tensor([61.5432, 10.3108, 8.6709], device='cuda:0')
 # )
 ```
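The "bag-of-token index" the README describes can be pictured as a plain inverted index over passage tokens: indexing a passage only requires tokenizing it, which is why it runs on CPU and is far cheaper than computing dense embeddings. The sketch below is an illustrative reconstruction under that reading, not the repository's actual implementation; the whitespace tokenizer and overlap-count scoring are simplified stand-ins for the framework's real tokenizer and lexical scoring.

```python
from collections import defaultdict

def tokenize(text):
    # Stand-in tokenizer: lowercase whitespace split. The real framework
    # would use its model's subword tokenizer instead.
    return text.lower().split()

def build_bot_index(passages):
    # Map each token to the set of passage ids containing it.
    # Storing only token occurrences (no dense vectors) is what makes a
    # bag-of-token index cheap to build and store.
    index = defaultdict(set)
    for pid, passage in enumerate(passages):
        for tok in set(tokenize(passage)):
            index[tok].add(pid)
    return index

def search(index, query, num_passages):
    # Score each passage by how many distinct query tokens it contains,
    # then return passage ids ranked by descending score.
    scores = [0] * num_passages
    for tok in set(tokenize(query)):
        for pid in index.get(tok, ()):
            scores[pid] += 1
    return sorted(range(num_passages), key=lambda pid: -scores[pid])

passages = [
    "the cat sat on the mat",
    "dogs chase cats",
    "quantum computing basics",
]
index = build_bot_index(passages)
print(search(index, "cat on a mat", len(passages)))  # → [0, 1, 2]
```

Searching stays fast because only the postings lists for the query's own tokens are touched, mirroring how the framework avoids any embedding computation at index time.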