Diangle committed
Commit 034b3ba
Parent: 0fe2e9c

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -32,8 +32,8 @@ from transformers import CLIPTokenizer, CLIPTextModelWithProjection
 
 search_sentence = "a basketball player performing a slam dunk"
 
-model = CLIPTextModelWithProjection.from_pretrained("Diangle/clip4clip-webvid")
-tokenizer = CLIPTokenizer.from_pretrained("Diangle/clip4clip-webvid")
+model = CLIPTextModelWithProjection.from_pretrained("Searchium-ai/clip4clip-webvid150k")
+tokenizer = CLIPTokenizer.from_pretrained("Searchium-ai/clip4clip-webvid150k")
 
 inputs = tokenizer(text=search_sentence, return_tensors="pt")
 outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
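For reference, the snippet above is only a fragment of the README's usage example; the next hunk's header shows it ends with `print("final output: ", final_output)`. A minimal self-contained sketch with the updated repo ID might look like the following. The L2-normalization step producing `final_output` is an assumption (that part of the file is not shown in this diff), though it is the usual form for cosine-similarity retrieval:

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModelWithProjection

search_sentence = "a basketball player performing a slam dunk"

# Repo ID updated by this commit.
model = CLIPTextModelWithProjection.from_pretrained("Searchium-ai/clip4clip-webvid150k")
tokenizer = CLIPTokenizer.from_pretrained("Searchium-ai/clip4clip-webvid150k")

inputs = tokenizer(text=search_sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(input_ids=inputs["input_ids"],
                    attention_mask=inputs["attention_mask"])

# Assumption: `final_output` is the L2-normalized projected text embedding.
text_embeds = outputs.text_embeds
final_output = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
print("final output: ", final_output)
```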
@@ -46,14 +46,14 @@ print("final output: ", final_output)
 
 ### Extracting Video Embeddings:
 
-An additional notebook, ["GSI_VideoRetrieval_VideoEmbedding.ipynb"](https://huggingface.co/Diangle/clip4clip-webvid/blob/main/Notebooks/GSI_VideoRetrieval_VideoEmbedding.ipynb), provides instructions for extracting video embeddings and includes the necessary tools for preprocessing videos.
+An additional notebook, ["GSI_VideoRetrieval_VideoEmbedding.ipynb"](https://huggingface.co/Searchium-ai/clip4clip-webvid150k/blob/main/Notebooks/GSI_VideoRetrieval_VideoEmbedding.ipynb), provides instructions for extracting video embeddings and includes the necessary tools for preprocessing videos.
 
 
 ## Model Intended Use
 
 This model is intended for use in large-scale video-text retrieval applications.
 
-To illustrate its functionality, refer to the accompanying [**Video Search Space**](https://huggingface.co/spaces/Diangle/Clip4Clip-webvid), which provides a search demonstration on a vast collection of approximately 1.5 million videos.
+To illustrate its functionality, refer to the accompanying [**Video Search Space**](https://huggingface.co/spaces/Searchium-ai/Video-Search), which provides a search demonstration on a vast collection of approximately 1.5 million videos.
 This interactive demo showcases the model's capability to effectively retrieve videos based on text queries, highlighting its potential for handling substantial video datasets.
 
 ## Motivation
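The video-embedding notebook referenced above is not part of this commit, so its preprocessing code is not visible here. As a rough sketch only: CLIP4Clip-style video embeddings are commonly produced by uniformly sampling frames, encoding each with the CLIP vision tower, and mean-pooling ("meanP"). Everything below is hypothetical rather than the notebook's code: the `read_frames` helper, the frame count, and the assumption that the same repo hosts the vision weights and preprocessor config.

```python
import cv2
import numpy as np
import torch
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

def read_frames(path: str, num_frames: int = 12) -> list:
    """Uniformly sample RGB frames from a video file (hypothetical helper)."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in np.linspace(0, total - 1, num_frames, dtype=int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

# Assumes the repo also carries the vision tower and preprocessor config.
processor = CLIPImageProcessor.from_pretrained("Searchium-ai/clip4clip-webvid150k")
vision_model = CLIPVisionModelWithProjection.from_pretrained("Searchium-ai/clip4clip-webvid150k")

frames = read_frames("example.mp4")
pixel_values = processor(images=frames, return_tensors="pt").pixel_values
with torch.no_grad():
    frame_embeds = vision_model(pixel_values=pixel_values).image_embeds

# Mean-pool per-frame embeddings and L2-normalize (CLIP4Clip "meanP" style).
video_embed = frame_embeds.mean(dim=0)
video_embed = video_embed / video_embed.norm()
```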
@@ -82,7 +82,7 @@ We evaluate R1, R5, R10, MedianR, and MeanR on:
 | Binarized CLIP4Clip trained on 150k Webvid with rerank100 | 50.56 | 76.39 | 83.51 | 1.0 | 43.2964 |
 
 For an elaborate description of the evaluation, refer to the notebook
-[GSI_VideoRetrieval-Evaluation](https://huggingface.co/Diangle/clip4clip-webvid/blob/main/Notebooks/GSI_VideoRetrieval-Evaluation.ipynb).
+[GSI_VideoRetrieval-Evaluation](https://huggingface.co/Searchium-ai/clip4clip-webvid150k/blob/main/Notebooks/GSI_VideoRetrieval-Evaluation.ipynb).
 
 <div id="footnote1">
 
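For readers unfamiliar with the metrics in the table (R1/R5/R10 are recall@k in percent; MedianR and MeanR are the median and mean rank of the correct video), here is a generic sketch of how they are typically computed from a query-by-video similarity matrix. This is illustrative only, not the evaluation notebook's code:

```python
import numpy as np

def retrieval_metrics(sim: np.ndarray) -> dict:
    """Compute R@1/5/10, MedianR, and MeanR from an (N, N) similarity matrix
    where sim[i, j] scores query i against video j and the ground-truth
    match for query i is video i."""
    order = np.argsort(-sim, axis=1)  # indices sorted by descending score
    # 1-based rank of the correct video for each query.
    ranks = np.array([np.where(order[i] == i)[0][0] + 1 for i in range(len(sim))])
    return {
        "R1": float((ranks <= 1).mean() * 100),
        "R5": float((ranks <= 5).mean() * 100),
        "R10": float((ranks <= 10).mean() * 100),
        "MedianR": float(np.median(ranks)),
        "MeanR": float(ranks.mean()),
    }

# Usage example with random scores (diagonal is ground truth):
print(retrieval_metrics(np.random.rand(100, 100)))
```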
@@ -93,7 +93,7 @@ For an elaborate description of the evaluation, refer to the notebook
 
 
 ## Acknowledgements
-Acknowledging Diana Mazenko of [Searchium](https://www.searchium.ai) for adapting and loading the model to Hugging Face, and for creating a Hugging Face [**SPACE**](https://huggingface.co/spaces/Diangle/Clip4Clip-webvid) for a large-scale video-search demo.
+Acknowledging Diana Mazenko of [Searchium](https://www.searchium.ai) for adapting and loading the model to Hugging Face, and for creating a Hugging Face [**SPACE**](https://huggingface.co/spaces/Searchium-ai/Video-Search) for a large-scale video-search demo.
 
 Acknowledgments also to Luo et al. for their comprehensive work on CLIP4Clip and their openly available code.
 
 
 