---
language: en
tags:
- tvp
license: other
datasets:
- charades
---

# TVP base model

The TVP model was proposed in [Text-Visual Prompting for Efficient 2D Temporal Video Grounding](https://arxiv.org/abs/2303.04995) ([PDF](https://arxiv.org/pdf/2303.04995.pdf)) by Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, and Ke Ding. The goal of this model is to incorporate trainable prompts into both the visual inputs and the textual features of a 2D temporal video grounding (TVG) model.

TVP was accepted at the [CVPR'23](https://cvpr2023.thecvf.com/) conference.

## Model description

The abstract from the paper is the following:
In this paper, we study the problem of temporal video grounding (TVG), which aims to predict the starting/ending time points of moments described by a text sentence within a long untrimmed video. Benefiting from fine-grained 3D visual features, the TVG techniques have achieved remarkable progress in recent years. However, the high complexity of 3D convolutional neural networks (CNNs) makes extracting dense 3D visual features time-consuming, which calls for intensive memory and computing resources. Towards efficient TVG, we propose a novel text-visual prompting (TVP) framework, which incorporates optimized perturbation patterns (that we call ‘prompts’) into both visual inputs and textual features of a TVG model. In sharp contrast to 3D CNNs, we show that TVP allows us to effectively co-train vision encoder and language encoder in a 2D TVG model and improves the performance of cross-modal feature fusion using only low-complexity sparse 2D visual features. Further, we propose a Temporal-Distance IoU (TDIoU) loss for efficient learning of TVG. Experiments on two benchmark datasets, Charades-STA and ActivityNet Captions datasets, empirically show that the proposed TVP significantly boosts the performance of 2D TVG (e.g., 9.79% improvement on Charades-STA and 30.77% improvement on ActivityNet Captions) and achieves 5× inference acceleration over TVG using 3D visual features.
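
To make the prompting idea more concrete, below is a minimal, illustrative sketch of a pad-style visual prompt: a learnable border that is added to every sampled 2D frame before it reaches the vision encoder. This is not the released implementation (that lives in `transformers` as `TvpModel`); the pad width, image size, and the additive form of the prompt are assumptions for the example, and the text-side prompts are omitted.

```python
import torch
import torch.nn as nn


class FramePadPrompt(nn.Module):
    """Toy pad-style visual prompt: a learnable border added to each 2D frame.

    Illustrative only; the released TVP model defines its own prompt shape,
    initialization, and placement.
    """

    def __init__(self, pad: int = 96, image_size: int = 448):
        super().__init__()
        inner = image_size - 2 * pad
        # One shared set of border parameters for all frames and all videos.
        self.top = nn.Parameter(torch.zeros(1, 3, pad, image_size))
        self.bottom = nn.Parameter(torch.zeros(1, 3, pad, image_size))
        self.left = nn.Parameter(torch.zeros(1, 3, inner, pad))
        self.right = nn.Parameter(torch.zeros(1, 3, inner, pad))
        self.pad = pad

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (num_frames, 3, image_size, image_size)
        inner = frames.size(2) - 2 * self.pad
        center = frames.new_zeros(1, 3, inner, inner)
        middle = torch.cat([self.left, center, self.right], dim=3)  # (1, 3, inner, W)
        prompt = torch.cat([self.top, middle, self.bottom], dim=2)  # (1, 3, H, W)
        return frames + prompt  # the interior of each frame is left unchanged


frames = torch.randn(48, 3, 448, 448)  # e.g. 48 sampled frames at 448x448
prompted = FramePadPrompt()(frames)
print(prompted.shape)  # torch.Size([48, 3, 448, 448])
```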

## Intended uses & limitations (TODO)

You can use the raw model for temporal video grounding.

### How to use

Here is how to use this model to get the grounding logits for a given video and text query in PyTorch:
```python
import av
import cv2
import numpy as np
import torch
from huggingface_hub import hf_hub_download
from transformers import AutoProcessor, AutoModel


def pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps):
    """
    Convert the video from its original fps to the target_fps and decode the video with PyAV decoder.
    Returns:
        frames (list): decoded frames from the video. Returns None if no
            video stream was found.
        fps (float): the number of frames per second of the video.
    """
    fps = float(container.streams.video[0].average_rate)
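    # Figure out which span of source frames corresponds to the requested clip at target_fps.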
    clip_size = sampling_rate * num_frames / target_fps * fps
    delta = max(container.streams.video[0].frames - clip_size, 0)
    start_idx = delta * clip_idx / num_clips
    end_idx = start_idx + clip_size - 1
    timebase = container.streams.video[0].duration / container.streams.video[0].frames
    video_start_pts = int(start_idx * timebase)
    video_end_pts = int(end_idx * timebase)
    stream_name = {"video": 0}
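    # Back the seek target off slightly; the backward seek lands on a keyframe at or before it.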
    seek_offset = max(video_start_pts - 1024, 0)
    container.seek(seek_offset, any_frame=False, backward=True, stream=container.streams.video[0])
    frames = {}
    for frame in container.decode(**stream_name):
        if frame.pts < video_start_pts:
            continue
        frames[frame.pts] = frame
        if frame.pts > video_end_pts:
            # keep one frame past the clip end, then stop decoding
            break
    frames = [frames[pts] for pts in sorted(frames)]

    return frames, fps


def decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps):
    """
    Decode the video and perform temporal sampling.
    Args:
        container (container): pyav container.
        sampling_rate (int): frame sampling rate (interval between two sampled frames).
        num_frames (int): number of frames to sample.
        clip_idx (int): if clip_idx is -1, perform random temporal sampling.
            If clip_idx is larger than -1, uniformly split the video to num_clips
            clips, and select the clip_idx-th video clip.
        num_clips (int): overall number of clips to uniformly sample from the given video.
        target_fps (int): the input video may have different fps, convert it to
            the target video fps before frame sampling.
    Returns:
        frames (tensor): decoded frames from the video.
    """
    assert clip_idx >= -2, "Not a valid clip_idx {}".format(clip_idx)
    frames, fps = pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps)
    clip_size = sampling_rate * num_frames / target_fps * fps
    index = torch.linspace(0, clip_size - 1, num_frames)
    index = torch.clamp(index, 0, len(frames) - 1).long().tolist()
    frames = [frames[idx] for idx in index]
    frames = [frame.to_rgb().to_ndarray() for frame in frames]
    frames = torch.from_numpy(np.stack(frames))

    return frames


file = hf_hub_download(repo_id="Intel/tvp_demo", filename="0A8ZT.mp4", repo_type="dataset")

model = AutoModel.from_pretrained("Intel/tvp-base")

decoder_kwargs = dict(
    container=av.open(file, metadata_errors="ignore"),
    sampling_rate=1,
    num_frames=model.config.num_frm,
    clip_idx=0,
    num_clips=1,
    target_fps=3,
)
raw_sampled_frms = decode(**decoder_kwargs)
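# (num_frames, height, width, channels) -> (num_frames, channels, height, width)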
raw_sampled_frms = raw_sampled_frms.permute(0, 3, 1, 2)

processor = AutoProcessor.from_pretrained("Intel/tvp-base")
data = processor(
    text=["person turn a light on."], videos=list(raw_sampled_frms.numpy()), return_tensors="pt", max_text_length=100
)

output = model(**data)

print(f"The model's output is {output}")

def get_video_duration(filename):
    cap = cv2.VideoCapture(filename)
    if cap.isOpened():
        rate = cap.get(cv2.CAP_PROP_FPS)
        frame_num = cap.get(cv2.CAP_PROP_FRAME_COUNT)
        cap.release()
        return frame_num / rate
    return -1

duration = get_video_duration(file)
timestamp = output["logits"].tolist()
start, end = round(timestamp[0][0] * duration, 1), round(timestamp[0][1] * duration, 1)
print(f"The time slot of the video corresponding to the text is from {start}s to {end}s")
```
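
Since the processor, model, sampled frames, and `duration` are already in memory after running the snippet above, you can ground additional text queries against the same clip. The query sentences below are made up for illustration:

```python
# Reuses `model`, `processor`, `raw_sampled_frms`, and `duration` from the snippet above.
for query in ["person opens a door.", "person sits down on the couch."]:
    inputs = processor(
        text=[query], videos=list(raw_sampled_frms.numpy()), return_tensors="pt", max_text_length=100
    )
    with torch.no_grad():
        logits = model(**inputs)["logits"]
    start, end = (round(v * duration, 1) for v in logits[0].tolist())
    print(f"'{query}': {start}s - {end}s")
```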

### Limitations and bias

TODO

## Training data

The TVP model was pretrained on the following public dataset:
- [Charades](https://prior.allenai.org/projects/charades)

## Training procedure

### Preprocessing

TODO

### Pretraining

TODO

## Evaluation results

Please refer to [Table 2 of the paper](https://arxiv.org/pdf/2303.04995.pdf) for TVP's performance on the temporal video grounding task.

### BibTeX entry and citation info
```bibtex
@inproceedings{zhang2023text,
  title={Text-visual prompting for efficient 2d temporal video grounding},
  author={Zhang, Yimeng and Chen, Xin and Jia, Jinghan and Liu, Sijia and Ding, Ke},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={14794--14804},
  year={2023}
}
```