---
language: en
tags:
- tvp
license: other
datasets:
- charades
---

# TVP base model

The TVP model was proposed in [Text-Visual Prompting for Efficient 2D Temporal Video Grounding](https://arxiv.org/abs/2303.04995) by Yimeng Zhang, Xin Chen, Jinghan Jia, Sijia Liu, Ke Ding. The goal of this model is to incorporate trainable prompts into both visual inputs and textual features to solve temporal video grounding (TVG) problems. It was introduced in [this paper](https://arxiv.org/pdf/2303.04995.pdf) and was accepted to the [CVPR'23](https://cvpr2023.thecvf.com/) conference.

## Model description

The abstract from the paper is the following:

In this paper, we study the problem of temporal video grounding (TVG), which aims to predict the starting/ending time points of moments described by a text sentence within a long untrimmed video. Benefiting from fine-grained 3D visual features, the TVG techniques have achieved remarkable progress in recent years. However, the high complexity of 3D convolutional neural networks (CNNs) makes extracting dense 3D visual features time-consuming, which calls for intensive memory and computing resources. Towards efficient TVG, we propose a novel text-visual prompting (TVP) framework, which incorporates optimized perturbation patterns (that we call ‘prompts’) into both visual inputs and textual features of a TVG model. In sharp contrast to 3D CNNs, we show that TVP allows us to effectively co-train vision encoder and language encoder in a 2D TVG model and improves the performance of cross-modal feature fusion using only low-complexity sparse 2D visual features. Further, we propose a Temporal-Distance IoU (TDIoU) loss for efficient learning of TVG. Experiments on two benchmark datasets, Charades-STA and ActivityNet Captions datasets, empirically show that the proposed TVP significantly boosts the performance of 2D TVG (e.g., 9.79% improvement on Charades-STA and 30.77% improvement on ActivityNet Captions) and achieves 5× inference acceleration over TVG using 3D visual features.
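
The grounding objective behind the Temporal-Distance IoU (TDIoU) loss mentioned above can be made concrete with a plain temporal IoU between predicted and ground-truth spans. The snippet below is a minimal illustrative sketch, not the paper's exact TDIoU formulation (which extends this idea with additional terms); the `temporal_iou` helper is hypothetical:

```python
import torch


def temporal_iou(pred, target):
    # pred, target: float tensors of shape (batch, 2) holding (start, end) pairs
    # expressed as fractions of the video length, e.g. (0.2, 0.6).
    inter = (torch.min(pred[:, 1], target[:, 1]) - torch.max(pred[:, 0], target[:, 0])).clamp(min=0)
    union = (torch.max(pred[:, 1], target[:, 1]) - torch.min(pred[:, 0], target[:, 0])).clamp(min=1e-6)
    return inter / union


# A predicted span (0.2, 0.6) vs. a ground-truth span (0.3, 0.7):
# overlap 0.3, union 0.5, so IoU = 0.6; 1 - IoU could serve as a simple grounding loss.
print(temporal_iou(torch.tensor([[0.2, 0.6]]), torch.tensor([[0.3, 0.7]])))  # tensor([0.6000])
```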

## Intended uses & limitations (TODO)

You can use the raw model for temporal video grounding.

### How to use

Here is how to use this model to get the logits of a given video and text in PyTorch:

```python
import av
import cv2
import numpy as np
import torch
from huggingface_hub import hf_hub_download
from transformers import AutoProcessor, TvpForVideoGrounding


def pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps):
    '''
    Convert the video from its original fps to the target_fps and decode the video with the PyAV decoder.
    Returns:
        frames (tensor): decoded frames from the video. Returns None if no video stream was found.
        fps (float): the number of frames per second of the video.
    '''
    fps = float(container.streams.video[0].average_rate)
    clip_size = sampling_rate * num_frames / target_fps * fps
    delta = max(container.streams.video[0].frames - clip_size, 0)
    start_idx = delta * clip_idx / num_clips
    end_idx = start_idx + clip_size - 1
    timebase = container.streams.video[0].duration / container.streams.video[0].frames
    video_start_pts = int(start_idx * timebase)
    video_end_pts = int(end_idx * timebase)
    stream_name = {"video": 0}
    seek_offset = max(video_start_pts - 1024, 0)
    container.seek(seek_offset, any_frame=False, backward=True, stream=container.streams.video[0])
    frames = {}
    for frame in container.decode(**stream_name):
        if frame.pts < video_start_pts:
            continue
        if frame.pts <= video_end_pts:
            frames[frame.pts] = frame
        else:
            # Keep one frame past the end point, then stop decoding.
            frames[frame.pts] = frame
            break
    frames = [frames[pts] for pts in sorted(frames)]
    return frames, fps


def decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps):
    '''
    Decode the video and perform temporal sampling.
    Args:
        container (container): pyav container.
        sampling_rate (int): frame sampling rate (interval between two sampled frames).
        num_frames (int): number of frames to sample.
        clip_idx (int): if clip_idx is -1, perform random temporal sampling.
            If clip_idx is larger than -1, uniformly split the video into num_clips
            clips and select the clip_idx-th video clip.
        num_clips (int): overall number of clips to uniformly sample from the given video.
        target_fps (int): the input video may have a different fps; convert it to
            the target fps before frame sampling.
    Returns:
        frames (tensor): decoded frames from the video.
    '''
    assert clip_idx >= -2, "Not a valid clip_idx {}".format(clip_idx)
    frames, fps = pyav_decode(container, sampling_rate, num_frames, clip_idx, num_clips, target_fps)
    clip_size = sampling_rate * num_frames / target_fps * fps
    index = torch.linspace(0, clip_size - 1, num_frames)
    index = torch.clamp(index, 0, len(frames) - 1).long().tolist()
    frames = [frames[idx] for idx in index]
    frames = [frame.to_rgb().to_ndarray() for frame in frames]
    frames = torch.from_numpy(np.stack(frames))
    return frames


def get_resize_size(image, max_size):
    '''
    Compute an aspect-ratio-preserving resize so that the longer side equals max_size.
    Args:
        image: np.ndarray or tensor whose last two dimensions are (height, width)
        max_size: the max size of height and width
    Returns:
        size (dict): {"height": int, "width": int}
    Note the height/width order difference
    >>> pil_img = Image.open("raw_img_tensor.jpg")
    >>> pil_img.size
    (640, 480)  # (width, height)
    >>> np_img = np.array(pil_img)
    >>> np_img.shape
    (480, 640, 3)  # (height, width, 3)
    '''
    height, width = image.shape[-2:]
    if height >= width:
        ratio = width * 1.0 / height
        new_height = max_size
        new_width = new_height * ratio
    else:
        ratio = height * 1.0 / width
        new_width = max_size
        new_height = new_width * ratio
    size = {"height": int(new_height), "width": int(new_width)}
    return size


# Download a sample video from the Hub and load the TVP grounding model.
file = hf_hub_download(repo_id="Intel/tvp_demo", filename="AK2KG.mp4", repo_type="dataset")
model = TvpForVideoGrounding.from_pretrained("Intel/tvp-base")

decoder_kwargs = dict(
    container=av.open(file, metadata_errors="ignore"),
    sampling_rate=1,
    num_frames=model.config.num_frames,
    clip_idx=0,
    num_clips=1,
    target_fps=3,
)
# Decode and sample frames, then reorder from (num_frames, height, width, channels)
# to (num_frames, channels, height, width).
raw_sampled_frms = decode(**decoder_kwargs).permute(0, 3, 1, 2)

text = "a person is sitting on a bed."
processor = AutoProcessor.from_pretrained("Intel/tvp-base")

size = get_resize_size(raw_sampled_frms, model.config.max_img_size)
model_inputs = processor(
    text=[text], videos=list(raw_sampled_frms.numpy()), return_tensors="pt", max_text_length=100, size=size
)

model_inputs["pixel_values"] = model_inputs["pixel_values"].to(model.dtype)
# Example ground-truth annotation; when provided, the model also returns a loss.
model_inputs["labels"] = torch.tensor([18.1, 0.0, 6.8])
output = model(**model_inputs)
print(f"The model's output is {output}")


def get_video_duration(filename):
    cap = cv2.VideoCapture(filename)
    if cap.isOpened():
        rate = cap.get(cv2.CAP_PROP_FPS)
        frame_num = cap.get(cv2.CAP_PROP_FRAME_COUNT)
        duration = frame_num / rate
        return duration
    return -1


# The predicted logits are normalized (start, end) fractions of the full video;
# scale them by the video duration to get timestamps in seconds.
duration = get_video_duration(file)
timestamp = output["logits"].tolist()
start, end = round(timestamp[0][0] * duration, 1), round(timestamp[0][1] * duration, 1)
print(f"The time slot of the video corresponding to the text \"{text}\" is from {start}s to {end}s")
```
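
For plain inference, the `labels` entry is only needed if you also want the model to return a loss. If a GPU is available, you can move the model and the processed inputs to it; the snippet below is a minimal sketch reusing `model` and `model_inputs` from the example above:

```python
# Inference-only sketch, reusing `model` and `model_inputs` built above.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
model_inputs = {k: v.to(device) if isinstance(v, torch.Tensor) else v for k, v in model_inputs.items()}

model.eval()
with torch.no_grad():
    output = model(**model_inputs)
```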

### Limitations and bias

TODO

## Training data

The TVP model was pretrained on public datasets:
- [charades](https://prior.allenai.org/projects/charades)

## Training procedure

### Preprocessing

TODO

### Pretraining

TODO

## Evaluation results

Please refer to [Table 2](https://arxiv.org/pdf/2303.04995.pdf) of the paper for TVP's performance on the temporal video grounding task.

### BibTeX entry and citation info

```bibtex
@inproceedings{zhang2023text,
  title={Text-visual prompting for efficient 2d temporal video grounding},
  author={Zhang, Yimeng and Chen, Xin and Jia, Jinghan and Liu, Sijia and Ding, Ke},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={14794--14804},
  year={2023}
}
```