hyf015 committed
Commit 0c49901
1 Parent(s): 8b1346a

Update README.md

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -18,5 +18,6 @@ size_categories:
  This repository contains the data and benchmark code of the following paper:
  > **EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World**<br>
  > [Yifei Huang](https://hyf015.github.io/), [Guo Chen](https://scholar.google.com/citations?user=lRj3moAAAAAJ), [Jilan Xu](https://scholar.google.com/citations?user=mf2U64IAAAAJ), [Mingfang Zhang](https://scholar.google.com/citations?user=KnQO5GcAAAAJ), [Lijin Yang](), [Baoqi Pei](), [Hongjie Zhang](https://scholar.google.com/citations?user=Zl_2sZYAAAAJ), [Lu Dong](), [Yali Wang](https://scholar.google.com/citations?hl=en&user=hD948dkAAAAJ), [Limin Wang](https://wanglimin.github.io), [Yu Qiao](http://mmlab.siat.ac.cn/yuqiao/index.html)<br>
- > IEEE/CVF Conference on Computer Vision and Pattern Recognition (**CVPR**), 2024<br>
- > Presented by [OpenGVLab](https://github.com/OpenGVLab) in Shanghai AI Lab<br>
+ > IEEE/CVF Conference on Computer Vision and Pattern Recognition (**CVPR**), 2024<br>
+
+ EgoExoLearn is a dataset that emulates the human demonstration-following process, in which individuals record egocentric videos while executing tasks guided by exocentric-view demonstration videos. Focusing on potential applications in daily assistance and professional support, EgoExoLearn contains 120 hours of egocentric and demonstration video captured in daily-life scenarios and specialized laboratories. Along with the videos, we record high-quality gaze data and provide detailed multimodal annotations, forming a playground for modeling the human ability to bridge asynchronous procedural actions from different viewpoints.
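
Since the repository hosts the data and benchmark code, a minimal sketch of fetching the files from the Hub with `huggingface_hub` is shown below. The repository id `hyf015/EgoExoLearn` and the local target directory are assumptions (inferred from the committer's username, not stated in this commit); adjust them to the actual dataset id.

```python
# Minimal sketch, assuming the dataset lives at the Hub id "hyf015/EgoExoLearn"
# (inferred from the committer's username; replace with the actual repository id).
from huggingface_hub import snapshot_download

# Download a full snapshot of the dataset repository (videos, gaze data, annotations)
# into a local folder; snapshot_download returns the path where files were placed.
local_path = snapshot_download(
    repo_id="hyf015/EgoExoLearn",  # assumed repository id
    repo_type="dataset",
    local_dir="./EgoExoLearn",     # hypothetical local target directory
)
print(f"Dataset snapshot available at: {local_path}")
```

Re-running the call only fetches files that are missing locally, so interrupted downloads of the large video data can be resumed.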