arxiv:2411.04335

GazeGen: Gaze-Driven User Interaction for Visual Content Generation

Published on Nov 7
· Submitted by He-Yen on Nov 8
Abstract

We present GazeGen, a user interaction system that generates visual content (images and videos) at locations indicated by the user's eye gaze. GazeGen allows intuitive manipulation of visual content by targeting regions of interest with gaze. Using advanced techniques in object detection and generative AI, GazeGen performs gaze-controlled addition, deletion, and repositioning of image objects, changes their surface materials, and converts static images into videos. Central to GazeGen is the DFT Gaze (Distilled and Fine-Tuned Gaze) agent, an ultra-lightweight model with only 281K parameters that performs accurate, real-time gaze predictions tailored to individual users' eyes on small edge devices. GazeGen is the first system to combine visual content generation with real-time gaze estimation, made possible by DFT Gaze; this real-time gaze estimation enables a range of visual content generation tasks, all controlled by the user's gaze. The input to DFT Gaze is the user's eye images, while the inputs for visual content generation are the user's view and the gaze point predicted by DFT Gaze. To achieve efficient gaze prediction, we derive the small model from a 10x larger model via novel knowledge distillation and personal adaptation techniques. We integrate knowledge distillation with a masked autoencoder to develop a compact yet powerful gaze estimation model, which is further fine-tuned with Adapters, enabling highly accurate, personalized gaze predictions with minimal user input. DFT Gaze ensures low-latency, precise gaze tracking, supporting a wide range of gaze-driven tasks. We validate DFT Gaze on the AEA and OpenEDS2020 benchmarks, demonstrating low angular gaze error and low latency on an edge device (Raspberry Pi 4). Furthermore, we describe applications of GazeGen, illustrating its versatility and effectiveness across usage scenarios.
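The abstract trains a small student model against a 10x larger teacher and reports angular gaze error as the evaluation metric. As a minimal sketch (the actual DFT Gaze losses and feature spaces are not specified here, so `distillation_loss` and its `alpha` weighting are assumptions), the metric and a generic feature-matching distillation objective might look like:

```python
import numpy as np

def angular_error_deg(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Angular error in degrees between predicted and ground-truth 3D gaze vectors.

    Both inputs are normalized so the dot product gives the cosine of the angle.
    """
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def distillation_loss(student_feat, teacher_feat, pred_gaze, gt_gaze, alpha=0.5):
    """Hypothetical combined objective: the student matches the teacher's
    features (distillation term) while also regressing the true gaze."""
    feat_term = np.mean((student_feat - teacher_feat) ** 2)
    gaze_term = np.mean((pred_gaze - gt_gaze) ** 2)
    return alpha * feat_term + (1.0 - alpha) * gaze_term
```

Orthogonal gaze vectors give a 90-degree error, and a student that exactly reproduces both the teacher's features and the ground-truth gaze drives the loss to zero.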

Community

Paper submitter

We propose GazeGen, a system that lets users generate visual content simply by looking, using gaze as a natural, hands-free control method.

  1. Gaze-Based Image Editing: Users can edit images by focusing on specific parts of the image, allowing:
  • Adding objects.
  • Replacing or repositioning objects.
  • Applying one object’s style to another object the user focuses on.
  2. Creating Videos with Gaze: Users can create videos by looking at specific areas, allowing actions such as:
  • Adding animated objects where they’re looking.
  • Replacing static objects with animated ones.
  3. Real-Time Gaze Prediction: Our lightweight gaze estimation model, DFT Gaze (around 300KB), provides real-time gaze estimation with just 360ms latency on edge devices such as a Raspberry Pi 4.
  4. Object Recognition with Gaze: GazeGen recognizes object categories based on where the user looks, allowing object-specific interactions.
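The interactions above share one flow: the predicted gaze point selects a region in the user's view, a detector recognizes the object there, and a generative model applies the edit. A minimal glue sketch, assuming hypothetical callables for each stage (the real DFT Gaze, detector, and generator interfaces are not given here):

```python
from typing import Any, Callable, Tuple

def gaze_driven_edit(
    eye_image: Any,
    scene_image: Any,
    predict_gaze: Callable[[Any], Tuple[float, float]],  # e.g. DFT Gaze: eye image -> (x, y)
    detect_at: Callable[[Any, Tuple[float, float]], Any],  # detector: (scene, point) -> region
    apply_edit: Callable[[Any, Any, str], Any],  # generator: (scene, region, instruction) -> scene
    instruction: str = "add",
) -> Any:
    """Hypothetical GazeGen-style loop: gaze selects the region of interest,
    the detector identifies it, and the generative model edits it."""
    point = predict_gaze(eye_image)          # real-time gaze prediction
    region = detect_at(scene_image, point)   # object recognition at the gaze point
    return apply_edit(scene_image, region, instruction)  # gaze-controlled generation
```

Any of the editing modes listed above (add, replace, reposition, restyle, animate) would plug in as a different `instruction` to the same pipeline.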

