taesiri committed on
Commit
7c0927e
1 Parent(s): 08805b6

Upload papers/2312/2312.05277.tex with huggingface_hub

Files changed (1)
  1. papers/2312/2312.05277.tex +631 -0
papers/2312/2312.05277.tex ADDED
@@ -0,0 +1,631 @@
1
+ \documentclass{article}
2
+
3
+
4
+
5
+
6
+
7
+
8
+
9
+
10
+
11
+
12
+
13
+ \usepackage[final]{neurips_2023}
14
+
15
+
16
+
17
+
18
+ \usepackage{amsmath}
19
+ \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{hyperref} \usepackage{url} \usepackage{booktabs} \usepackage{amsfonts} \usepackage{nicefrac} \usepackage{microtype} \usepackage{xcolor} \usepackage{natbib}
20
+ \usepackage{graphicx}
21
+ \usepackage[ruled,noend]{algorithm2e}
22
+ \usepackage{amssymb}
23
+
24
+
25
+
26
+ \newcommand{\Stanford}{$^\diamond$}
27
+ \newcommand{\USC}{$^\dagger$}
28
+ \newcommand{\Bosch}{$^\S$}
29
+
30
+ \title{3D Copy-Paste: Physically Plausible Object Insertion for Monocular 3D Detection}
31
+
32
+
33
+
34
+
35
+
36
+
37
+
38
+
39
+
40
+
41
+ \author{Yunhao Ge\Stanford\USC,
42
+ Hong-Xing Yu\Stanford,
43
+ Cheng Zhao\Bosch,
44
+ Yuliang Guo\Bosch,
45
+ Xinyu Huang\Bosch,
46
+ Liu Ren\Bosch, \\
47
+ \textbf{Laurent Itti}\USC,
48
+ \textbf{Jiajun Wu}\Stanford\\
49
+ {\Stanford}Stanford University~~~
50
+ {\USC}University of Southern California\\
51
+ {\Bosch}Bosch Research North America, Bosch Center for Artificial Intelligence (BCAI)\\
52
+ \small\texttt{\{yunhaoge, koven, jiajunwu\}@cs.stanford.edu}~~~
53
+ \texttt{\{yunhaoge, itti\}@usc.edu}~~~\\
54
+ \small \texttt{\{Cheng.Zhao, Yuliang.Guo2, Xinyu.Huang, Liu.Ren\}@us.bosch.com}~~~
55
+ \\
56
+ }
57
+
58
+ \definecolor{MyWineRed}{rgb}{0.694,0.071, 0.149}
59
+ \newcommand{\ky}[1]{\textcolor{MyWineRed}{{[Koven: #1]}}}
60
+
61
+
62
+ \begin{document}
63
+
64
+ \maketitle
65
+
66
+
67
+
68
+
69
+ \begin{abstract}
70
+
71
+ A major challenge in monocular 3D object detection is the limited diversity and quantity of objects in real datasets.
72
+ While augmenting real scenes with virtual objects holds promise to improve both the diversity and quantity of the objects, it remains elusive due to the lack of an effective 3D object insertion method in complex real captured scenes.
73
+ In this work, we study augmenting complex real indoor scenes with virtual objects for monocular 3D object detection.
74
+ The main challenge is to automatically identify plausible physical properties for virtual assets (e.g., locations, appearances, sizes, etc.) in cluttered real scenes.
75
+ To address this challenge, we propose a physically plausible indoor 3D object insertion approach to automatically \emph{copy} virtual objects and \emph{paste} them into real scenes. The resulting objects in scenes have 3D bounding boxes with plausible physical locations and appearances. In particular, our method first identifies physically feasible locations and poses for the inserted objects to prevent collisions with the existing room layout. Subsequently, it estimates spatially-varying illumination for the insertion location, enabling the immersive blending of the virtual objects into the original scene with plausible appearances and cast shadows. We show that our augmentation method significantly improves existing monocular 3D object detection models and achieves state-of-the-art performance.
76
+ For the first time, we demonstrate that a physically plausible 3D object insertion, serving as a generative data augmentation technique, can lead to significant improvements for discriminative downstream tasks such as monocular 3D object detection. Project website: \textcolor{blue}{\url{https://gyhandy.github.io/3D-Copy-Paste/}}.
77
+
78
+ \end{abstract}
79
+
80
+
81
+
82
+ \section{Introduction}
83
+
84
+ Monocular indoor 3D object detection methods have shown promising results in various applications such as robotics and augmented reality \citep{yang2019cubeslam, chen2017multi}.
85
+ However, the deployment of these methods is potentially constrained by the limited diversity and quantity of objects in existing real datasets.
86
+ For example, in the SUN RGB-D dataset~\citep{song2015sun},
87
+ the bathtub category has fewer than 500 annotations, compared to over 19,000 for the chair category.
88
+ This may be due to the difficulty in acquiring and labeling substantial indoor scene datasets with diverse 3D object annotations \citep{silberman2012indoor, song2015sun, dai2017scannet}.
89
+
90
+ Data augmentation techniques have been widely utilized in 2D detection and segmentation tasks to improve the diversity and quantity of the available training data \citep{dwibedi2017cut, ge2022neural, ghiasi2021simple, ge2022dall, ge2023beyond}. However, it is non-trivial to scale 2D augmentation methods to 3D scenes due to physical constraints in real 3D scenes. In particular, technical challenges arise in maintaining physical plausibility with respect to:
91
+ (1) \textbf{Collision and Occlusion Handling}: In 3D data augmentation, handling collisions between objects is more challenging than in 2D data. Properly managing collisions is essential to prevent artifacts and ensure that objects appear as natural and coherent parts of the scene.
92
+ (2) \textbf{Illumination and Shading}: For 3D data, augmenting objects requires careful consideration of the lighting conditions in the scene to create realistic shading and reflections. This involves estimating the spatially-varying illumination and adapting the appearance of the inserted objects to maintain visual coherence.
93
+ (3) \textbf{Geometric Consistency}: In 3D data augmentation, maintaining geometric consistency is crucial to ensure that the augmented objects fit naturally within the scene. Unlike 2D augmentation, which deals with flat images, 3D augmentation must consider spatial relationships, object orientations, and their interaction with the surrounding environment.
94
+
95
+ \begin{figure}[t]
96
+ \centering
97
+ \includegraphics[width=\textwidth]{Fig/Fig-1.pdf}
98
+ \caption{Overall pipeline of physically plausible object insertion for monocular 3D object detection: Our approach \emph{copies} external 3D objects (e.g., from Objaverse \citep{deitke2022objaverse}) and \emph{pastes} them into indoor scene datasets (e.g., SUN RGB-D \citep{song2015sun}) in a physically plausible manner. The augmented indoor scene dataset, enriched with inserted 3D objects, is then used to train monocular 3D object detection models, resulting in significant performance improvements.}
99
+ \label{fig:overall}
100
+ \end{figure}
101
+
102
+
103
+
104
+
105
+
106
+ In this paper, we explore a novel approach, 3D Copy-Paste, to achieve 3D data augmentation in indoor scenes. We employ physically plausible indoor 3D object insertion to automatically generate large-scale annotated 3D objects with both plausible physical location and illumination.
107
+ Unlike outdoor scenarios, indoor environments present unique challenges: (1) complex spatial layouts, notably cluttered backgrounds and limited space for object placement, which require a meticulously crafted method for automated object positioning (ensuring realistic position, size, and pose), and (2) intricate lighting effects, such as soft shadows, inter-reflections, and long-range light source dependency, which necessitate sophisticated lighting considerations for harmonious object insertion.
108
+
109
+ Fig.~\ref{fig:overall} shows our overall pipeline. In our approach, we take advantage of existing large-scale 3D object datasets, from which we \emph{copy} simulated 3D objects and \emph{paste} them into real scenes. To address the challenges associated with creating physically plausible insertions, we employ a three-step process. First, we analyze the scene by identifying all suitable planes for 3D object insertion. Next, we estimate the object's pose and size, taking into account the insertion site to prevent collisions. Lastly, we estimate the spatially-varying illumination to render realistic shading and shadows for the inserted object, ensuring that it is seamlessly blended into the scene.
110
+
111
+ Our proposed 3D Copy-Paste approach augments existing indoor scene datasets, such as SUN RGB-D \citep{song2015sun}, with objects from large-scale 3D object datasets like Objaverse \citep{deitke2022objaverse}. It is an offline augmentation method that creates a new augmented dataset.
112
+ The monocular 3D object detection model ImVoxelNet \citep{rukhovich2022imvoxelnet}, trained on this augmented dataset, achieves new state-of-the-art performance on the challenging SUN RGB-D dataset.
113
+ We systematically evaluate the influence of the inserted objects' physical position and illumination on the downstream performance of the final monocular 3D object detection model.
114
+ Our results suggest that physically plausible 3D object insertion can serve as an effective generative data augmentation technique, leading to state-of-the-art performances in discriminative downstream tasks such as monocular 3D object detection.
115
+
116
+
117
+ We make three main contributions:
118
+ (1) We introduce 3D Copy-Paste, a novel physically plausible indoor object insertion technique for automatically generating large-scale annotated 3D objects. This approach ensures the plausibility of the objects' physical location, size, pose, and illumination within the scene.
119
+ (2) We demonstrate that training a monocular 3D object detection model on a dataset augmented using our 3D Copy-Paste technique results in state-of-the-art performance. Our results show that a physically plausible 3D object insertion method can serve as an effective generative data augmentation technique, leading to significant improvements in discriminative downstream monocular 3D object detection tasks.
120
+ (3) We conduct a systematic evaluation on the effect of location and illumination of the inserted objects on the performance of the downstream monocular 3D object detection model. This analysis provides valuable insights into the role of these factors in the overall effectiveness of our proposed approach.
121
+ %
122
+
123
+ \section{Related Works}
124
+ \subsection{Monocular 3D Object Detection}
125
+
126
+ Monocular 3D object detection estimates the 3D location, orientation, and dimensions (3D bounding box) of objects from a single 2D image. It has garnered significant attention in recent years due to its potential applications in autonomous driving, robotics, and augmented reality. There are many works on monocular 3D detection in driving scenarios, such as 3DOP~\citep{chen20153d}, MLFusion~\citep{xu2018multi}, M3D-RPN~\citep{brazil2019m3d}, MonoDIS~\citep{simonelli2019disentangling}, Pseudo-LiDAR~\citep{wang2019pseudo}, FCOS3D~\citep{wang2021fcos3d}, SMOKE~\citep{liu2020smoke}, RTM3D~\citep{li2020rtm3d}, PGD~\citep{wang2022probabilistic}, and CaDDN~\citep{reading2021categorical}. Among geometry-based approaches, MV3D \citep{chen2017multi} utilized both LiDAR point clouds and geometric cues from images for 3D object detection, and \cite{mousavian20173d} introduced a method that regresses object properties such as dimensions, orientation, and location from 2D bounding boxes using geometric constraints.
127
+ In the context of indoor scenes, multi-task learning has gained traction. Recent studies, including PointFusion by \cite{xu2018pointfusion}, have amalgamated 3D object detection with tasks like depth estimation or semantic segmentation to improve performance. Total3D \citep{nie2020total3dunderstanding} and Implicit3D \citep{zhang2021holistic} use end-to-end solutions to jointly reconstruct room layout, object bounding boxes and meshes from a single image.
128
+ ImVoxelNet \citep{rukhovich2022imvoxelnet} achieves state-of-the-art performance by projecting image features into voxels for monocular 3D object detection.
129
+
130
+ \subsection{3D Data Augmentation}
131
+
132
+ Data augmentation in 3D has become increasingly vital for enhancing performance across various 3D perception tasks. Most work focuses on outdoor scenes \citep{zhang2020exploring, lian2022exploring, abu2018augmented, chen2021geosim, tong20233d}. Geometric Transformations: \cite{wu20153d} applied rotations, translations, and scaling to augment the ModelNet dataset, improving classification and retrieval tasks.
133
+ Point Cloud Augmentation: \cite{engelcke2017vote3deep} proposed techniques such as random point removal, Gaussian noise, and point cloud interpolation for augmenting LiDAR datasets, enhancing object detection and segmentation performance.
134
+ Generative Model-based Augmentation: \cite{smith2017improved} used a conditional GAN to generate diverse and realistic 3D objects. Similarly, \cite{achlioptas2018learning} employed a VAE for learning a generative model of 3D shapes for shape completion and exploration tasks. However, while 3D generative models can achieve object-level augmentation, they are not scalable to scene-level augmentation. 2D generative models can produce highly realistic images, but they do not provide physically plausible 3D labels. 3D Common Corruptions \citep{kar20223d} uses 3D information to generate real-world corruptions for 2D datasets, which can be used to evaluate model robustness and as data augmentation for model training, but it does not support 3D detection because it does not introduce new 3D object content.
135
+
136
+ \subsection{Illumination Estimation}
137
+
138
+ Illumination estimation is a critical focus within computer vision research, given its crucial role in various applications. \cite{li2020inverse} addressed the inverse rendering problem for complex indoor scenes, estimating spatially-varying lighting, SVBRDF, and shape from a single image. Meanwhile, a differentiable ray tracing method combined with deep learning was proposed for the learning-based inverse rendering of indoor scenes \citep{zhu2022learning}. Additionally, research has been conducted on using deep learning for indoor lighting estimation, with methods like Deep Parametric Indoor Lighting Estimation offering enhanced accuracy and efficiency \cite{gardner2019deep}. Furthermore, \cite{wang2022neural} introduced Neural Light Field Estimation, a method that effectively models complex lighting conditions for virtual object insertion in street scenes. These studies underscore the potential of machine learning in improving illumination estimation capabilities in rendering and computer vision tasks.
139
+ \section{Methods}
140
+
141
+
142
+ \begin{figure}[t]
143
+ \centering
144
+ \includegraphics[width=\textwidth]{Fig/Fig-2.pdf}
145
+ \caption{3D Copy-Paste method overview: Our method (a) processes the input RGB image and depth data to reconstruct floor planes that can accommodate inserted objects. (b) Using the reconstructed planes and information about objects in the original scene, we estimate a physically plausible position, pose, and size for the inserted objects, ensuring they do not collide with existing objects. (c) We predict the spatially-varying lighting of the scene. (d) By registering the insertion position determined in (b) to the spatially-varying lighting, our light estimation module refines an HDR environment map that represents the lighting at the insertion location. (e) The insertion rendering module takes the position, pose, size, and lighting as input and inserts a 3D object into the real scene, adjusting the object's lighting and shadows accordingly to ensure it seamlessly integrates as a natural and coherent part of the scene.}
146
+ \label{fig:method}
147
+ \end{figure}
148
+
149
+
150
+ This section presents our proposed physically plausible indoor 3D object insertion approach. Fig.~\ref{fig:method} shows our 3D Copy-Paste method overview. Section~\ref{sec:3.1} addresses the question of ``where and how to place the object'', detailing the process of estimating suitable insertion positions, poses, and sizes for the objects while avoiding collisions with existing objects. Section~\ref{sec:3.2} explains ``what illumination should we add to the object'': we estimate the scene's spatially-varying illumination and render the inserted objects with realistic lighting and shadows. Section~\ref{sec:3.3} describes how we create an augmented dataset using the inserted objects and train monocular 3D object detection models.
151
+
152
+ \subsection{\textit{Where and how}: Physically Plausible Position, Pose, and Size Estimation}
153
+
154
+ This section describes handling the first challenge of avoiding collisions during insertion by estimating physically plausible position, pose, and size parameters.
155
+
156
+ \label{sec:3.1}
157
+ \subsubsection{Ground Plane Selection}
158
+ \label{sec:3.1.1}
159
+ Given a scene and a 3D object to insert, the initial question is where to place the object. To accommodate a new object, we must identify and understand the available regions where the object can be situated. We perform plane reconstruction to comprehend the scene's layout and subsequently, we estimate physically plausible key parameters such as position, size, and pose. Fig.~\ref{fig:method}(a) presents an overview of our plane reconstruction and selection module, which takes an RGB image and depth data as input and predicts all potential planes, then narrows down to the ground plane.
160
+
161
+ To get a rough plane reconstruction, we followed the plane extraction method using Agglomerative Hierarchical Clustering (AHC) described in \cite{feng2014fast}. There are three main steps: (1) we construct a graph with nodes and edges representing groups of points, obtained by dividing the point cloud (merging RGB with depth) into non-overlapping groups. (2) We then perform AHC on the organized graph to identify potential planes by merging nodes that belong to the same plane, continuing until the mean squared error of plane fitting surpasses a threshold. (3) We use a pixel-wise region-growing method to refine the detected planes. To further refine the extracted planes while preserving clear face textures and sharp features without losing geometric details, we utilize a back-end indoor plane optimization and reconstruction method described in \cite{wang2018plane}. Specifically, we first partition the entire dense mesh into different planar clusters based on the planes extracted with AHC, treating them as plane primitives. We then create a texture patch for each plane and sample points on it, followed by executing a global optimization process to maximize the photometric consistency of sampled points across frames by optimizing camera poses, plane parameters, and texture colors. Further, we optimize the mesh geometry by maximizing consistency between geometry and plane primitives, further preserving the original scene's sharp features, such as edges and corners of plane intersections. Finally, we get the reconstructed plane with the geometry parameters (e.g., surface normal).
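
For illustration, the plane-fitting error that drives the AHC merging step can be computed as in the sketch below: a least-squares plane is fitted to a group of points via SVD, and two groups are merged only while the fitting error of their union stays below a threshold. This is a simplified sketch, not the implementation of \cite{feng2014fast}; the threshold value is a placeholder.
\begin{verbatim}
import numpy as np

def fit_plane(points):
    """Least-squares plane fit. points: (N, 3) array.
    Returns (unit normal, offset d) with n . x + d = 0, and the MSE."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    d = -normal @ centroid
    mse = np.mean((centered @ normal) ** 2)
    return normal, d, mse

def can_merge(points_a, points_b, mse_threshold=1e-4):
    """AHC-style merge test: merge two point groups only if the plane
    fitted to their union still has a small fitting error."""
    _, _, mse = fit_plane(np.vstack([points_a, points_b]))
    return mse < mse_threshold
\end{verbatim}
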
162
+
163
+ To select a proper plane for insertion, we first identify all horizontal planes based on surface direction and the standard deviation along the Z-axis. Specifically, there are two constraints for considering a plane as horizontal: (1) The plane must have a surface normal aligned with the positive direction of the Z-axis (opposite of the gravity vector), and (2) the standard deviation along the Z-axis should be smaller than a predefined threshold. In our scenario, we aim to insert furniture into the scene, such as the ten interest classes in the SUN RGB-D dataset \citep{song2015sun}: sofa, bed, chair, desk, table, nightstand, dresser, bookshelf, toilet, and bathtub. Consequently, we must identify the floor plane by selecting the horizontal plane with the lowest average Z value among all detected horizontal planes.
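
A minimal sketch of this horizontal-plane filtering and floor selection is shown below; it assumes each reconstructed plane is given by its surface normal and member points, and the alignment and spread thresholds are illustrative placeholders rather than the exact values we use.
\begin{verbatim}
import numpy as np

def select_floor_plane(planes, up=np.array([0.0, 0.0, 1.0]),
                       normal_thresh=0.95, z_std_thresh=0.05):
    """planes: list of dicts {"normal": (3,), "points": (N, 3)}.
    Keeps planes whose normal points along +Z (opposite to gravity) and
    whose points have a small spread along Z, then returns the lowest one."""
    horizontal = []
    for plane in planes:
        n = plane["normal"] / np.linalg.norm(plane["normal"])
        z = plane["points"][:, 2]
        if n @ up > normal_thresh and z.std() < z_std_thresh:
            horizontal.append(plane)
    if not horizontal:
        return None
    # The floor is the horizontal plane with the lowest average height.
    return min(horizontal, key=lambda p: p["points"][:, 2].mean())
\end{verbatim}
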
164
+
165
+
166
+ \subsubsection{Constrained Insertion Parameter Search}
167
+ \label{sec:3.1.2}
168
+ To address the question of where and how to place the object, we estimate specific insertion parameters: position ($p$), size ($s$), and pose ($o$). We propose an efficient constrained insertion parameter searching algorithm to calculate plausible insertion parameters while avoiding collisions with existing objects in the scene (Algorithm~\ref{alg:insertion}). Given the reconstructed floor plane, we first determine the search space for each parameter. For position, we want the inserted object to touch the floor, so we find the 3D bounding box of the object and calculate the center of the bottom surface ($p$) as the optimization parameter of position. To prevent potential collisions between the inserted object and existing assets in the original scene, we search for a suitable position around the center of the reconstructed floor. As shown in Fig.~\ref{fig:method}(b), we first calculate the floor's center $c \gets (c_x, c_y, c_z)$, and set a search square, which uses twice the floor's standard deviation along X axis, $\sigma_x$, and Y axis, $\sigma_y$, as square width and length. The insertion position is sampled from a Uniform distribution inside the search square $p_x \sim \mathcal{U}[c_x - \sigma_x,c_x + \sigma_x]$ and $p_y \sim \mathcal{U}[c_y - \sigma_y,c_y + \sigma_y]$, $p \gets (p_x, p_y, c_z) $.
169
+ For size ($s$), we use the height of the object's 3D bounding box as the optimization parameter. For each object category, we first calculate the mean $m_h$ and standard deviation $\sigma_h$ of the heights of objects belonging to that category in the original scene dataset. We then assume the height follows a Normal distribution and sample from it: $s \sim \mathcal{N}(m_h, \sigma_h)$.
170
+ For the pose ($o$), we only allow the object to rotate along the Z-axis to maintain its stability. The optimization parameter is the rotation angle along the Z-axis, which follows a uniform distribution: $o \sim \mathcal{U}[-\pi,\pi]$.
171
+
172
+ Algorithm \ref{alg:insertion} details the Constrained Insertion Parameter Search algorithm. We first set a search budget: $k$ search iterations. For each iteration, we randomly sample each parameter (position, size, and pose) from their corresponding search spaces and calculate the inserted object's bounding box based on the sampled parameters. We then check for collisions with existing objects and quantitatively evaluate the degree of collisions.
173
+ A direct approach for collision checking is to convert the inserted object into a point cloud and then calculate the overlap with existing objects' point clouds. However, this method is time-consuming due to the large number of points involved. We simplify the problem by converting the original 3D collision into a 2D collision to speed up the collision check. Since the inserted objects are on the floor, if two objects collide, their 3D bounding box projections on the top view would also often collide (but not always, e.g., when an object may be placed under a table; we here ignore these candidate placements). In other words, we disregard the absolute value of the 3D volume and use the 2D collision projection as a relative collision score.
174
+ Utilizing an efficient collision check allows us to set a relatively large search iteration number, such as $k=1000$, while still maintaining a limited search time (less than 0.5 seconds). We also consider a maximum resize factor $r_{\max}$ and shrink the inserted object by a factor sampled from $\mathcal{U}[1, r_{\max}]$, to handle inserting a large object into a scene with a small empty floor. During the search, we terminate the process if we find an insertion with a collision score of 0; otherwise, we continue to track the best insertion with the lowest collision score and return it after completing $k$ search iterations.
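
The 2D collision score can be computed from rotated top-view rectangles, for example with a generic polygon library such as shapely (an illustrative choice, not necessarily the one used in our implementation):
\begin{verbatim}
import numpy as np
from shapely.geometry import Polygon

def footprint(cx, cy, w, l, yaw):
    """Top-view rectangle of a 3D box centered at (cx, cy) with width w,
    length l, rotated by yaw (radians) around the Z-axis."""
    corners = np.array([[ w / 2,  l / 2], [ w / 2, -l / 2],
                        [-w / 2, -l / 2], [-w / 2,  l / 2]])
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    return Polygon(corners @ rot.T + np.array([cx, cy]))

def collision_score(candidate, existing):
    """candidate: footprint Polygon of the inserted box;
    existing: list of footprint Polygons of objects already in the scene.
    Returns the total overlapping area (0 means collision-free)."""
    return sum(candidate.intersection(box).area for box in existing)
\end{verbatim}
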
175
+
176
+
177
+ \begin{algorithm}[tp]
178
+ \small
179
+ \caption{Constrained Insertion Parameter Search}
180
+ \label{alg:insertion}
181
+ \SetKwInput{KwData}{Input}
182
+ \SetKwInput{KwResult}{Output}
183
+ \KwData{An RGB-D image of the scene, a reconstructed floor, and a 3D object belonging to the class of interest $j$}
184
+ \KwResult{Position ($\hat{p}$: 3D bounding box bottom center), size ($\hat{s}$: 3D bounding box (bbox) height), and pose ($\hat{o}$: orientation along Z-axis)}
185
+ \SetAlgoLined
186
+
187
+
188
+ \nl
189
+ Compute position search constraints: floor center $c \gets (c_x, c_y, c_z)$, standard deviations $\sigma_x$ and $\sigma_y$
190
+
191
+ \nl
192
+ Initialize search parameters: $k \gets 1000$, degree of collision $\hat{l} \gets \infty$
193
+
194
+ \For{\( i \in \{1, 2, \ldots, k\} \)}{
195
+ \nl
196
+ Sample position: $p_x \sim \mathcal{U}[c_x - \sigma_x,c_x + \sigma_x]$ and $p_y \sim \mathcal{U}[c_y - \sigma_y,c_y + \sigma_y]$, $p \gets (p_x, p_y, c_z) $
197
+
198
+ \nl
199
+ Sample size: $s \sim \mathcal{N} (m_h, \sigma_h)$, resize factor $r \sim \mathcal{U} [1, r_{\max}]$, $s \gets s / r$,\\
200
+ \ \ \ \ \ where $m_h$ and $\sigma_h$ are mean and standard deviation of object height in class $j$ in the raw dataset
201
+
202
+ \nl
203
+ Sample pose: $o \sim \mathcal{U}[-\pi,\pi]$
204
+
205
+ \nl
206
+ Calculate the 3D bbox $x_\text{3D}$ based on the sampled insertion parameters ($p$, $s$, and $o$)
207
+
208
+ \nl
209
+ Project 3D bbox to 2D bbox $x_\text{2D}$ in top view
210
+
211
+ \nl
212
+ Calculate collision score $l= F (x_\text{2D})$ with existing objects in the scene
213
+
214
+ \If{$l == 0$}{
215
+ Return $p$, $s$, $o$
216
+ }
217
+ \If{$l < \hat{l}$}{ $\hat{p} \gets p $, $\hat{s} \gets s $, $\hat{o} \gets o $
218
+
219
+ $\hat{l} \gets l$}
220
+ }
221
+ \nl
222
+ Return $\hat{p}$, $\hat{s}$, $\hat{o}$
223
+ \end{algorithm}
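
For completeness, a compact Python sketch of Algorithm~\ref{alg:insertion} is given below. It reuses the \texttt{footprint} and \texttt{collision\_score} helpers from the previous sketch, and all variable names, defaults, and the uniform-rescaling assumption are illustrative rather than taken from a released implementation.
\begin{verbatim}
import numpy as np

def constrained_insertion_search(floor_center, floor_std, obj_dims,
                                 class_h_mean, class_h_std,
                                 existing_footprints,
                                 r_max=2.0, k=1000, seed=0):
    """Search position p, size s (box height), and pose o for an inserted object.
    floor_center: (cx, cy, cz); floor_std: (sx, sy);
    obj_dims: native (width, length, height) of the object's 3D box."""
    rng = np.random.default_rng(seed)
    cx, cy, cz = floor_center
    sx, sy = floor_std
    w0, l0, h0 = obj_dims
    best, best_score = None, np.inf
    for _ in range(k):
        # Sample each parameter from its constrained search space.
        px = rng.uniform(cx - sx, cx + sx)
        py = rng.uniform(cy - sy, cy + sy)
        s = rng.normal(class_h_mean, class_h_std) / rng.uniform(1.0, r_max)
        o = rng.uniform(-np.pi, np.pi)
        # Uniformly rescale the box so its height equals the sampled s.
        scale = s / h0
        cand = footprint(px, py, w0 * scale, l0 * scale, o)  # previous sketch
        score = collision_score(cand, existing_footprints)   # previous sketch
        if score == 0:
            return (px, py, cz), s, o          # collision-free: stop early
        if score < best_score:
            best, best_score = ((px, py, cz), s, o), score
    return best                                # lowest-collision fallback
\end{verbatim}
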
224
+
225
+
226
+
227
+ \subsection{\textit{What} Illumination Is on the Object}
228
+ \label{sec:3.2}
229
+
230
+ \subsubsection{Spatially-Varying Illumination Estimation and Retrieval}
231
+ To answer the question of what kind of illumination should be cast on the object, we first need to estimate the spatially-varying illumination of the scene. This process involves encapsulating intricate global interactions at each spatial location. To achieve this, we utilize the deep inverse rendering framework proposed by \cite{li2020inverse}. Initially, we estimate intermediate geometric features such as albedo, normal, depth, and roughness. Subsequently, a LightNet structure, consisting of an encoder-decoder setup, ingests the raw image and the predicted intermediate features. This, in turn, enables the estimation of spatially-varying lighting across the scene.
232
+
233
+ As depicted in Fig.~\ref{fig:method}(c), the estimated spatially-varying illumination is represented as environment maps. Specifically, each 4x4 pixel region in the raw image is associated with an environment map, which captures the appearance of the surrounding environment and is used for reflection, refraction, or global illumination. These maps are spherical (equirectangular), representing the environment on a single 2D texture. The X-axis corresponds to longitude, and the Y-axis corresponds to latitude. Each point on the texture corresponds to a specific latitude and longitude on a sphere.
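
To make the equirectangular parameterization concrete, the following sketch maps a pixel of an $H \times W$ environment map to a unit direction on the sphere; the exact axis conventions of the inverse rendering network may differ, so this is only meant to illustrate the representation.
\begin{verbatim}
import numpy as np

def envmap_pixel_to_direction(row, col, height, width):
    """Map an equirectangular environment-map pixel to a unit direction.
    Columns span longitude in [-pi, pi); rows span latitude from +pi/2
    (top of the map, pointing up) to -pi/2. Conventions are illustrative."""
    lon = (col + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (row + 0.5) / height * np.pi
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    return np.array([x, y, z])
\end{verbatim}
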
234
+
235
+ To obtain the environment map associated with the position of the inserted object, we register and retrieve the corresponding environment map based on the estimated position after performing the constrained insertion parameter search.
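
A sketch of this retrieval step is shown below: the insertion position is projected into the image (assuming a pinhole camera model), and the grid of per-region environment maps is indexed at the resulting pixel. The data layout is an assumption made for illustration.
\begin{verbatim}
import numpy as np

def retrieve_envmap(insert_pos_cam, K, envmap_grid, region=4):
    """insert_pos_cam: (3,) insertion point in camera coordinates (Z forward).
    K: 3x3 pinhole intrinsics. envmap_grid: array of shape
    (H // region, W // region, h_env, w_env, 3) holding one environment map
    per image region. Returns the environment map at the projected pixel."""
    x, y, z = insert_pos_cam
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    gh, gw = envmap_grid.shape[:2]
    # Clamp to the valid grid in case the point projects near the image border.
    gi = int(np.clip(v // region, 0, gh - 1))
    gj = int(np.clip(u // region, 0, gw - 1))
    return envmap_grid[gi, gj]
\end{verbatim}
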
236
+
237
+
238
+ \subsubsection{Environment Map Refinement}
239
+ \noindent{\bf Coordinate transformation.} The environment map, estimated for the inserted object, is based on the local coordinates of the insertion position. In particular, it establishes a coordinate system where the surface normal is designated as the Z-axis. In order to apply this map for relighting the inserted object using a rendering method (such as Blender), it becomes necessary to transform the environment map to align with Blender's coordinate system.
240
+
241
+ \noindent{\bf Latitude completion.} The estimated environment map only contains latitudes in the range (0, $\pi/2$) because the inverse rendering method cannot estimate the illumination beneath the surface. As shown in Fig.~\ref{fig:method}(d), we complete the entire environment map by filling in artificial values in the second half.
242
+
243
+ \noindent{\bf Intensity refinement.} The estimated environment map is in Low Dynamic Range (LDR) format, lacking High Dynamic Range (HDR) details and high contrast. If we use the predicted values directly, the rendered shadows appear relatively fuzzy. We refine the values by adjusting the scale in log space to estimate the HDR value: $I_\text{HDR} = I_\text{LDR}^{\gamma}$, where $\gamma$ is a hyperparameter.
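
The latitude completion and intensity refinement steps can be written compactly as follows; the constant-band fill for the lower hemisphere and the value of $\gamma$ are illustrative choices, not the exact ones used in our experiments.
\begin{verbatim}
import numpy as np

def refine_envmap(env_ldr, gamma=2.4):
    """env_ldr: (H, W, 3) map covering latitudes (0, pi/2) only, values in [0, 1].
    Returns a full-sphere HDR-like map of shape (2 * H, W, 3)."""
    h, w, _ = env_ldr.shape
    # Latitude completion: fill the unobserved lower hemisphere by repeating
    # the lowest estimated latitude band (an artificial but smooth continuation).
    bottom = np.repeat(env_ldr[-1:, :, :], h, axis=0)
    full = np.concatenate([env_ldr, bottom], axis=0)
    # Intensity refinement: per-pixel power curve, I_HDR = I_LDR ** gamma.
    return np.clip(full, 0.0, 1.0) ** gamma
\end{verbatim}
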
244
+
245
+ Finally, we input the HDR environment map after transformation and refinement, along with the position, size, and pose, into an insertion renderer (e.g., Blender). This allows us to obtain the inserted image with 3D bounding boxes serving as ground truth.
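
For reference, a minimal Blender-Python sketch of this rendering step is given below. It assumes the refined HDR map is saved to disk, the asset is a glTF file, and the Cycles renderer is used; compositing the rendered object and its shadows back into the original photograph is omitted. This is an illustrative script, not our exact rendering pipeline.
\begin{verbatim}
import bpy

def render_insertion(asset_path, hdr_path, position, z_rotation, scale, out_path):
    """Load an object, light it with an HDR environment map, and render."""
    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    # Use the refined HDR environment map as world lighting.
    world = scene.world
    world.use_nodes = True
    env = world.node_tree.nodes.new("ShaderNodeTexEnvironment")
    env.image = bpy.data.images.load(hdr_path)
    background = world.node_tree.nodes["Background"]
    world.node_tree.links.new(env.outputs["Color"], background.inputs["Color"])
    # Import the external asset and apply the searched insertion parameters.
    bpy.ops.import_scene.gltf(filepath=asset_path)
    obj = bpy.context.selected_objects[0]
    obj.location = position                  # (p_x, p_y, c_z) from Algorithm 1
    obj.rotation_euler = (0.0, 0.0, z_rotation)
    obj.scale = (scale, scale, scale)
    # Render the object with its shading and shadows.
    scene.render.filepath = out_path
    bpy.ops.render.render(write_still=True)
\end{verbatim}
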
246
+
247
+
248
+ \subsection{Dataset Augmentation with Insertion and Downstream Model Training}
249
+ \label{sec:3.3}
250
+
251
+ Given an indoor scene dataset and a set of interest classes $\mathcal{C}$ for potential insertion, we can identify an external 3D object set $\mathcal{E}$ whose objects fall within these classes of interest.
252
+ Before any insertion, we calculate the statistical parameters for each class of interest that we aim to augment. For every class $j \in \mathcal{C}$, we assume the size parameter (for instance, the height) fits a Gaussian distribution. We then calculate the mean and standard deviation of this size parameter to guide the insertion of external objects.
253
+ Here are the detailed steps for insertion:
254
+ For each scene within the indoor scene dataset, we randomly select a category $j$ from the class of interest set $\mathcal{C}$. Next, we randomly choose an instance from the external 3D objects set $\mathcal{E}$ that belongs to the selected class $j$. We then utilize our physically plausible insertion method (Algorithm \ref{alg:insertion}) to integrate this external 3D object into the scene.
255
+ We could train any downstream monocular 3D object detection model with the augmented dataset because we automatically obtain the 3D annotations of the inserted objects.
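
The overall offline augmentation loop can be summarized as follows; the helper functions are hypothetical placeholders standing in for the modules described in Sections~\ref{sec:3.1} and~\ref{sec:3.2}, and the data layout is assumed for illustration only.
\begin{verbatim}
import random

def augment_dataset(scenes, classes_of_interest, external_assets,
                    class_height_stats):
    """scenes: list of scene records; external_assets: dict class -> assets;
    class_height_stats: dict class -> (mean_height, std_height).
    Returns a list of (augmented image, inserted-object annotation) pairs."""
    augmented = []
    for scene in scenes:
        cls = random.choice(classes_of_interest)
        asset = random.choice(external_assets[cls])
        m_h, s_h = class_height_stats[cls]
        # Placeholder calls for the modules of Sections 3.1 and 3.2.
        floor = reconstruct_floor_plane(scene)
        p, s, o = search_insertion_parameters(scene, floor, asset, m_h, s_h)
        envmap = retrieve_and_refine_envmap(scene, p)
        image, bbox_3d = render_inserted_object(scene, asset, p, s, o, envmap)
        augmented.append((image, {"class": cls, "bbox_3d": bbox_3d}))
    return augmented
\end{verbatim}
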
256
+ \section{Experiments}
257
+
258
+ This section presents experiments to assess the effectiveness of our proposed physically-plausible 3D object insertion method and evaluate how different insertion parameters affect the final performance of monocular 3D object detection.
259
+
260
+
261
+
262
+ \subsection{Dataset and Model Setting}
263
+ \label{sec:4.1}
264
+
265
+ \noindent{\bf Indoor scene dataset.} We utilize the SUN RGB-D dataset \citep{song2015sun} as our primary resource for indoor scenes. It is one of the most challenging benchmarks in indoor scene understanding. SUN RGB-D comprises 10,335 RGB-D images captured using four distinct sensors. The dataset is divided into 5,285 training scenes and 5,050 test scenes. Furthermore, it includes 146,617 2D polygons and 58,657 3D bounding boxes, providing a comprehensive dataset for our research.
266
+
267
+ We also use the ScanNet dataset \citep{dai2017scannet}. ScanNet v2 is a large-scale RGB-D video dataset, which contains 1,201 videos/scenes in the training set and 312 scenes in the validation set. Adapting it for monocular 3D object detection, we utilize one RGB-D image per video, amounting to 1,201 RGB-D images for training and 312 for validation. We compute the ground-truth 3D bounding box labels for each selected view from the provided scene-level labels, as some objects in the scene may not be visible from our monocular viewpoint.
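
As an illustration of this filtering, the sketch below keeps a scene-level box for a given view only if its center lies in front of the camera and projects inside the image; the actual visibility criterion may be stricter in practice (e.g., requiring several visible corners).
\begin{verbatim}
import numpy as np

def boxes_visible_in_view(box_centers_world, world_to_cam, K, img_hw):
    """box_centers_world: (N, 3) scene-level 3D box centers.
    world_to_cam: 4x4 extrinsic matrix; K: 3x3 intrinsics; img_hw: (H, W).
    Returns indices of boxes whose centers project inside this view."""
    H, W = img_hw
    n = box_centers_world.shape[0]
    homo = np.hstack([box_centers_world, np.ones((n, 1))])
    cam = (world_to_cam @ homo.T).T[:, :3]
    keep = []
    for i, (x, y, z) in enumerate(cam):
        if z <= 0:                      # behind the camera
            continue
        u = K[0, 0] * x / z + K[0, 2]
        v = K[1, 1] * y / z + K[1, 2]
        if 0 <= u < W and 0 <= v < H:
            keep.append(i)
    return keep
\end{verbatim}
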
268
+
269
+
270
+
271
+ \noindent{\bf External 3D object assets.}
272
+ The quality of 3D objects is crucial for effective insertion. Hence, we use Objaverse \citep{deitke2022objaverse}, a robust dataset with over 800,000 annotated 3D objects. Using word parsing, we extract objects that align with the classes of interest for monocular 3D object detection within SUN RGB-D.
273
+ Table~\ref{tab:1} shows the selected Objaverse data for each SUN RGB-D class.
274
+
275
+
276
+ \begin{table}[t]
277
+ \centering
278
+ \small
279
+ \caption{Statistics of external 3D objects from Objaverse~\citep{deitke2022objaverse}.}
280
+ \label{tab:1}
281
+ \resizebox{\textwidth}{!}{
282
+ \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
283
+ \toprule
284
+ Category & Bed & Table & Sofa & Chair & Desk & Dresser & Nightstand & Bookshelf & Toilet & Bathtub\\
285
+ \midrule
286
+ Number & 190 & 854 & 361 & 934 & 317 & 52 & 13 & 99 & 142 & 24\\
287
+ \bottomrule
288
+ \end{tabular}
289
+ }
290
+ \end{table}
291
+
292
+
293
+
294
+ \noindent{\bf Monocular 3D object detection model.} We focus on the challenging task of monocular 3D object detection that relies solely on a single RGB image as input. We employ ImVoxelNet, which achieves state-of-the-art performance on the raw SUN RGB-D dataset using only a single RGB image as input. Other existing methods either resort to using additional modalities and multiple datasets for extra supervision or exhibit underwhelming performance. For the purpose of monocular 3D object detection, we train the same ImVoxelNet model on the original SUN RGB-D dataset and its various versions, each augmented via different insertion methods. All mAP results are mAP$@$0.25.
295
+
296
+ \begin{table}[t]
297
+ \centering
298
+ \small
299
+ \caption{ImVoxelNet 3D monocular object detection performance on the SUN RGB-D dataset with different object insertion methods. When inserting randomly, the accuracy of the downstream object detector drops, i.e., the detector suffers from random insertions (which may have collisions, occlusions, incorrect lighting, etc.). In contrast, applying only physically plausible position, size, and pose significantly improves performance (41.80\%). Further, when plausible lighting and shadows are added, our 3D Copy-Paste improves the accuracy of the downstream detector to a new state-of-the-art accuracy (43.79\%). We use mAP (\%) with a 0.25 IoU threshold.
300
+ }
301
+ \label{tab:2}
302
+ \resizebox{\textwidth}{!}{
303
+ \begin{tabular}{l|c|c|c}
304
+ \toprule
305
+ Setting & Insertion Position, Pose, Size & Insertion Illumination & mAP$@$0.25\\
306
+ \midrule
307
+ ImVoxelNet & N/A & N/A & 40.96\\
308
+ ImVoxelNet + random insert & Random & Camera point light & 37.02 \\
309
+ ImVoxelNet + 3D Copy-Paste (w/o light) & Plausible position, size, pose & Camera point light & 41.80 \\
310
+ ImVoxelNet + 3D Copy-Paste & Plausible position, size, pose & Plausible dynamic light & \textbf{43.79} \\
311
+ \bottomrule
312
+ \end{tabular}
313
+ }
314
+ \end{table}
315
+
316
+ \subsection{Physically plausible position, pose, size, and illumination lead to better monocular detection performance}
317
+ \label{sec:4.2}
318
+
319
+ Our 3D Copy-Paste focuses on solving two challenges: (1) Where and how to put the object: we estimate the object's position, orientation, and size for insertion while ensuring no collisions.
320
+ (2) What illumination is on the object: we estimate the spatially-varying illumination and apply realistic lighting and shadows to the object rendering. The following experiments evaluate the model performance.
321
+
322
+
323
+
324
+
325
+ \begin{table}[t]
326
+ \centering
327
+ \small
328
+ \caption{Per-class average precision (AP) of ImVoxelNet 3D monocular object detection on the SUN RGB-D dataset.}
329
+ \label{tab:3}
330
+ \resizebox{\textwidth}{!}{
331
+ \begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c}
332
+ \toprule
333
+ Setting & mAP$@$0.25& bed& chair& sofa& table& bkshf& desk& bathtub& toilet & dresser &nightstand \\
334
+ \midrule
335
+ ImVoxelNet &40.96&72.0&55.6&53.0&41.1&\textbf{7.6}&21.5&29.6&76.7&19.0&33.4\\
336
+ ImVoxelNet + 3D Copy-Paste & \textbf{43.79} &\textbf{72.6}&\textbf{57.1}&\textbf{55.1}&\textbf{41.8}&7.1&\textbf{24.1}&\textbf{40.2}&\textbf{80.7}&\textbf{22.3}&\textbf{36.9}\\
337
+ \bottomrule
338
+ \end{tabular}
339
+ }
340
+ \end{table}
341
+
342
+
343
+
344
+
345
+
346
+
347
+
348
+
349
+
350
+
351
+
352
+
353
+
354
+
355
+ Table~\ref{tab:2} presents the results of monocular 3D object detection on the SUN RGB-D dataset, utilizing various object insertion augmentation techniques. The first row is the performance of ImVoxelNet trained on the raw SUN RGB-D dataset without any insertion.
356
+ The ``ImVoxelNet + random insert'' row displays results achieved through a naive 3D object insertion without applying physically plausible constraints (random location and Camera point light). This approach led to a drop in accuracy from 40.96\% to 37.02\%, likely due to the lack of physical plausibility causing severe collisions and occlusions in the final image.
357
+ The ``ImVoxelNet + 3D Copy-Paste (w/o light)'' row showcases the performance after implementing our method for only estimating physically plausible insertion position, pose, and size. Despite using a rudimentary camera point light, this approach outperforms ``ImVoxelNet'' without any insertion, and also outperforms the naive ``ImVoxelNet + random insert'' (+4.78 \% improvement). This result shows that applying plausible geometry is essential for downstream tasks and makes 3D data augmentation useful over a naive, random augmentation.
358
+ After additionally applying physically plausible dynamic light, our proposed ``ImVoxelNet + 3D Copy-Paste'' further improves performance and achieves a new state of the art, surpassing ImVoxelNet without insertion (+2.83\%) on the monocular 3D object detection task.
359
+ This performance improvement suggests that our 3D Copy-Paste insertion can serve as an efficient data augmentation method to positively benefit downstream 3D object detection tasks.
360
+ Table~\ref{tab:3} shows detailed SUN RGB-D monocular 3D object detection results with ImVoxelNet on each individual object category.
361
+
362
+
363
+ \begin{table}[t]
364
+ \centering
365
+ \small
366
+ \caption{ImVoxelNet 3D monocular object detection performance on the ScanNet dataset with different object insertion methods.}
367
+ \label{tab:4}
368
+ \resizebox{\textwidth}{!}{
369
+ \begin{tabular}{l|c|c|c|c|c|c|c|c|c}
370
+ \toprule
371
+ Setting & mAP$@$0.25& bed& chair& sofa& table& bkshf& desk& bathtub& toilet\\
372
+ \midrule
373
+ ImVoxelNet &14.1& 25.7& 7.9& \textbf{13.2}& 7.8& 4.2& 20.5& 22.1& \textbf{11.5}\\
374
+ ImVoxelNet + 3D Copy-Paste & \textbf{16.9} &\textbf{27.7} &\textbf{12.7} &10.0 &\textbf{10.8} &\textbf{9.2} &\textbf{26.2} &\textbf{29.2} &9.0 \\
375
+ \bottomrule
376
+ \end{tabular}
377
+ }
378
+ \end{table}
379
+
380
+ Table~\ref{tab:4} presents the results of monocular 3D object detection on the ScanNet dataset. We utilized one RGB-D image per video: 1,201 for training and 312 for validation. As described in Section~\ref{sec:4.1}, we compute the ground-truth 3D bounding box labels for each selected view from the provided scene-level labels.
381
+ For the baseline, we train an ImVoxelNet monocular 3D object detection model on the training set and test on the validation set. For our method, 8 of the 18 ScanNet categories (sofa, bookshelf, chair, table, bed, desk, toilet, bathtub) overlap with our collected Objaverse data. We use our 3D Copy-Paste to augment the training set and train an ImVoxelNet model. All training parameters are the same as for training on the SUN RGB-D dataset. We report the average accuracy over the 8 overlapping classes (mAP$@$0.25) in Table~\ref{tab:4}. Our 3D Copy-Paste improves ImVoxelNet by 2.8\% mAP.
382
+
383
+
384
+
385
+
386
+
387
+
388
+
389
+
390
+
391
+
392
+
393
+
394
+
395
+
396
+
397
+
398
+
399
+ \subsection{Ablation study on the influence of insertion illumination and position on monocular 3D object detection}
400
+ \label{sec:4.3}
401
+
402
+ We first explore the influence of illumination of inserted objects on downstream monocular 3D object detection tasks. Table~\ref{tab:5} shows the ImVoxelNet performance on SUN RGB-D with different illumination settings during 3D Copy-Paste. To eliminate the influence of other insertion parameters, we fix the estimated position, pose, and size for each scene among all experiments in Table~\ref{tab:5}.
403
+
404
+
405
+
406
+
407
+
408
+
409
+
410
+
411
+
412
+
413
+
414
+ \begin{table}[t]
415
+ \centering
416
+ \small
417
+ \caption{ImVoxelNet 3D monocular object detection performance on the SUN RGB-D dataset with different illumination during insertion rendering. All experiments use the same ImVoxelNet model; all insertions use our proposed physically plausible position, size, and pose.}
418
+ \label{tab:5}
419
+ \resizebox{\textwidth}{!}{
420
+ \begin{tabular}{r|c|c|c|c|c}
421
+ \toprule
422
+ Setting & Light source type & Intensity & Direction & With shadow? & mAP$@$0.25\\
423
+ \midrule
424
+ Point Light 1 & Point & 100W & Camera position & Yes & 41.80\\
425
+ Point Light 2 & Point & 100W & Side (left) & Yes & 42.38\\
426
+
427
+
428
+ Area Light 1 & Area & 100W & Camera position & Yes & 42.67\\
429
+ Area Light 2 & Area & 100W & Side (left) & Yes & 42.02 \\
430
+
431
+
432
+ Spot Light 1 & Spot & 100W & Camera position & Yes & 40.92\\
433
+ Spot Light 2 & Spot & 100W & Side (left) & Yes & 42.10\\
434
+
435
+
436
+ Sun Light 1 & Sun & 5 & Camera position & Yes & 42.11\\
437
+ Sun Light 2 & Sun & 5 & Side (left) & Yes & 41.21\\
438
+
439
+ Ours (Dynamic Light) & Estimated Plausible light & Dynamic & Dynamic & No & 41.83 \\
440
+ Ours (Dynamic Light) & Estimated Plausible light & Dynamic & Dynamic & Yes & \textbf{43.79}\\
441
+
442
+ \bottomrule
443
+ \end{tabular}
444
+ }
445
+ \end{table}
446
+
447
+ \begin{figure}[t]
448
+ \centering
449
+ \includegraphics[width=\textwidth]{Fig/Fig-3.pdf}
450
+ \caption{Visualization of different illumination on inserted objects.}
451
+ \label{fig:vis-diff-illu}
452
+ \end{figure}
453
+
454
+
455
+ Fig.~\ref{fig:vis-diff-illu} provides a visualization of the effects of various light sources and light parameters during the insertion rendering process. The corresponding monocular 3D object detection results are presented in Table~\ref{tab:5}.
456
+ These illustrate how lighting not only impacts the visual perception of the inserted object from a human observer's standpoint but also considerably affects the performance of downstream detection tasks. Thus, an accurate and physically plausible lighting estimation is crucial for both understanding the scene and for the practical application of downstream detection tasks.
457
+
458
+
459
+
460
+
461
+
462
+
463
+
464
+ \begin{table}[t]
465
+ \centering
466
+ \small
467
+ \caption{Ablation study of global context influence on ImVoxelNet monocular 3D object detection performance on SUN RGB-D.}
468
+ \label{tab:6}
469
+ \begin{tabular}{l|c|c|c}
470
+ \toprule
471
+ Method & Follow global context? & Select class based on empty size? & mAP$@$0.25\\
472
+ \midrule
473
+ ImVoxelNet + 3D Copy-Paste & Yes & No & 43.75 \\
474
+ ImVoxelNet + 3D Copy-Paste & Yes & Yes & 43.74 \\
475
+ ImVoxelNet + 3D Copy-Paste & No & Yes & 42.50 \\
476
+ ImVoxelNet + 3D Copy-Paste & No & No & \textbf{43.79}\\
477
+ \bottomrule
478
+ \end{tabular}
479
+ \end{table}
480
+
481
+ \begin{figure}[h!]
482
+ \centering
483
+ \includegraphics[width=\textwidth]{Fig/Fig-4.pdf}
484
+ \caption{Qualitative results on the SUN RGB-D dataset. }\label{fig:vis-qual}
485
+ \end{figure}
486
+
487
+ Table~\ref{tab:2} shows the importance of physical position, pose, and size (local context) for monocular 3D object detection. We also explored the importance of the global context for detection performance. The global context here means the semantic relationship of the inserted object to the whole scene; for instance, inserting a toilet into a living room may not satisfy the global context. We propose a plausible global context insertion method where the inserted object class considers the global scene information. We could also select the inserted class based on the floor size, inserting larger objects (e.g., bed, bookshelf) only on large floors.
488
+ Table~\ref{tab:6} shows the results for different settings. We find that considering the global context during insertion performs on par with random category selection, suggesting that the downstream detection model may not be sensitive to it.
489
+
490
+
491
+
492
+
493
+
494
+
495
+
496
+
497
+
498
+
499
+
500
+ \subsection{Qualitative Analysis}
501
+ \label{sec:4.4}
502
+
503
+ Fig.~\ref{fig:vis-qual} shows the qualitative results of monocular 3D object detection on the SUN RGB-D dataset. Our method demonstrates enhanced capabilities in detecting objects with significant occlusion, provides improved pose estimation, and effectively suppresses false positives.
504
+
505
+
506
+
507
+
508
+
509
+ \section{Conclusion and Discussion}
510
+
511
+
512
+ Our work addresses the challenge of scarce large-scale annotated datasets for monocular 3D object detection by proposing a physically plausible indoor 3D object insertion approach. This technique allows us to effectively augment existing indoor scene datasets, such as SUN RGB-D, with large-scale annotated 3D objects that have both plausible physical location and illumination. The resulting augmented dataset enables training a monocular 3D object detection model that achieves new state-of-the-art performance.
513
+ Our approach carefully considers physically feasible locations, sizes, and poses for inserted objects, avoiding collisions with the existing room layout, and estimates spatially-varying illumination to seamlessly integrate the objects into the original scene. We also systematically evaluate the impact of the physical position and illumination of the inserted objects on the performance of the final monocular 3D object detection model.
514
+ This paper is the first to demonstrate that physically plausible 3D object insertion can serve as an effective generative data augmentation technique, leading to state-of-the-art performance in discriminative downstream tasks like monocular 3D object detection. Our findings highlight the potential of 3D data augmentation in improving the performance of 3D perception tasks, opening up new avenues for research and practical applications.
515
+ \paragraph{Acknowledgments.}
516
+ This work is in part supported by Bosch, Ford, ONR MURI N00014-22-1-2740, NSF CCRI \#2120095, Amazon ML Ph.D. Fellowship, National Science Foundation (award 2318101), C-BRIC (one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA) and the Army Research Office (W911NF2020053). The authors affirm that the views expressed herein are solely their own, and do not represent the views of the United States government or any agency thereof.
517
+
518
+
519
+
520
+
521
+ \bibliography{camera_neurips-2023.bbl}
522
+ \bibliographystyle{abbrvnat}
523
+
524
+ \newpage
525
+
526
+ \appendix
527
+
528
+
529
+ \section{Experiments on more Monocular 3D Object Detection methods}
530
+
531
+ In our main paper, we utilize ImVoxelNet \citep{rukhovich2022imvoxelnet} for monocular 3D object detection. To show the robustness of our 3D Copy-Paste across different downstream detection methods,
532
+ we conducted additional experiments with another monocular 3D object detection model: Implicit3DUnderstanding (Im3D) \citep{zhang2021holistic}.
533
+ The Im3D model predicts object 3D shapes, bounding boxes, and scene layout within a unified pipeline. Training this model necessitates not only the SUN RGB-D dataset but also the Pix3D dataset \citep{sun2018pix3d}, which supplies 3D mesh supervision. The Im3D training process consists of two stages. In stage one, individual modules - the Layout Estimation Network, Object Detection Network, Local Implicit Embedding Network, and Scene Graph Convolutional Network - are pretrained separately. In stage two, all these modules undergo joint training. We incorporate our 3D Copy-Paste method only during this second stage of joint training, and it's exclusively applied to the 10 SUN RGB-D categories we used in the main paper.
534
+ We implemented our experiment following the official Im3D guidelines\footnote{https://github.com/chengzhag/Implicit3DUnderstanding}.
535
+
536
+ Table~\ref{tab:Im3D} displays the Im3D results for monocular 3D object detection on the SUN RGB-D dataset, adhering to the same ten categories outlined in the main paper. Im3D without insertion attained a mean average precision (mAP) of 42.13\%. After applying our 3D Copy-Paste method, which encompasses physically plausible insertion position, pose, size, and light, the monocular 3D object detection mAP increased to 43.34\%. These results further substantiate the robustness and effectiveness of our proposed method.
537
+
538
+ \begin{table}[h]
539
+ \centering
540
+ \small
541
+ \caption{Im3D \citep{zhang2021holistic} 3D monocular object detection performance on the SUN RGB-D dataset (same 10 categories as the main paper).}
542
+ \label{tab:Im3D}
543
+ \resizebox{\textwidth}{!}{
544
+ \begin{tabular}{l|c|c|c}
545
+ \toprule
546
+ Setting & Insertion Position, Pose, Size & Insertion Illumination & mAP\\
547
+ \midrule
548
+ Im3D & N/A & N/A & 42.13\\
549
+ Im3D + 3D Copy-Paste & Plausible position, size, pose & Plausible dynamic light & \textbf{43.34} \\
550
+ \bottomrule
551
+ \end{tabular}
552
+ }
553
+ \end{table}
554
+
555
+ \section{More experiment details}
556
+ We run the same experiments multiple times with different random seeds. Table~\ref{tab:error} shows the results of Table~\ref{tab:2} in the main paper with error ranges.
557
+
558
+ \begin{table}[h]
559
+ \centering
560
+ \small
561
+ \caption{ImVoxelNet 3D monocular object detection performance on the SUN RGB-D dataset with different object insertion methods (with error range).}
562
+ \label{tab:error}
563
+ \resizebox{\textwidth}{!}{
564
+ \begin{tabular}{l|c|c|c}
565
+ \toprule
566
+ Setting & Insertion Position, Pose, Size & Insertion Illumination & mAP$@$0.25\\
567
+ \midrule
568
+ ImVoxelNet & N/A & N/A & 40.96 $\pm$ 0.4\\
569
+ ImVoxelNet + random insert & Random & Camera point light & 37.02$\pm$ 0.4 \\
570
+ ImVoxelNet + 3D Copy-Paste (w/o light) & Plausible position, size, pose & Camera point light & 41.80$\pm$ 0.3 \\
571
+ ImVoxelNet + 3D Copy-Paste & Plausible position, size, pose & Plausible dynamic light & \textbf{43.79} $\pm$ 0.4 \\
572
+ \bottomrule
573
+ \end{tabular}
574
+ }
575
+ \end{table}
576
+
577
+
578
+ We also report results with mAP$@$0.15 on the SUN RGB-D dataset (Table~\ref{tab:map0.15}); our method shows consistent improvements.
579
+
580
+
581
+
582
+
583
+
584
+
585
+
586
+
587
+
588
+
589
+ \begin{table}[h]
590
+ \centering
591
+ \small
592
+ \caption{ImVoxelNet 3D monocular object detection performance on the SUN RGB-D dataset with mAP$@$0.15.}
593
+ \label{tab:map0.15}
594
+ \resizebox{\textwidth}{!}{
595
+ \begin{tabular}{l|c|c|c}
596
+ \toprule
597
+ Setting & Insertion Position, Pose, Size & Insertion Illumination & mAP$@$0.15\\
598
+ \midrule
599
+ ImVoxelNet & N/A & N/A & 48.45 \\
600
+ ImVoxelNet + 3D Copy-Paste & Plausible position, size, pose & Plausible dynamic light & \textbf{51.16} \\
601
+ \bottomrule
602
+ \end{tabular}
603
+ }
604
+ \end{table}
605
+
606
+ \section{Discussion on Limitations and Broader Impact}
607
+
608
+ \noindent
609
+ \textbf{Limitations.}
610
+ Our method, while effective, does have certain limitations. A key constraint is its reliance on the availability of external 3D objects, particularly for uncommon categories where sufficient 3D assets may not be readily available. This limitation could potentially impact the performance of downstream tasks. Moreover, the quality of inserted objects can also affect the results. Possible strategies to address this limitation could include leveraging techniques like Neural Radiance Fields (NeRF) to construct higher-quality 3D assets for different categories.
611
+
612
+
613
+ \noindent
614
+ \textbf{Broader Impact.}
615
+ Our proposed 3D Copy-Paste method demonstrates that physically plausible 3D object insertion can serve as an effective generative data augmentation technique, leading to state-of-the-art performance in discriminative downstream tasks like monocular 3D object detection. The implications of this work are profound for both the computer graphics and computer vision communities. From a graphics perspective, our method demonstrates that more accurate 3D property estimation, reconstruction, and inverse rendering techniques can generate more plausible 3D assets and better scene understanding. These assets not only look visually compelling but can also effectively contribute to downstream computer vision tasks. From a computer vision perspective, it encourages us to utilize synthetic data more effectively to tackle challenges in downstream tasks in computer vision and robotics.
616
+
617
+
618
+
619
+
620
+
621
+
622
+
623
+
624
+
625
+
626
+
627
+
628
+
629
+ %
630
+
631
+ \end{document}