\section{Introduction} A number of new and upcoming applications require ultra-high data rates that are beyond the capabilities of mmWave-based 5G communication systems. To meet these requirements, higher frequencies such as the THz band (0.1-10 THz) are being investigated because of the considerable amounts of unused spectrum available in these bands \cite{Tataria_6G,5764977,huq2019terahertz,rappaport2019wireless}. Therefore, the THz band, especially the frequencies between 0.1-0.5 THz, has been explored by a number of studies, e.g., \cite{Kurner2014,6898846,khalid2019statistical,ju2021subterahertz}. The recent decision of the Federal Communications Commission (FCC), the US spectrum regulator, to provide experimental licenses in this band has fostered additional research interest, and this band is widely expected to be an important part of 6G wireless systems \cite{tataria20216g}. It is important to know the characteristics of a wireless channel before the design of a communication system that is to operate in it can proceed. Channel sounding measurements and their statistical analysis are an essential first step towards the understanding of a channel and consequently towards the design and deployment of a wireless system \cite{molisch2012wireless}. Since channel characteristics depend strongly on the operating frequency range as well as the environment and the scenarios in which a wireless system operates, channel sounding campaigns need to be performed in the key scenarios of interest. Existing channel measurements in the THz bands are mostly limited to short-distance indoor channels, see \cite{priebe2010measurement,6898846,6574880,khalid2019statistical,abbasi2020channel,xing2021millimeter}, usually as a result of measurement setup constraints; see also \cite{han2021terahertz} and references therein. However, recently there has been some progress on longer distances and outdoor scenarios as well. 
These include the first long-distance (100 m) double-directional channel measurements for the 140 GHz band, which were reported in 2019 \cite{abbasi2019double,abbasi2020double} by our group, as well as our recent works \cite{abbasi2021ultra,abbasi2021double, Abbasi2021THz} targeting device-to-device (D2D) scenarios, in which both Tx and Rx are at about 1.6 m height. Another recent series of papers \cite{xing2021propagation,ju2021subterahertz,9558848} also reported channel measurements, path loss and statistical modeling at 140 GHz over longer link distances in an urban scenario; in those measurements the Tx is placed 4 m above the ground (i.e., typical lamppost height). Our current paper provides an analysis for a scenario where the Tx is significantly higher, at 11.5 m, which is comparable to the height of a typical microcell base station. This paper presents the results of an extensive measurement campaign in this environment, with sufficient points to allow a meaningful statistical evaluation. To the best of our knowledge, such a detailed channel measurement campaign with a Tx elevated more than 10 m above the ground has not been reported before in the THz band. The results of this paper are based on ultra-wideband double-directional channel measurements over a 1 GHz bandwidth between 145-146 GHz\footnote{Some authors prefer to use the term ``THz'' to identify the frequency range $>300$ GHz while using ``high mmWave'', ``sub-THz'' or ``low-THz'' for frequencies between 100-300 GHz. Other authors use the term ``THz'' for both these cases. Since the latter is the most widely used terminology, we will employ it in this paper as well.}, conducted at 26 different transmitter (Tx) - receiver (Rx) location pairs. Thirteen of these represent line-of-sight (LoS) scenarios with direct Tx-Rx distances ranging from nearly 20 m to 83 m, while the other 13 are non-line-of-sight (NLoS) cases with direct Tx-Rx distances in approximately the same range. 
Based on the nearly 110,000 directional impulse responses we collected from these measurements, we model the path loss, shadowing, delay spread, angular spread and multipath component (MPC) power distribution for both LoS and NLoS cases. Our detailed analysis includes results both for the maximum-power-beam direction (max-dir) and the omni-directional characteristics, as well as the distance dependence of the key parameters and the relevant confidence intervals for the various model fits. The remainder of this paper is organized as follows. In Section II, we describe the channel sounding setup and the measurement locations. The key parameters of interest and their processing are described in Section III. The results of the measurements and modeling are presented in Section IV. We finally conclude the manuscript in Section V. \section{Measurement equipment and site} \subsection{Testbed description} \begin{figure}[t!] \centering \includegraphics[width=12cm]{setup.png} \caption{Channel sounding setup.} \label{fig:setup} \end{figure} \begin{table}[t!] 
\centering \caption{Setup parameters.} \label{table:parameters} \begin{tabular}{|l|l|l|} \hline \textbf{Parameter} & \textbf{Symbol} & \textbf{Value} \\ \hline\hline \textit{Frequency points per sweep} & $N$ & 1001 \\ \textit{Tx height} & $h_{Tx}$ & 11.5 m \\ \textit{Rx height} & $h_{Rx}$ & 1.7 m \\ \textit{Start frequency} & $ f_{start} $ & 145 GHz \\ \textit{Stop frequency} & $ f_{stop} $ & 146 GHz \\ \textit{Bandwidth} & $BW$ & 1 GHz \\ \textit{IF bandwidth} & $IF_{BW}$ & 10 kHz \\ \textit{THz IF} & $ f_{THz IF} $ & 279 MHz \\ \textit{Antenna 3 dB beamwidth} & $\theta_{3dB}$ & 13$^{\circ}$ \\ \textit{Tx Az rotation range} & $\phi_{Tx}$ & [-60$^{\circ}$,60$^{\circ}$] \\ \textit{Tx Az rotation resolution} & $\Delta \phi_{Tx}$ & 10$^{\circ}$ \\ \textit{Rx Az rotation range} & $\phi_{Rx}$ & [0$^{\circ}$,360$^{\circ}$] \\ \textit{Rx Az rotation resolution} & $\Delta \phi_{Rx}$ & 10$^{\circ}$ \\ \textit{Tx El rotation range} & $\tilde{\theta}_{Tx}$ & [-13$^{\circ}$,13$^{\circ}$] \\ \textit{Tx El rotation resolution} & $\Delta \tilde{\theta}_{Tx}$ & 13$^{\circ}$ \\ \textit{Rx El rotation range} & $\tilde{\theta}_{Rx}$ & [-13$^{\circ}$,13$^{\circ}$] \\ \textit{Rx El rotation resolution} & $\Delta \tilde{\theta}_{Rx}$ & 13$^{\circ}$ \\ \hline \end{tabular} \end{table} For this measurement campaign, a frequency-domain channel sounder was used (see Fig. \ref{fig:setup}), similar to \cite{Abbasi2021THz}. It is based on a Vector Network Analyzer (VNA), PNAX N5247A from Keysight, which has a frequency range from 10 MHz to 67 GHz. Frequency extenders, WR-5.1 VNAX manufactured by Virginia Diodes, were used to extend the VNA's frequency range to the 140-220 GHz band, which encompasses the band of interest to us. The extenders were used with the ``high sensitivity'' waveguide option to improve the received Signal to Noise Ratio (SNR). The antennas (along with the extenders) are mounted on a rotating positioning system. 
A key aspect of this setup is the use of an RF-over-fiber (RFoF) link, which was originally introduced in \cite{abbasi2020double}. The RFoF link allows us to measure over longer distances than the typical 5-10 m range of similar systems without such a link. For further details of the system please see \cite{Abbasi2021THz}. Table \ref{table:parameters} shows the configuration parameters for the sounder. The IF bandwidth of the VNA was selected as a compromise between dynamic range and measurement duration, such that the duration of a frequency sweep is shorter than that of the mechanical movement of the horn and therefore has only a minor impact on the total measurement time. Each sweep of the VNA contains 1001 frequency points over the 1 GHz bandwidth, allowing a maximum excess delay of 1 $\mu$s without suffering the effects of aliasing. In other words, the maximum measurable excess runlength for multipath components is 300 m, a reasonable distance considering the scenarios and the frequency band being sounded. Given that the measurements take a significant amount of time, they were conducted at night while ensuring that the scenario remains static/quasi-static. The measurement locations were selected to be typical of a ``microcellular'' scenario. The Tx for the current measurements is set at a height of 11.5 m above the ground, while the Rx is placed at a height of 1.7 m. These parameters have been selected following the 3GPP UMi Street Canyon model (3GPP TR 38.901 version 14.0.0 Release 14 suggests $h_{Tx}=10$ m and $1.5$ m $\leq h_{Rx} \leq 22.5$ m). Additionally, to extract the double-directional characteristics of the channel, the frequency sweeps of the VNA were repeated for sets of different orientations of the antennas. The positioners were oriented to ensure that the azimuth angle zero at both ends (Tx and Rx) corresponded to the LoS direction, irrespective of whether an unblocked optical LoS connection between Tx and Rx actually exists or not. 
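As a sanity check, the delay resolution, the alias-free excess delay, and the 300 m maximum runlength quoted above follow directly from the sweep parameters; a minimal sketch (variable names are illustrative):

```python
C = 299_792_458.0   # speed of light in m/s

n_points = 1001     # frequency points per VNA sweep (Table 1)
bandwidth = 1e9     # swept bandwidth in Hz (145-146 GHz)

delay_resolution = 1.0 / bandwidth              # spacing of IFFT delay bins: 1 ns
max_excess_delay = (n_points - 1) / bandwidth   # alias-free delay span of the IFFT
max_runlength = max_excess_delay * C            # corresponding path runlength in m

print(delay_resolution)       # 1e-09 s
print(max_excess_delay)       # 1e-06 s, i.e., 1 us
print(round(max_runlength))   # 300 (m), as quoted in the text
```

Any MPC arriving later than this alias-free span wraps around in the delay domain, which motivates the delay gating described in Section III.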
Due to the different heights of the link ends, multiple elevation scans are required to properly analyze the scenario; therefore, three elevation cuts were scanned at both the Tx and the Rx. The Tx azimuth scans a $120^\circ$ sector from $-60^\circ$ to $60^\circ$ with $10^\circ$ azimuthal resolution, while the Rx carries out a complete azimuth scan from $0^\circ$ to $360^\circ$, also in steps of $10^\circ$. In elevation, Tx and Rx are aligned so that when both antennas face each other ($\tilde{\theta}_{Tx}=\tilde{\theta}_{Rx}=0^\circ$), they are in the same elevation cut. In addition, both ends perform scans $13^\circ$ above and $13^\circ$ below this ``alignment'', giving a total of 9 elevation combinations per Tx-Rx location (3 elevation cuts at the Tx times 3 at the Rx). The measurements were performed on different days, due to the long measurement time per point. On each day, a calibration of the VNA, as well as an over-the-air (OTA) calibration with the Tx and Rx at a LoS location, was performed. Additional details of the setup are described in \cite{abbasi2020double,abbasi2021double,abbasi2021ultra} \footnote{It is important to mention that $\tilde{\theta} = 0^\circ$ is not equivalent to $\theta = 90^\circ$ in elevation, i.e., it is not the horizontal; $\tilde{\theta} = 0^\circ$ corresponds to a different absolute elevation at each measurement point.}.\\ Finally, the frequency-domain sounder provides high phase stability, which allows us to conduct Fourier analysis and High Resolution Parameter Extraction (HRPE). Although HRPE can provide more accurate results, the current paper only uses Fourier analysis; HRPE analysis will be discussed in future work. \subsection{Measurement locations} A very important step in the measurement campaign is the selection of suitable locations, so that we can realistically measure samples of LoS and NLoS scenarios. 
For this purpose we selected an area inside the University Park Campus of the University of Southern California (USC) in Los Angeles, California, USA, which is located in the center of the city and is characterized as an urban environment. Fig. \ref{fig:Micro_sce} shows the scenario and the Tx and Rx locations. As can be seen, the measurement campaign is divided into 6 routes, each corresponding to a unique Tx location, with LoS and/or NLoS points. For all 6 Tx locations, the positioner was placed on the edge of the third floor of the Downey Way Parking Structure (PSA) building. \begin{figure}[ht] \centering \includegraphics[width=1\columnwidth]{Microcellular.png} \caption{Microcellular campaign measurement scenario.} \vspace*{0mm} \label{fig:Micro_sce} \end{figure} Route One contains 6 LoS points aligned on the walkway of the Andrus Gerontology Center (GER) on the McClintock side of the building, covering a distance range from 33.5 to 81.7 m (see Fig. \ref{fig:LOS TX1-RX1}). Ronald Tutor Hall (RTH) and the Hughes Aircraft Electrical Engineering Center (EEB), together with the GER building, create a ``street canyon'' for the Route One points. It is important to note that the LoS was not obstructed or partially obstructed by foliage or other environmental objects. The three NLoS points were placed under the portico of the GER building (see Fig. \ref{fig:NLOS TX1-RX7}). Apart from the roof of the building, the pillars provide additional obstructions of the LoS. The second route is on the opposite side of PSA, in a parking lot surrounded by Ray Irani (RRI) and Michelson Hall (MCB). While the photo in Fig. \ref{fig:Micro_sce} shows cars, no cars were present during the measurement. Rx points 10, 11 and 13 were set on a straight line aligned with the Tx, and point 12 was set 30 m north of point 11. For Route Three, the Tx is moved 40 m along PSA, parallel to Downey Way. Here, MCB's side corner completely blocks the LoS components for points 14-16. 
The distances for this route are approximately in the range of 40 to 60 m. \begin{figure*}[t!] \centering \begin{subfigure}{0.31\textwidth} \centering \hspace{7mm} \includegraphics[width=1\columnwidth]{Tx1Rx1LOS.png} \caption{Tx1-Rx1 LoS; $d=81.7$ m.} \vspace*{0mm} \label{fig:LOS TX1-RX1} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \hspace{7mm} \includegraphics[width=1\columnwidth]{TX1RX7NLOS.png} \caption{Tx1-Rx7 NLoS; $d=83.2$ m.} \vspace*{0mm} \label{fig:NLOS TX1-RX7} \end{subfigure} \caption{LoS and NLoS measurement points for Route One.} \label{fig:TX1} \end{figure*} Route Four places the Tx in the northwest corner of PSA; the three Rx locations are placed in an alley between the Technical Theatre Laboratory (TTL) and the Scene Dock Theatre (SCD) buildings, at distances ranging from approximately 35 to 65 m. Route Five places the Tx 15 m south of the Tx location of Route Four, and the four Rx locations were placed in the same alley between SCD and TTL as for Route Four. The obstruction for this route is provided by the TTL building and foliage, as shown in Fig. \ref{fig:Micro_sce} \footnote{Delay domain results for the subset of measurements on Routes Four and Five will be presented in \cite{abbasi2022double}. That analysis is significantly different from the statistical analysis of the current work, which is based on a large set of measurements.}. \begin{figure*}[t!] 
\centering \begin{subfigure}{0.3\textwidth} \centering \hspace{0mm} \includegraphics[width=1\columnwidth]{TX4RX19LOS.png} \caption{Tx4-Rx19 LoS; $d=64.6$ m.} \vspace*{0mm} \label{fig:LOS TX4-RX19} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \hspace{0mm} \includegraphics[width=\columnwidth]{TX5RX23NLOS.png} \caption{Tx5-Rx23 NLoS; $d=45.5$ m.} \vspace*{0mm} \label{fig:NLOS TX5-RX23} \end{subfigure} \begin{subfigure}{0.3\textwidth} \centering \hspace{0mm} \includegraphics[width=0.92\columnwidth]{Tx6RX24NLOS.png} \caption{Tx6-Rx24 NLoS; $d=20.4$ m.} \label{fig:NLOS TX6-RX24} \end{subfigure} \caption{LoS and NLoS sample points for Routes Four, Five and Six.} \label{fig:TX4_5} \end{figure*} Finally, for Route Six, the points are located on the McClintock side of PSA, approximately 10 m behind the location of the Tx of Route One. The Rx locations were placed on the sidewalk next to McClintock Ave. Similar to the points of Route One, the Olin Hall of Engineering (OHE) and RTH buildings create a ``street canyon'' environment for this route. The main obstruction of the LoS is provided by the foliage between the Tx and Rx locations. A sample point (Tx6-Rx24) is shown in Fig. \ref{fig:NLOS TX6-RX24}. Table \ref{tab:dist_Tx-Rx} shows a summary of the routes, locations and distances for all the measurement points of the campaign. 
\begin{table}[ht] \centering \caption{Description of Tx-Rx links and their respective direct distances.} \label{tab:dist_Tx-Rx} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Tx identifier} & \textbf{LoS Rx identifier} & $\mathbf{d_{LoS}}$ \textbf{(m)} & \textbf{NLoS Rx identifier} & $\mathbf{d_{NLoS}}$ \textbf{(m)} \\ \hline \hline \multicolumn{1}{|l|}{\textbf{$Tx_1$}} & 1-6 & 82.5, 64.5, 40.8, 72.3, 49.8, 32.1 & 7-9 & 83.2, 73.6, 46.4 \\ \hline \multicolumn{1}{|l|}{\textbf{$Tx_2$}} & 10-13 & 20.4, 33.9, 45.9, 54.3 & - & - \\ \hline \multicolumn{1}{|l|}{\textbf{$Tx_3$}} & - & - & 14-16 & 62.6, 53.4, 40.7 \\ \hline \multicolumn{1}{|l|}{\textbf{$Tx_4$}} & 17-19 & 36.3, 57.9, 65.7 & - & - \\ \hline \multicolumn{1}{|l|}{\textbf{$Tx_5$}} & - & - & 20-23 & 35, 58.5, 66.8, 45.5 \\ \hline \multicolumn{1}{|l|}{\textbf{$Tx_6$}} & - & - & 24-26 & 20.8, 30, 20 \\ \hline \end{tabular} \end{table} \section{Parameters and processing} \subsection{Data processing} The VNA-based measurement setup explained in Section II produces a collection of frequency scans for each Tx-Rx geographical location. Each measurement can be described as a five-dimensional tensor $H_{meas}(f,\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx};d)$, where $f$ denotes the frequency points over the 1 GHz bandwidth (145-146 GHz), $\phi_{Tx}$ and $\phi_{Rx}$ denote the azimuth orientations of the Tx and Rx, respectively, $\tilde{\theta}_{Tx}$ and $\tilde{\theta}_{Rx}$ denote the elevation orientations of the Tx and Rx, respectively, and $d$ is the Tx-Rx distance. 
Each tensor, $H_{meas}$, has dimensions of $N \times N^{\tilde{\theta}}_{Tx} \times N^{\phi}_{Tx} \times N^{\tilde{\theta}}_{Rx} \times N^{\phi}_{Rx}$, where $N$ is the number of frequency points per sweep (1001), $N^{\phi}_{Tx}$ and $N^{\phi}_{Rx}$ are the numbers of azimuth directions at the Tx (13) and Rx (36), and $N^{\tilde{\theta}}_{Tx}$ and $N^{\tilde{\theta}}_{Rx}$ are the numbers of elevation directions at the Tx (3) and Rx (3), respectively. Before processing and parameter analysis, we calibrate the measured transfer functions, eliminating the effects of the system and antennas. The OTA calibration $H_{OTA}(f)$ is used to obtain the calibrated directional channel transfer function by dividing the measured channel transfer function by the OTA calibration: $H(f,\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx};d) =H_{meas}(f,\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx};d)/H_{OTA}(f)$. The calibrated channel frequency response is used to compute different parameters such as the directional power delay profile (PDP) as \begin{equation} P_{calc}(\tau,\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx},d)=|\mathcal{F}_{f}^{-1}\{H(f,\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx},d)\}|^2, \end{equation} where $\mathcal{F}_{f}^{-1}$ is the inverse fast Fourier transform (IFFT) with respect to $f$. To minimize the effects of noise, thresholding and delay gating are applied similar to \cite{gomez-ponce2020,abbasi2021double}, expressed as \begin{equation} P(\tau)=[P_{calc}(\tau): (\tau\leq\tau_{gate}) \land (P_{calc}(\tau)\geq P_{\lambda})] \end{equation} and $P(\tau)=0$ if these conditions are not fulfilled. The value $\tau_{gate}$ is the delay gating threshold, set to avoid using long delay bins or points affected by the ``wrap-around'' effect of the IFFT. $P_{\lambda}$ is the noise threshold, selected to discard the power of noise-only delay bins, which could particularly distort the delay spread and angular spread. 
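The calibration, IFFT, and gating steps above can be sketched in NumPy for a single directional measurement (a simplified, illustrative sketch; array names and the 1-D restriction are ours, not the authors' code):

```python
import numpy as np

def directional_pdp(H_meas, H_ota):
    """Calibrated directional PDP: divide out the OTA response,
    transform to the delay domain, and take the squared magnitude."""
    H = H_meas / H_ota        # per-frequency-point calibration
    h = np.fft.ifft(H)        # IFFT over the frequency axis
    return np.abs(h) ** 2

def threshold_and_gate(pdp, tau, tau_gate, noise_floor_db, margin_db=6.0):
    """Noise thresholding and delay gating: keep a delay bin only if it
    lies within the delay gate AND exceeds the noise threshold P_lambda."""
    p_lambda = 10.0 ** ((noise_floor_db + margin_db) / 10.0)
    keep = (tau <= tau_gate) & (pdp >= p_lambda)
    return np.where(keep, pdp, 0.0)
```

For an $N=1001$-point sweep over 1 GHz, the resulting delay bins are spaced 1 ns apart; in the actual processing these two functions would be applied to every beam-pair slice of the measurement tensor.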
For the current measurements, $\tau_{gate}$ is set to 933.33 ns (corresponding to 280 m excess runlength) and $P_{\lambda}$ is selected to be 6 dB above the noise floor (average noise power) of the PDP. From the collection of directional PDPs we select the strongest beam, i.e., the beam pair with the highest power (max-dir), as \begin{equation} P_{\rm max}(\tau)=P(\tau,\phi_{\hat{i}},\tilde{\theta}_{\hat{j}},\phi_{\hat{k}},\tilde{\theta}_{\hat{l}},d); (\hat{i},\hat{j},\hat{k},\hat{l}) = \arg\max_{i,j,k,l} \sum_\tau P(\tau,\phi_i,\tilde{\theta}_j,\phi_k,\tilde{\theta}_l,d). \end{equation} Finally, an ``omni-directional'' PDP is constructed by first summing over the different elevations for each delay bin, and then selecting, per delay bin, the azimuth with the strongest contribution. The selection of the strongest azimuth direction per delay bin to reconstruct a PDP is similar to \cite{Hur_omni,abbasi2021ultra}. Overall, this process can be summarized as \begin{equation} P_{\rm omni}(\tau;d)=\max_{\phi_{Tx},\phi_{Rx}} \sum_i \sum_j P(\tau,\phi_{Tx},\tilde{\theta}_{Tx}^i,\phi_{Rx},\tilde{\theta}_{Rx}^j;d), \end{equation} where $i,j\in \{1,2,3\}$ index the elevations ($ \tilde{\theta}_{Tx}^i, \tilde{\theta}_{Rx}^j \in \{-13^\circ,0^\circ,13^\circ\} $) at the Tx and Rx, respectively. Adding the different elevation cuts is meaningful because the spacing of the cuts in the elevation domain was taken as $13^\circ$, which is identical to the full width at half maximum (FWHM) beamwidth. Thus, the effective elevation pattern of the sum is approximately constant in the range $-13^\circ \le \tilde{\theta}_{Tx} \le 13^\circ$ and has a FWHM of $39^\circ$; the same holds at the Rx. \subsection{Parameter computation} \label{sec:par} Similar to the analysis performed in \cite{Abbasi2021THz}, we use the directional and omni-directional PDPs described in the previous section to compute several condensed parameters in order to characterize the propagation channels. 
The computations are based on the noise-thresholded and delay-gated PDPs calculated as described above. \subsubsection{Path loss and shadowing} The first parameter to be computed is the path loss. By definition \cite{molisch2012wireless}, it is computed as the sum of the powers in all delay bins of the PDP, \begin{equation} PL_i(d)=\sum_\tau P_i(\tau,d), \end{equation} where $i$ can denote the omni-directional (omni) or the strongest-beam (max-dir) case. To model its behavior as a function of distance, we use the classical single-slope ``power law'', also known as the $\alpha-\beta$ model, such that the path loss in dB is \begin{equation} PL_{\rm dB}(d)=\alpha+10\beta \log_{10}(d)+\epsilon, \end{equation} where $\alpha$ and $\beta$ are the estimated parameters, and $\epsilon$ represents the ``shadowing'', i.e., the random variation of the data with respect to the mean. It is assumed to follow a zero-mean normal distribution $\epsilon \sim N(0,\sigma)$, where $\sigma$ is the standard deviation of the distribution. To obtain the parameters of the model, approaches such as maximum likelihood estimation (MLE) or ordinary least squares (OLS) can be used \cite{molisch2012wireless,kartunen_PL}. Following common assumptions in the modeling of path loss, the procedure is performed separately for the ensembles of LoS and NLoS measurement points. The analysis carried out in \cite{karttunen2016path} describes the challenges of an uneven density of Tx-Rx distances (in linear and logarithmic scale). This non-uniformity can lead to an increase in the leverage of some points in the regression analysis compared to others. To compensate for this effect, \cite{karttunen2016path} implemented a weighted regression model for path loss modeling. Each weight ($w_i$) is computed according to the density of points along the distance in $\log_{10}$ scale, so $w_i$ is larger for points located in low-density areas and vice versa. 
Among the multiple weighting methods described in \cite{karttunen2016path}, we adopt the approach of ``equal weights to $N$ bins over $\log_{10}(d)$'', because this strategy corresponds to a least-squares fit of dB vs. $\log_{10}(d)$. \subsubsection{Delay spread} The rms delay spread (RMSDS) is calculated as the second central moment of the PDP \cite{molisch2012wireless}: \begin{equation} \sigma_\tau=\sqrt{\frac{\int_\tau P_i(\tau)\tau^2 d\tau}{\int_\tau P_i(\tau)d\tau} - \left(\frac{\int_\tau P_i(\tau)\tau d\tau}{\int_\tau P_i(\tau)d\tau}\right)^2}, \end{equation} where $i$ can be ``omni'' or ``max-dir''. Noise and delay thresholding are essential for reducing the impact of long-delayed artefacts. Since this parameter is defined for continuous waveforms, we approximate it by oversampling the PDPs, i.e., increasing the number of samples. Additionally, we apply a Hann window to reduce the impact of the sidelobes on the parameter estimation. \subsubsection{Angular spread} \label{sect:AS} The measurement campaign creates a ``virtual'' MIMO scenario for each location pair, allowing angular analysis. A way to quantify the dispersion of power over different angular directions is the angular spread. The starting point of its computation is the double-directional angular power spectrum ($DDAPS_{full}$), which describes the concentration of power in particular azimuth and elevation directions at Tx and Rx. It is computed as \begin{equation} DDAPS_{full}(\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx};d)=\sum_\tau P(\tau,\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx};d). \end{equation} Similar to the delay spread analysis, noise thresholding and delay gating are important before the computation of $DDAPS_{full}$, to minimize noise accumulation in directions where no significant MPC is observed. 
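The RMSDS definition above reduces, for a sampled PDP, to a discrete second central moment; a minimal sketch (simplified: the paper additionally oversamples and Hann-windows the PDP before this step):

```python
import numpy as np

def rms_delay_spread(pdp, tau):
    """RMS delay spread: square root of the second central moment of the
    power-normalized PDP (discrete analogue of the integral definition)."""
    p_total = pdp.sum()
    mean_tau = (pdp * tau).sum() / p_total          # mean delay (first moment)
    mean_tau_sq = (pdp * tau**2).sum() / p_total    # second moment
    return np.sqrt(mean_tau_sq - mean_tau**2)
```

As a quick check of the implementation, two equal-power taps spaced 100 ns apart yield $\sigma_\tau = 50$ ns, the familiar two-path result.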
Using $DDAPS_{full}$, we add the contributions of the different elevations at both link ends to obtain the DDAPS, similar to \cite{Abbasi2021THz}: \begin{equation} DDAPS(\phi_{Tx},\phi_{Rx};d)=\sum_{\tilde{\theta}_{Tx}}\sum_{\tilde{\theta}_{Rx}} DDAPS_{full}(\phi_{Tx},\tilde{\theta}_{Tx},\phi_{Rx},\tilde{\theta}_{Rx};d). \end{equation} We combine the different measured elevations since the limited number of elevation cuts (which was imposed by limits on the measurement duration) is insufficient for a detailed elevation analysis. Moreover, since the directions of the dominant propagation paths are well covered, little additional information is expected in further elevation cuts. Finally, to compute the (azimuthal) angular power spectrum (APS) at the Tx, we integrate over $\phi_{Rx}$, and analogously for the APS at the Rx. Using the APS, we compute the angular spread by applying Fleury's definition \cite{fleury2000first}: \begin{equation} \sigma^\circ=\sqrt{\frac{\sum_\phi \left|e^{j\phi}-\mu_\phi \right|^2 APS_k(\phi)}{\sum_\phi APS_k(\phi)}}, \end{equation} where $k$ can be Tx or Rx, indicating the departure or arrival APS, and $\mu_\phi$ is computed as \begin{equation} \mu_\phi=\frac{\sum_\phi e^{j\phi} APS_k(\phi)}{\sum_\phi APS_k(\phi)}. \end{equation} It is important to mention that the obtained values are an upper bound for the actual angular spreads of the channel, due to the finite horn antenna beamwidth \cite{Abbasi2021THz}. \subsubsection {Power distribution over MPCs} In channel analysis, it is important to examine the power distribution of the MPCs over the delay domain, in particular the concentration of power in the strongest MPC relative to the rest of the MPCs in the channel. 
Thus, we define $\kappa_1$, a parameter computed as \begin{equation} \kappa_1=\frac{P_i(\tilde{\tau}_1)}{\sum_{\tilde{\tau}=\tilde{\tau}_2}^{\tilde{\tau}_N} P_i(\tilde{\tau})}, \end{equation} where $i$ can be ``omni'' or ``max-dir'', and $\tilde{\tau}_k$ is the delay bin of the $k$-th local maximum of the PDP $P_i(\tilde{\tau})$, ordered by magnitude, so that $\tilde{\tau}_1$ signifies the location of the largest local maximum. As explained in \cite{abbasi2020channel}, $\kappa_1$ is different from the ``Rice factor'' because it is not possible to differentiate between closely spaced MPCs; hence, a local maximum of the PDP is not strictly identical to an MPC. A more accurate Rice factor analysis requires HRPE, so that MPCs are properly identified; this will be presented in future work. As for $\sigma_\tau$, we apply oversampling and a Hann window to mitigate sidelobe effects and obtain a better estimate of the parameter.\\ In the next section, regression analysis over distance will be applied to the parameters $\sigma_\tau$ and $\kappa_1$, similar to \cite{Abbasi2021THz}. With this regression, we observe their behavior with respect to the distance between Tx and Rx. The linear regression model is with respect to the logarithm of the distance, $Z=\alpha+\beta \log_{10}(d)$. \section{Measurement results} In this section the results of the measurement campaign are discussed. \subsection{Power delay profiles} We first present some sample PDPs, characterizing one LoS and two NLoS location pairs. The LoS measurement was taken at a distance of 82.5 m. Fig. \ref{fig:PDP-LOS} presents the omni-directional and max-dir PDPs. The LoS MPC is clearly observed in both. Apart from the LoS MPC, multiple MPCs with runlengths $\leq 160$ m are visible, with powers only up to 30 dB lower than the LoS. 
These "extra" components are diminished in the max-dir as a result of the spatial filtering effect provided by the antennas. In this particular case, for the omni-directional case, we observed several (very weak) MPCs arriving before the LoS MPC. As explained in section II, the maximum measurable excess delay of the system is $1 \mu s$ which leads to 300 m of maximum runlength. Any MPC with delay $\geq 1 \mu s$ will suffer from aliasing, and so be wrapped around in delay domain. This effect was corrected for all figures. Additionally, the PDPs shown are oversampled and windowed using a Hann window to diminish the effect of sidelobes and observe low power MPCs. \\ \begin{figure} \centering \centering \hspace{7mm} \includegraphics[width=0.5\columnwidth]{los_82.5m.eps} \caption{LoS case with $d=82.5 m$ (Tx1-Rx1).} \label{fig:PDP-LOS} \end{figure} For the NLoS case, we present two location pairs, with Tx-Rx distances of 45.5 and 83 m, respectively. A richer multipath scenario is expected because of the attenuation of the LoS component and increase of additional MPCs that arrive at the Rx. In the case of the 45.5 m measurement, we see a concentrated max-dir PDP, and small quantity of additional MPCs with power $\leq 30dB$, similar to a LoS scenario. The scenario for this measurement is shown in Fig. \ref{fig:NLOS TX5-RX23}, and as can be seen, the Tx is set in the PSA building and the Rx is located in the alley between TTL and SCD, creating a "street-canyon" and concentrating (in the delay domain) the power reaching to the Rx, since all components guided by the canyon have fairly similar delays created by different number of reflections on the housewalls, which are just a street width apart. We also note that while the first pronounced peak in the PDP is the strongest one, it is {\em not} a quasi-LoS (as often observed at low frequencies), as shown by the fact that its associated delay is {\em longer} than that of the (theoretical) LoS. \begin{figure*}[t!] 
\centering \begin{subfigure}{0.45\textwidth} \centering \hspace{7mm} \includegraphics[width=1\columnwidth]{nlos_45.5m.eps} \caption{NLoS case with $d=45.5$ m (Tx5-Rx23).} \label{fig:PDP-NLOS1} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \hspace{7mm} \includegraphics[width=1\columnwidth]{nlos_83m.eps} \caption{NLoS case with $d=83$ m (Tx1-Rx7).} \vspace*{0mm} \label{fig:PDP-NLOS2} \end{subfigure} \caption{PDPs for two sample NLoS measurement cases.} \label{fig:PDP-NLOS} \end{figure*} The second location pair is shown in Fig. \ref{fig:NLOS TX1-RX7}; in this case, a larger set of MPCs is observed, especially in the omni-directional PDP, compared to the previous NLoS case. These MPCs are products of reflections off the RTH building. This effect can also be noticed in Fig. \ref{fig:APS-NLOS2}, where we see that the first significant MPC is not the strongest one. This scenario is discussed in more detail in the next subsection. \subsection{Angular power spectrum} This section discusses the angular power spectrum (APS) of the selected sample LoS and NLoS location pairs. For the LoS case, we observe a large concentration of MPCs in the LoS direction; an additional concentration of MPCs can be observed at $\phi_{Tx}=37^\circ,\phi_{Rx}=35^\circ$. These MPCs correspond to reflections coming off the RTH building. Additionally, we can also observe MPCs at angles close to $\phi_{Tx}=0^\circ,\phi_{Rx}=180^\circ$.\\ \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{los_82.5m_aps.eps} \caption{LoS APS for $d=82.5$ m (Tx1-Rx1).} \vspace*{0mm} \label{fig:APS-LOS} \end{figure} The NLoS points show a different behavior compared to LoS. In the case of point Tx5-Rx23, we see a large concentration of MPCs in one main direction, similar to the sample LoS case. However, the center of this concentration is not in the LoS direction, but rather in the direction of the street, at $\phi_{Tx}=-15^\circ,\phi_{Rx}=-27^\circ$. 
This concentration of MPCs is a product of the ``street canyon'' effect created by the SCD and TTL buildings (see Fig. \ref{fig:NLOS TX5-RX23}). An additional concentration of MPCs can be observed at $\phi_{Tx}=-15^\circ$, $\phi_{Rx}=47^\circ$; in this case, the Tx horn is still facing the canyon, but the receiver collects a weaker reflection inside it. The last NLoS location pair (Tx1-Rx7), at a distance of $d=83$ m, shows several maxima in the APS, as can be seen in Fig. \ref{fig:APS-NLOS2}, with the strongest one at $\phi_{Tx}=37^\circ$, $\phi_{Rx}=28^\circ$. This corresponds to the Tx and Rx looking towards the RTH building, and is thus congruent with the scenario observed in Fig. \ref{fig:NLOS TX1-RX7}. As can be seen in the picture, the LoS is blocked by the pillars in front of the receiver, and the right-hand side of the receiver has an opening facing McClintock Ave and the OHE, RTH and EEB buildings. Moreover, additional weaker MPCs (approx. 8 dB weaker than the strongest MPCs) are observed at $\phi_{Tx}=-38^\circ$, $\phi_{Rx}=27^\circ$. These MPCs are also reflections from RTH; however, they reach the receiver through the left-hand gap between the inner wall of the GER building and the pillar, which introduces additional attenuation. \begin{figure*} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{nlos_45.5m_aps.eps} \caption{NLoS APS for $d=45.5$ m (Tx5-Rx23).} \vspace*{0mm} \label{fig:APS-NLOS1} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{nlos_83m_aps.eps} \caption{NLoS APS for $d=83$ m (Tx1-Rx7).} \vspace*{0mm} \label{fig:APS-NLOS2} \end{subfigure} \caption{Sample APSs for two NLoS cases.} \label{fig:APS-NLOS} \end{figure*} The above discussion not only describes the relevant propagation effects, but also supports the correctness of the measurements, as the extracted MPCs are in agreement with the geometry of the environment.
Further verifications, not shown here for space reasons, were done for other location pairs as well. \subsection{Path loss and shadowing} In this section we start analyzing the ensemble of measurement locations. For the analysis, the points are separated into LoS and NLoS cases. For the LoS case, Fig. \ref{PLOSS-LOS} shows the path loss extracted from the max-dir and omni-directional PDPs, together with the Friis model. For all points, the max-dir path loss is larger than or equal to the omni-directional path loss ($PL_{max-dir}\geq PL_{omni}$). Both the max-dir and omni-directional PL fits lie below the Friis model; the max-dir PL exponent is $\beta=1.88$, lower than the free-space value of 2. This is congruent with the scenario, because the LoS points in Routes One and Four are in ``street canyon'' LoS environments (9 of 13 locations); the waveguiding effect thus produces a path loss lower than in free space. The parameters extracted by the weighted regression and by OLS are similar because of the small deviations of the points from their linear models. Additionally, the shadowing shows the same variance in both cases and only a small difference in the mean value. \begin{figure*}[t!] \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{pl_los.eps} \caption{Path loss modeling with $\log_{10}(d)$ weighting.} \vspace*{0mm} \label{PLOSS-LOS} \end{subfigure} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{sha_los.eps} \caption{Shadowing.} \vspace*{0mm} \label{SHA-LOS} \end{subfigure} \caption{Path loss and shadowing models for LoS points.}% \label{fig:los_PL_SHA}% \vspace{-5mm} \end{figure*} Fig. \ref{PLOSS-NLOS} shows the regression modeling for the NLoS case.
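The weighted regression referenced in these fits can be sketched as follows; this is a minimal illustration assuming the common $PL(d)=\alpha+10\beta\log_{10}(d)$ form with residuals weighted proportionally to $\log_{10}(d)$ (the exact weighting used for the reported fits may differ).

```python
import numpy as np

def fit_path_loss(d_m, pl_db, weighted=True):
    """Fit PL(d) = alpha + 10*beta*log10(d) by least squares.
    With weighted=True, each point's residual is weighted by
    log10(d), emphasising larger distances; weighted=False is OLS.
    Returns (alpha, beta, shadowing residuals in dB)."""
    x = np.log10(np.asarray(d_m, float))
    y = np.asarray(pl_db, float)
    w = x if weighted else np.ones_like(x)
    A = np.column_stack([np.ones_like(x), 10.0 * x])
    (alpha, beta), *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return alpha, beta, y - (alpha + 10.0 * beta * x)
```

The residuals returned by the fit are the shadowing samples whose statistics are reported in the shadowing tables.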
The max-dir points show larger values of PL than the omni-directional points, since in this case a significant percentage of the energy is contained in MPCs whose directions differ from the max-dir horn orientations. For a similar reason, the path loss exponents for the max-dir and omni-directional cases are different ($\beta=2.57$ and $\beta=1.76$, respectively). The omni-directional case has a smaller slope because more MPCs from different directions provide energy at large distances. The shadowing oscillates between -15 and 15 dB for the omni-directional and max-dir cases. The observed shadowing standard deviations are 6.24 dB and 7.89 dB for the omni-directional and max-dir cases, respectively. A summary of the estimated regression parameters for the path loss and the statistical parameters for the shadowing, with their respective 95\% confidence intervals, is given in Tables \ref{tab:PL} and \ref{tab:sha}. \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{PL.eps} \caption{Linear fitting with $\log_{10}(d)$ weighting.} \label{PLOSS-NLOS} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{sha.eps} \caption{Shadowing.} \label{SHA-NLOS} \end{subfigure} \caption{Path loss and shadowing models for NLoS points.}% \label{fig:nlos_PL_SHA}% \vspace{-5mm} \end{figure*} In the NLoS case, we observed path loss values larger than Friis, except for the point Tx5-Rx23. This point is located in a corridor between the SCD and TTL buildings (see Fig. \ref{fig:NLOS TX5-RX23}). In this case there exists a very strong reflection, and the associated directional path loss equals Friis, while the omni-directional path loss is lower due to the existence of additional MPCs; similar to the LoS situation, this is {\em not} unphysical. \begin{table*}[t!]
\centering \caption{Path loss parameters with $95\%$ confidence interval.} \label{tab:PL} {% \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Linear model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\alpha$} & \multicolumn{1}{c|}{$\alpha_{min,95\%}$} & \multicolumn{1}{c|}{$\alpha_{max,95\%}$} & \multicolumn{1}{c|}{$\beta$} & \multicolumn{1}{c|}{$\beta_{min,95\%}$} & \multicolumn{1}{c|}{$\beta_{max,95\%}$} \\ \hline \hline $PL_{omni}^{LoS}$ & 72.88 & 69.91 & 75.86 & 1.93 & 1.74 & 2.11 \\ \hline $PL_{max-dir}^{LoS}$ & 77.33 & 74.1 & 80.57 & 1.88 & 1.68 & 2.08 \\ \hline $PL_{omni}^{LoS} OLS$ & 75.02 & 70.47 & 79.58 & 1.8 & 1.53 & 2.08 \\ \hline $PL_{max-dir}^{LoS} OLS$ & 77.06 & 71.74 & 82.37 & 1.89 & 1.58 & 2.21 \\ \hline $PL_{omni}^{NLoS}$ & 91.28 & 62.71 & 119.85 & 1.76 & -0.05 & 3.56 \\ \hline $PL_{max-dir}^{NLoS}$ & 84.54 & 49.21 & 119.88 & 2.57 & 0.34 & 4.81 \\ \hline $PL_{omni}^{NLoS} OLS$ & 86.81 & 52.96 & 120.66 & 2.03 & -0.01 & 4.07 \\ \hline $PL_{max-dir}^{NLoS} OLS$ & 82.91 & 39.96 & 125.87 & 2.68 & 0.09 & 5.27 \\ \hline \end{tabular}% } \end{table*} \begin{table*}[t!] 
\centering \caption{Shadowing model parameters with $95\%$ confidence interval.} \label{tab:sha} {% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Statistical model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\mu_{min,95\%}$} & \multicolumn{1}{c|}{$\mu_{max,95\%}$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\sigma_{min,95\%}$} & \multicolumn{1}{c|}{$\sigma_{max,95\%}$} \\ \hline \hline $\epsilon_{omni}^{LoS}$ & 0.09 & -0.35 & 0.52 & 0.72 & 0.52 & 1.19 \\ \hline $\epsilon_{max-dir}^{LoS}$ & -0.01 & -0.5 & 0.48 & 0.8 & 0.58 & 1.33 \\ \hline $\epsilon_{omni}^{LoS} OLS$ & 0 & -0.42 & 0.42 & 0.69 & 0.49 & 1.14 \\ \hline $\epsilon_{max-dir}^{LoS} OLS$ & 0 & -0.49 & 0.49 & 0.8 & 0.58 & 1.33 \\ \hline $\epsilon_{omni}^{NLoS}$ & 0.04 & -3.73 & 3.81 & 6.24 & 4.48 & 10.3 \\ \hline $\epsilon_{max-dir}^{NLoS}$ & 0.18 & -4.59 & 4.94 & 7.89 & 5.66 & 13.02 \\ \hline $\epsilon_{omni}^{NLoS} OLS$ & 0 & -3.76 & 3.76 & 6.21 & 4.46 & 10.26 \\ \hline $\epsilon_{max-dir}^{NLoS} OLS$ & 0 & -4.77 & 4.77 & 7.89 & 5.65 & 13.02 \\ \hline \end{tabular}% } \end{table*} \subsection{RMSDS} The next parameter to evaluate is the root-mean-square delay spread (RMSDS). In the LoS case, we expect lower values for the max-dir PDPs due to the spatial filtering. Similarly, an increase of the RMSDS with increasing Tx-Rx distance is expected, due to the larger number of MPCs and the larger differences in their run lengths. Fig. \ref{fig:RMSDS-LOS}a shows the distribution of the RMSDS. It is plotted on a logarithmic scale, i.e., in dB, as is common in particular in 3GPP models. This representation also allows one to easily see the excellent fit of a lognormal distribution to the measurement results. The variance of the max-dir points is approximately $62\%$ of the value of the omni-directional case. Fig.
\ref{fig:RMSDS-LOS}b shows the RMSDS as a function of distance together with the linear regression, showing an increase with distance, as anticipated (and in agreement with experimental results at lower frequencies). It is also observed that for all measurement points the max-dir values are smaller than the omni-directional ones. \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{ds_los.eps} \caption{CDF of delay spread.} \vspace*{0mm} \label{fig:RMSDS-LOS-CDF}% \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{dsvd_los.eps} \caption{Linear modeling of $\sigma_\tau$ with weighting.} \vspace*{0mm} \label{fig:RMSDS-LOS-LF}% \end{subfigure} \caption{Modeling of delay spread for LoS cases.}% \label{fig:RMSDS-LOS}% \vspace{-5mm} \end{figure*} Fig. \ref{fig:RMSDS-NLOS} shows the RMSDS analysis for the NLoS case. It is observed that the linear distance-dependence fits have different slopes ($\beta_{omni}^{NLoS}=11.91$, $\beta_{max-dir}^{NLoS}=7.14$). This behavior can be related to the ``street canyon'' scenarios of Routes One, Four, and Six. The waveguiding effect concentrates the power and the MPCs in a small set of directions, so the max-dir PDPs contain a low number of MPCs, concentrated in a small range of delay bins. A special case of the waveguiding effect is the point Tx5-Rx23 ($d=45.5$ m), for which the $\sigma_\tau$ values for the omni-directional and max-dir cases are almost equal. A summary of the estimated regression parameters and the statistical analysis is given in Tables \ref{tab:linear-model-RMSDS} and \ref{tab:RMSDS_CDF}. \begin{figure*}[t!]
\centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{ds.eps} \caption{CDF.} \vspace*{0mm} \label{fig:RMSDS-NLOS-CDF}% \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{dsvd.eps} \caption{Linear fitting with $\log_{10}(d)$ weighting.} \vspace*{0mm} \label{fig:RMSDS-NLOS-LF}% \end{subfigure} \caption{Modeling of delay spread for NLoS points.}% \label{fig:RMSDS-NLOS}% \vspace{-5mm} \end{figure*} \begin{table*}[t!] \centering \caption{Linear model parameters for $\sigma_{\tau}$ with $95\%$ confidence interval.} \label{tab:linear-model-RMSDS} {% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Linear model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\alpha$} & \multicolumn{1}{c|}{$\alpha_{min,95\%}$} & \multicolumn{1}{c|}{$\alpha_{max,95\%}$} & \multicolumn{1}{c|}{$\beta$} & \multicolumn{1}{c|}{$\beta_{min,95\%}$} & \multicolumn{1}{c|}{$\beta_{max,95\%}$} \\ \hline \hline $\sigma_{\tau_{omni}}^{LoS}$ & -108.22 & -122.6 & -93.83 & 17.82 & 8.76 & 26.88 \\ \hline $\sigma_{\tau_{max-dir}}^{LoS}$ & -94.11 & -100.92 & -87.29 & 4.99 & 0.7 & 9.28 \\ \hline $\sigma_{\tau_{omni}}^{NLoS}$ & -96.16 & -114.09 & -78.22 & 11.91 & 0.57 & 23.26 \\ \hline $\sigma_{\tau_{max-dir}}^{NLoS}$ & -96.71 & -113.8 & -79.63 & 7.14 & -3.67 & 17.95 \\ \hline \end{tabular}% } \end{table*} \begin{table*}[t!]
\centering \caption{Statistical model parameters for $\sigma_{\tau}$ with $95\%$ confidence interval.} \label{tab:RMSDS_CDF} {% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Statistical model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\mu_{min,95\%}$} & \multicolumn{1}{c|}{$\mu_{max,95\%}$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\sigma_{min,95\%}$} & \multicolumn{1}{c|}{$\sigma_{max,95\%}$} \\ \hline \hline $\sigma_{\tau_{omni}}^{LoS}$ & -78.11 & -80.68 & -75.55 & 4.25 & 3.05 & 7.01 \\ \hline $\sigma_{\tau_{max-dir}}^{LoS}$ & -85.8 & -86.98 & -84.62 & 1.95 & 1.4 & 3.22 \\ \hline $\sigma_{\tau_{omni}}^{NLoS}$ & -76.38 & -79.2 & -73.56 & 4.66 & 3.34 & 7.7 \\ \hline $\sigma_{\tau_{max-dir}}^{NLoS}$ & -84.96 & -87.58 & -82.34 & 4.33 & 3.11 & 7.15 \\ \hline \end{tabular}% } \end{table*} \subsection{Angular spread} The next parameter to analyze is the angular spread. In this case, the analysis is separated between the Tx and Rx ends. As explained in Section II, the scan ranges for the Tx and Rx are different, so we conjecture that a larger angular spread is observed at the Rx side for both LoS and NLoS cases. Furthermore, the larger number of scattering objects at street level is expected to compound this effect. \\ \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{as_los.eps} \caption{LoS case.} \vspace*{0mm} \label{fig:AS_CDF_LOS}% \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{AS.eps} \caption{NLoS case.} \vspace*{0mm} \label{fig:AS_CDF_NLOS}% \end{subfigure} \caption{Modeling of $\sigma^\circ$ for all points.}% \label{fig:AS-CDF}% \vspace{-5mm} \end{figure*} Fig. \ref{fig:AS-CDF} shows the CDFs for the LoS and NLoS cases.
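The per-link angular spread values underlying these CDFs are computed from the measured APS; below is a minimal sketch assuming Fleury's definition of the angular spread (the paper's exact estimator may differ).

```python
import numpy as np

def angular_spread(phi_deg, power):
    """Fleury-style angular spread of a discrete APS: MPC powers at
    azimuth angles phi_deg. Dimensionless, in [0, sqrt(2)]; 0 means
    all power arrives from a single direction."""
    p = np.asarray(power, float)
    p = p / p.sum()
    e = np.exp(1j * np.deg2rad(np.asarray(phi_deg, float)))
    mu = np.sum(e * p)          # power-weighted mean direction
    return np.sqrt(np.sum(np.abs(e - mu) ** 2 * p))
```

Mapping the angles onto the unit circle makes the measure invariant to the choice of angular origin, which is why this definition is common for azimuthal spreads.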
In both cases, the data confirm our hypothesis. For example, in the LoS case the Tx shows a smaller spread than the Rx ($\sigma^\circ_{LoS} Tx < \sigma^\circ_{LoS} Rx$). This result is related to the fact that MPCs are reflected in the vicinity of the Rx and are ``seen'' by the Tx under angles similar to that of the LoS. On the other hand, the NLoS points show a similar spread at the Tx and Rx (i.e., $\sigma^\circ_{NLoS} Tx \approx \sigma^\circ_{NLoS} Rx$). A possible cause for this behavior is the waveguiding in the ``street canyon'' environments, which concentrates the MPCs in a narrower angular range. A summary of the estimated statistical parameters with their $95\%$ confidence intervals is shown in Table \ref{tab:stat-model-AS}. \begin{table*}[t!] \centering \caption{Statistical model parameters for $\sigma^\circ$ with $95\%$ confidence interval.} \label{tab:stat-model-AS} {% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Statistical model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\mu_{min,95\%}$} & \multicolumn{1}{c|}{$\mu_{max,95\%}$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\sigma_{min,95\%}$} & \multicolumn{1}{c|}{$\sigma_{max,95\%}$} \\ \hline \hline $\sigma^\circ_{LoS} Tx$ & -0.72 & -0.77 & -0.67 & 0.08 & 0.06 & 0.13 \\ \hline $\sigma^\circ_{LoS} Rx$ & -0.51 & -0.62 & -0.4 & 0.18 & 0.13 & 0.3 \\ \hline $\sigma^\circ_{NLoS} Tx$ & -0.49 & -0.6 & -0.38 & 0.18 & 0.13 & 0.3\\ \hline $\sigma^\circ_{NLoS} Rx$ & -0.33 & -0.45 & -0.21 & 0.19 & 0.14 & 0.32\\ \hline \end{tabular}% } \end{table*} \subsection{Power distribution of MPCs} The final parameter estimated is $\kappa_1$. Our hypothesis is that larger values of $\kappa_1$ are observed in the max-dir cases compared to the omni-directional ones. Fig. \ref{fig:k1-LOS} shows the estimated values for the LoS case.
As can be observed in Fig. \ref{fig:k1_CDF}, the LoS points for the omni-directional case have a similar spread compared to the max-dir cases, but a significantly smaller mean. Fig. \ref{fig:k1_LS} shows the regression analysis of the power distribution. The observed range oscillates between 4 and 23 dB. As observed in the plot, $\kappa_1$ for the max-dir case grows as the distance increases, while the omni-directional case shows a decreasing trend. The filtering effect of the antenna decreases the number of MPCs received by the Rx. As the distance increases, additional MPCs (coming from reflections) suffer from further attenuation, and only those in the LoS direction are boosted by the antenna gain. On the other hand, in the omni-directional case, the value of $\kappa_1$ decreases because, as the distance increases, more MPCs are collected from directions apart from the LoS\footnote{An unusual behavior is observed at point Tx1-Rx6, where $\kappa_1^{omni} > \kappa_1^{max-dir}$, though the difference is small. We conjecture that this is caused by imperfections in the calibration procedure and in the generation of omni-directional PDPs from the directional PDPs.}. For a summary of the parameters, see Tables \ref{tab:linear-model-kappa} and \ref{tab:stat-model-kappa}. \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{k1_los.eps} \caption{CDF.} \vspace*{0mm} \label{fig:k1_CDF}% \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{k1vd_los.eps} \caption{Linear fitting with $\log_{10}(d)$ weighting.} \vspace*{0mm} \label{fig:k1_LS}% \end{subfigure} \caption{Modeling of $\kappa_1$ for LoS points.}% \label{fig:k1-LOS}% \vspace{-5mm} \end{figure*} In the NLoS case, we observed values ranging from -10 to 22 dB. This high variability can be related to the multiple points in ``street canyon'' scenarios (Routes One, Five, and Six).
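For reference, one common way to estimate a power ratio such as $\kappa_1$ from an extracted set of MPC powers is the ratio of the strongest MPC power to the sum of the powers of all remaining MPCs; a sketch of this estimator (the paper's exact definition may differ):

```python
import numpy as np

def kappa1_db(mpc_powers):
    """Power ratio kappa_1 in dB: strongest MPC power divided by the
    sum of the powers of all remaining MPCs."""
    p = np.sort(np.asarray(mpc_powers, float))[::-1]  # strongest first
    return 10.0 * np.log10(p[0] / p[1:].sum())
```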
The ``street canyon'' filters/concentrates the MPCs arriving at the Rx. Furthermore, $\kappa_1$ is reduced when the distance increases, both for the omni-directional and the max-dir case. Similarly to the RMSDS analysis, the points Tx5-Rx23 and Tx6-Rx24 show a different behavior ($\kappa_{1_{omni}}^{NLoS}>\kappa_{1_{max-dir}}^{NLoS}$). This is related to the fact that the strongest MPC angle lies between two azimuthal captures. More details about the regression analysis and the statistical modeling are given in Tables \ref{tab:linear-model-kappa} and \ref{tab:stat-model-kappa}. \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{k1.eps} \caption{CDF.} \vspace*{0mm} \label{fig:k1_CDF_NLOS}% \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1\columnwidth]{k1vd.eps} \caption{Linear fitting with $\log_{10}(d)$ weighting.} \vspace*{0mm} \label{fig:k1_LS_NLOS}% \end{subfigure} \caption{Modeling of $\kappa_1$ for NLoS points.}% \label{fig:k1_NLOS}% \vspace{-5mm} \end{figure*} \begin{table*}[t!]
\centering \caption{Linear model parameters for $\kappa_1$ with $95\%$ confidence interval.} \label{tab:linear-model-kappa} {% \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Linear model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|l|}{} & \multicolumn{1}{c|}{$\alpha$} & \multicolumn{1}{c|}{$\alpha_{min,95\%}$} & \multicolumn{1}{c|}{$\alpha_{max,95\%}$} & \multicolumn{1}{c|}{$\beta$} & \multicolumn{1}{c|}{$\beta_{min,95\%}$} & \multicolumn{1}{c|}{$\beta_{max,95\%}$} \\ \hline \hline $\kappa_{1_{omni}}^{LoS}$ & 25.59 & 7.49 & 43.7 & -8.87 & -20.27 & 2.53\\ \hline $\kappa_{1_{max-dir}}^{LoS}$ & 1.32 & -16.25 & 18.89 & 8.13 & -2.94 & 19.19 \\ \hline $\kappa_{1_{omni}}^{NLoS}$ & 38.54 & 8.12 & 68.96 & -23.29 & -42.54 & -4.05\\ \hline $\kappa_{1_{max-dir}}^{NLoS}$ & 28.95 & 1.39 & 56.52 & -11.35 & -28.78 & 6.09 \\ \hline \end{tabular}% } \end{table*} \begin{table*}[t!] \centering \caption{Statistical model parameters for $\kappa_1$ with $95\%$ confidence interval.} \label{tab:stat-model-kappa} {% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Parameter}}} & \multicolumn{6}{c|}{\textbf{Statistical model parameters estimated with 95\% CI}} \\ \cline{2-7} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\mu_{min,95\%}$} & \multicolumn{1}{c|}{$\mu_{max,95\%}$} & \multicolumn{1}{c|}{$\sigma$} & \multicolumn{1}{c|}{$\sigma_{min,95\%}$} & \multicolumn{1}{c|}{$\sigma_{max,95\%}$} \\ \hline \hline $\kappa_{1_{omni}}^{LoS}$ & 11.01 & 8.03 & 13.98 & 4.92 & 3.53 & 8.12 \\ \hline $\kappa_{1_{max-dir}}^{LoS}$ & 14.72 & 11.87 & 17.57 & 4.72 & 3.38 & 7.78 \\ \hline $\kappa_{1_{omni}}^{NLoS}$ & 0 & -5.14 & 5.13 & 8.5 & 6.1 & 14.03 \\ \hline $\kappa_{1_{max-dir}}^{NLoS}$ & 10.57 & 6.15 & 14.98 & 7.3 & 5.23 & 12.05 \\ \hline \end{tabular}% } \end{table*} \subsection{Summary of results} In this section, a summary of the 
estimated parameters for system design or channel simulation is shown in Tables \ref{tab:linear-model-summary} and \ref{tab:stat-model-summary}. Table \ref{tab:linear-model-summary} shows the regression analysis (i.e., linear modeling) of the distance dependence of the parameters for both the LoS and NLoS cases. Table \ref{tab:stat-model-summary} shows the estimated parameters of the statistical fits carried out in this analysis for both the LoS and NLoS cases. Please note that the presented statistical results are valid for the range of distances we measured over ($\approx 20$--$85$ m). It is important to note that the parameters obtained in the analysis are directly related to the number of points and the selection of measurement locations. In other words, this analysis is impacted by the fact that the measurement locations were chosen such that reasonable Rx power could be anticipated. An analysis of outage probability should consider a ``blind'' selection of points, e.g., on a regular grid, which would allow an assessment of the percentage of points that cannot sustain communications at a given sensitivity level. Other parameters, which might be correlated with the received power, might also conceivably be influenced by the selection of the points. The results in this paper should thus be interpreted as ``conditioned on the existence of reasonable Rx power''. Furthermore, while more than 100,000 transfer functions were measured in the current campaign, the number of measured {\em location pairs} is still somewhat limited. Hence, this model is based on a relatively small number of points and should be viewed as an initial channel model for realistic system design. A larger number of measurement locations would obviously increase the validity of the analysis.
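To illustrate how the tabulated parameters can be used, the sketch below draws path loss and RMS delay spread realisations from the omni-directional LoS rows of the summary tables, assuming the usual $PL(d)=\alpha+10\beta\log_{10}(d)$ form, Gaussian shadowing in dB, and a lognormal delay spread (i.e., Gaussian in dB, as tabulated); this is an illustrative use of the tables, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Omni-directional LoS values copied from the summary tables
ALPHA, BETA = 72.88, 1.93        # path loss intercept / exponent
SHA_MU, SHA_SIGMA = 0.09, 0.72   # shadowing, dB
DS_MU, DS_SIGMA = -78.11, 4.25   # 10*log10(RMS delay spread / 1 s), dB

def draw_channel(d_m):
    """Draw one path loss (dB) and RMS delay spread (s) realisation
    for a Tx-Rx distance d_m; valid only for the measured ~20-85 m."""
    pl = ALPHA + 10.0 * BETA * np.log10(d_m) + rng.normal(SHA_MU, SHA_SIGMA)
    ds = 10.0 ** (rng.normal(DS_MU, DS_SIGMA) / 10.0)
    return pl, ds
```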
However, the time required to perform the current campaign was quite significant (several months), and it is among the largest double-directional campaigns ever performed in the THz regime (for any type of environment). Future measurements will be added to improve the model further. \begin{table*}[t!] \centering \caption{Linear model parameters summary.} \label{tab:linear-model-summary} {% \begin{tabular}{|c|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Parameter}} & \multicolumn{1}{c|}{$\alpha$} & \multicolumn{1}{c|}{$\beta$} \\ \hline \hline $PL_{omni}^{LoS}$ & 72.88 & 1.93 \\ \hline $PL_{max-dir}^{LoS}$ & 77.33 & 1.88 \\ \hline $PL_{omni}^{LoS} OLS$ & 75.02 & 1.8 \\ \hline $PL_{max-dir}^{LoS} OLS$ & 77.06 & 1.89 \\ \hline $\sigma_{\tau_{omni}}^{LoS}$ & -108.22 & 17.82 \\ \hline $\sigma_{\tau_{max-dir}}^{LoS}$ & -94.11 & 4.99 \\ \hline $\kappa_{1_{omni}}^{LoS}$ & 25.59 & -8.87 \\ \hline $\kappa_{1_{max-dir}}^{LoS}$ & 1.32 & 8.13 \\ \hline $PL_{omni}^{NLoS}$ & 91.28 & 1.76 \\ \hline $PL_{max-dir}^{NLoS}$ & 84.54 & 2.57 \\ \hline $PL_{omni}^{NLoS} OLS$ & 86.81 & 2.03 \\ \hline $PL_{max-dir}^{NLoS} OLS$ & 82.91 & 2.68 \\ \hline $\sigma_{\tau_{omni}}^{NLoS}$ & -96.16 & 11.91\\ \hline $\sigma_{\tau_{max-dir}}^{NLoS}$ & -96.71 & 7.14\\ \hline $\kappa_{1_{omni}}^{NLoS}$ & 38.54 & -23.29\\ \hline $\kappa_{1_{max-dir}}^{NLoS}$ & 28.95 & -11.35 \\ \hline \end{tabular}% } \end{table*} \begin{table*}[t!] 
\centering \caption{Statistical model parameters summary.} \label{tab:stat-model-summary} {% \begin{tabular}{|c|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Parameter}} & \multicolumn{1}{c|}{$\mu$} & \multicolumn{1}{c|}{$\sigma$}\\ \hline \hline $\epsilon_{omni}^{LoS}$ & 0.09 & 0.72 \\ \hline $\epsilon_{max-dir}^{LoS}$ & -0.01 & 0.8 \\ \hline $\epsilon_{omni}^{LoS} OLS$ & 0 & 0.69\\ \hline $\epsilon_{max-dir}^{LoS} OLS$ & 0 & 0.8\\ \hline $\sigma^{\circ}_{LoS} Tx$ & -0.72 & 0.08\\ \hline $\sigma^{\circ}_{LoS} Rx$ & -0.51 & 0.18\\ \hline $\sigma_{\tau_{omni}}^{LoS}$ & -78.11 & 4.25\\ \hline $\sigma_{\tau_{max-dir}}^{LoS}$ & -85.8 & 1.95 \\ \hline $\kappa_{1_{omni}}^{LoS}$ & 11.01 & 4.92 \\ \hline $\kappa_{1_{max-dir}}^{LoS}$ & 14.72 & 4.72 \\ \hline $\epsilon_{omni}^{NLoS}$ & 0.04 & 6.24 \\ \hline $\epsilon_{max-dir}^{NLoS}$ & 0.18 & 7.89\\ \hline $\epsilon_{omni}^{NLoS} OLS$ & 0 & 6.21 \\ \hline $\epsilon_{max-dir}^{NLoS} OLS$ & 0 & 7.89 \\ \hline $\sigma^{\circ}_{NLoS} Tx$ & -0.49 & 0.18 \\ \hline $\sigma^{\circ}_{NLoS} Rx$ & -0.33 & 0.19 \\ \hline $\sigma_{\tau_{omni}}^{NLoS}$ & -76.38 & 4.66 \\ \hline $\sigma_{\tau_{max-dir}}^{NLoS}$ & -84.96 & 4.33 \\ \hline $\kappa_{1_{omni}}^{NLoS}$ & 0 & 8.5 \\ \hline $\kappa_{1_{max-dir}}^{NLoS}$ & 10.57 & 7.3\\ \hline \end{tabular}% } \end{table*} \section{Conclusions} In this paper, we presented the results of the first extensive wideband, double-directional THz outdoor channel measurements for microcell scenarios with Tx heights of more than 10 m above the ground. We provided an overview of the measurement methodology and environments, as well as the signal processing used to extract the parameters characterizing the channels. Most importantly, we provided a parameterized statistical description of our measurement results that can be used to assess THz systems. The key parameters discussed in the current paper include path loss, shadowing, angular spread, delay spread and MPC power distribution.
These results are an important step towards drawing first conclusions about the implications for system design and deployment in the THz regime. \section*{Acknowledgment} Helpful discussions with Sundeep Rangan, Mark Rodwell and Zihang Cheng are gratefully acknowledged.
\section{Introduction} Humans have a native capability to manipulate objects without much effort. We use different sensing modalities, e.g., visual, auditory and tactile sensing, to perceive object properties and manipulate objects in space. Among these sensing modalities, tactile sensing is not affected by changes of light conditions or hand occlusions as vision is, nor influenced by ambient noise as audition is. It can provide rich information about the object in hand, e.g., its texture, temperature, shape and pose. To equip robots with similar tactile sensing capabilities, various tactile sensors imitating the human skin have been proposed in the past decades~\cite{TactileSensingRobotHandsSurvey,Directionstowardeffective,luo2017robotic}. Traditional approaches aimed at providing the robot with force information at a contact point~\cite{tactileSensing}. In order to provide the robot with richer information on the contact, camera-based optical tactile sensors have been proposed, the GelSight sensor being one of them. The GelSight sensor captures high-resolution geometric information of the object it interacts with, and can thus aid manipulation tasks~\cite{gomes2021generation}. It uses a camera to capture the deformation of a soft elastomer, illuminated from different directions~\cite{gomes2021generation}. It also has several variants with different morphologies and camera/light configurations, such as GelTip~\cite{gomes2020geltip} and GelSlim~\cite{donlon2018gelslim}. \begin{figure}[t] \centering \centerline{\includegraphics[scale=0.31]{Figures/key_results.png}} \caption{The adapted tactile images (middle) are generated from the simulated tactile images (left) using our proposed texture generation network.
Compared to the corresponding real images (right), it can be observed that the contact regions in the adapted tactile images have textures similar to those in the real images, while the non-contact areas remain free of textures.} \label{fig:cover} \label{overview} \end{figure} Similar to many other tactile sensors, camera-based tactile sensors are fragile and suffer from wear and tear, due to the soft elastomer on top of the sensor that interacts with objects. To mitigate the damage to the sensor and save training time on a real robot, the robot can first be trained in a simulated environment with simulated tactile sensors, before the trained model is deployed in the real environment. To this end, simulation models of the GelSight sensor have been proposed~\cite{gomes2019gelsight,gomes2021generation}. However, the gap between the simulated tactile environment and the real world is still large, which may perturb the model and greatly impact its performance. As discussed in~\cite{gomes2021generation}, the imperfections of reality, such as scratches and other object deformations, are what helps a trained model distinguish between objects via touch sensing. In contrast, these are not present in simulation, thus creating a gap between the two domains. One method to diminish such gaps is to make the simulation as realistic as possible. To do so, noise can be introduced in the form of textures, additive Gaussian noise, or domain randomisation~\cite{tobin2017domain,domainRandomization2}. The main issue with this approach is that the probability distribution of the imperfections in the reality domain has a long tail: although the probability of encountering a novel imperfection is small, it will eventually happen. In the context of robotic manipulation tasks, this can become a potentially dangerous situation or damage the sensors~\cite{gomes2021generation}.
To address this challenge, we propose a novel texture generation network that reduces the domain gap between simulated and real tactile images for camera-based optical tactile sensing. In the proposed network, different regions of the simulated tactile image are adapted differently: the areas in contact with the object receive textures generated from real tactile images, whereas regions without contact maintain their appearance as when the sensor is not in contact with any object. We have conducted extensive experiments to evaluate the proposed method using a dataset of real and simulated tactile images from a GelSight tactile sensor. The experiments show that the proposed method can generate realistic artefacts on the deformed regions of the sensor, while avoiding leaking the textures into regions without contact, as shown in Fig.~\ref{fig:cover}. When comparing the resulting tactile images with real tactile images, the method achieved a low Mean Absolute Error (MAE) of 10.53\% on average and a similarity of 0.751 in the Structural Similarity Index (SSIM) metric. Beyond that, the experiments show that when the adapted images generated by our network are used for Sim2Real transfer of a learnt model in a classification task, the drop in accuracy caused by the Sim2Real gap is reduced from $38.43\%$ to merely $0.81\%$. As such, this work has the potential to accelerate Sim2Real learning for robotic tasks with tactile sensing. \section{Related Works} \subsection{Optical tactile sensors} Optical tactile sensors that use a camera underneath a soft elastomer layer are a highly practical means of providing robots with the sense of touch, and a variety of sensor designs have thus been proposed. Currently, these can be grouped into two main families: marker-based, represented by TacTip sensors~\cite{TacTipFamily}, and image-based, represented by GelSight sensors~\cite{dong2017improved}.
In this paper, we focus on GelSight sensors as they are better suited to capturing the fine textures introduced by manufacturing defects or wear and tear. The \textit{GelSight} working principle was proposed in~\cite{RetrographicSensing} as a method for reconstructing the texture and shape of contacted objects~\cite{cao2020spatio,luo2018vitac,lee2019touching}. To that purpose, light sources are placed at opposite angles next to a transparent elastomer that is coated with an opaque reflective paint, resulting in three differently shaded images of the in-contact object texture. A direct mapping between the observed image pixel intensities and the elastomer surface orientation can then be found to create a lookup table, enabling the surface to be reconstructed using photometric stereo. Since its initial proposal, new designs have been proposed that aim at reducing the size~\cite{dong2017improved, donlon2018gelslim}, improving the sensing capability~\cite{donlon2018gelslim, dong2017improved} or providing a curved, finger-shaped surface for improved robotic dexterity~\cite{BlocksWorldOfTouch, softRoundGelSight, cao2021touchroller}. However, optical tactile sensors are still brittle, and extensive experimentation with them often results in their sensing membrane being damaged. \subsection{Simulation of tactile sensors} It is desirable to develop and test robot agents initially within a simulator before their deployment in the real environment, as running experiments with real hardware is time-consuming and damage-prone. To this end, a variety of methods have been proposed to simulate different tactile sensors. We proposed to simulate GelSight sensors in~\cite{gomes2019gelsight, gomes2021generation}, by considering close-up depth maps extracted from simulators and using the Phong illumination model for rendering the RGB tactile images.
However, despite the efforts in making the simulations as realistic as possible, some artefacts, such as textures that are not represented in the simulated object model and scratches resulting from wear and tear, contribute largely to the gap between the simulated and real images. They often hinder transferring models trained on simulated data to real robots (i.e., Sim2Real learning). Therefore, the gap itself must be addressed. \subsection{Reducing the Sim2Real gap for tactile sensing} A common approach to address the gap between training and test data is to augment the training data such that the test data becomes one particular subset of the whole augmented training data, i.e., Domain Randomisation. For Real2Real computer vision tasks, this augmentation is commonly performed at the image level, by applying random colour or geometric transformations to the images. When the training images are collected in simulation for Sim2Real learning, another form of augmentation is to directly augment the simulation data, by randomising object colours, scene illumination or the environment physics~\cite{tobin2017domain}. As colours and illumination are constant for tactile images collected from the same tactile sensor, in~\cite{gomes2021generation} we experimented with augmenting the synthetic dataset by perturbing the in-contact object shapes using simple texture maps that resemble the artefacts contributing most to the gap observed in our dataset: the textures introduced in the 3D printing of our real reference object dataset. This method proved to be a more effective augmentation scheme than image-based augmentations. However, target-domain-agnostic randomisation is costly, as it often requires running the same simulation a great number of times to capture the variations along all dimensions.
Thus, in this paper we address the Sim2Real gap from a Domain Adaptation perspective and propose a network that adapts the simulated images into photorealistic counterparts. Domain adaptation has successfully been applied to vision-based tasks~\cite{cyCADA, photorealismEnhancement}; however, it has not been studied in the context of tactile sensing yet. \section{Methodology} \begin{figure}[t] \centerline{\includegraphics[scale=0.45]{Figures/framework.png}} \caption{The proposed texture generation network. Starting from the \textbf{Depth map} captured in the simulator, the \textbf{Simulated} tactile image and \textbf{Mask} are generated using \cite{gomes2021generation} and simple truncation of the depth map, respectively. The simulated image is then mapped to the \textbf{Adapted} target $R$ through $G_{S \rightarrow R}$. A discriminator $D_R$ then classifies the generated tactile image $G_{S \rightarrow R}($\textbf{Simulated}$)$, giving the adversarial loss $L_{GAN}$. The image is then cycled back to the simulated domain through $G_{R \rightarrow S}$, which gives the cycle consistency loss ($L_{cycle}$). The mask of the in-contact area is used to cover the contact zones of the \textbf{Simulated}, \textbf{Adapted} and \textbf{Real images}, allowing us to constrain the background of the tactile image while letting the model alter the contact zone with textures, resulting in $L_{mask}$.} \label{framework} \end{figure} \subsection{Problem description} In this paper, we address, for the first time, the domain gap between simulated and real tactile images that impedes transferring a model trained in simulation to reality. The main factor contributing to this domain gap is textural artefacts~\cite{gomes2021generation}.
Those artefacts are not limited to the production phase, such as manufacturing defects and surface textures introduced by the finishing, but can also be created when an object is repeatedly interacted with and thus continuously deformed, i.e., wear and tear. Failure to consider these artefacts when training robot agents can lead to improper manipulation of a given object due to misclassification, and ultimately result in possible damage to the robot. To this end, we aim at addressing the gap between simulated and real tactile images to diminish the risk of such situations, and propose to learn the artefacts on object surfaces so as to mitigate the drop in performance in Sim2Real learning. This is challenging, as texture artefacts should be applied only to the contact regions of the tactile sensors with the rest unaffected: textural augmentation that leaks into the untouched areas may lead to false positive detection of contacts. To do so, we propose a novel texture generation network for applying textures to the contact surfaces in simulated tactile images. \subsection{The texture generation network} As shown in Fig.~\ref{framework}, our proposed texture generation network has two generators: one generating a tactile image with textures $\hat{X}_A$ (i.e., an adapted tactile image) from a simulated tactile image $X_S$, i.e., $G_{S\rightarrow R}$; the other generating tactile images in the simulated domain from the adapted tactile image, i.e., $G_{R\rightarrow S}$. Two discriminators are responsible for distinguishing real images from generated ones in each domain: the discriminator $D_R$ aims at distinguishing $X_R$ from $G_{S\rightarrow R}(X_S)$, while the discriminator $D_S$ aims at distinguishing between $X_S$ and $G_{R\rightarrow S}(X_R)$.
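The two-generator, two-discriminator wiring described above can be sketched as follows. This is a simplified illustration with plain callables standing in for the convolutional models; the class and method names are our own and are not taken from any released code.

```python
# Minimal wiring of the bidirectional texture generation network.
# Plain callables stand in for the convolutional generators/discriminators.

class TextureGenerationNetwork:
    def __init__(self, g_s2r, g_r2s, d_r, d_s):
        self.g_s2r = g_s2r  # G_{S->R}: simulated -> adapted (real-looking)
        self.g_r2s = g_r2s  # G_{R->S}: real -> simulated-looking
        self.d_r = d_r      # D_R: real vs. adapted images
        self.d_s = d_s      # D_S: simulated vs. back-translated images

    def forward_cycle(self, x_s):
        """Adapt a simulated image, then cycle it back to the simulated domain."""
        x_adapted = self.g_s2r(x_s)
        x_cycled = self.g_r2s(x_adapted)
        return x_adapted, x_cycled
```

With mutually inverse generators, `forward_cycle` returns the original input as its second output, which is exactly what the cycle consistency loss introduced below encourages.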
\begin{figure*}[tt] \centerline{\includegraphics[scale=0.92]{Figures/generatorComparisons.pdf}} \caption{Top row: Simulated samples collected using the GelSight simulation approach \cite{gomes2021generation}; Bottom row: The corresponding real samples captured using a real GelSight sensor \cite{dong2017improved}. In between, second to fourth rows: The two baselines that we experiment with and our final proposed network. As seen in the listed images, the original simulated tactile images lack the textures produced by the 3D printing process that can be observed in the real samples. On the opposite extreme, Pix2Pix~\cite{pix2pix} renders over-textured tactile images, including outside the in-contact areas. CycleGAN~\cite{cycleGAN} produces much cleaner textures compared to Pix2Pix; however, some texture leaking can still be observed, e.g., in the first and last samples (columns). Finally, our proposed network generates the best results, with the textures generated only within the in-contact areas.} \label{figGenerativeModels} \end{figure*} The generator follows a U-Net architecture~\cite{unetBasic}, which consists of an encoder (downsampling) part and a decoder (upsampling) part. Each layer in the encoder consists of a convolutional layer, followed by an instance normalisation layer and a Leaky ReLU activation. Each convolution has a stride of two, and with each layer the number of filters is doubled until the image is reduced to a height and width of one with $512$ filters. The decoder is constructed of layers that consist of a transposed convolution with a stride of two, followed by an instance normalisation, a dropout layer, and a ReLU activation. The dropout layers in the upsampling section of the generator act as a regulariser, compelling the network to learn meaningful representations from the latent space.
In addition, skip connections are tied between the mirrored layers in the encoder and decoder parts of the model, which allows the model to propagate context information to higher-resolution layers~\cite{unetBasic}. This is done by concatenating the mirrored encoder layer's output with the output of the preceding decoder layer. As a result, each layer in the decoder has twice the number of filters of its mirrored layer in the encoder. The first downsampling block does not use normalisation, while in the decoder only the first three blocks use a dropout layer. The discriminator follows a PatchGAN architecture~\cite{pix2pix}, where each layer consists of a convolution with a stride of two, followed by an instance normalisation layer and a Leaky ReLU activation. Rather than outputting a single value, the discriminator outputs a patch of $N{\times}N$ dimensions, in our case $N=33$. This allows the computation of the $L1$ loss between the patches output by the discriminator. \subsection{Loss Functions} \textbf{Adversarial Loss}. The generator $G_{S \rightarrow R}$ represents a mapping function that takes an element from the distribution $X_S$ and maps it to the distribution $X_R$, while $D_R(x)$ outputs the probability that an instance $x$ comes from $X_R$ rather than from $X_S$. The discriminator tries to maximise the probability of correctly assigning a label to $X_R$ and $G_{S \rightarrow R}(X_S)$, while the generator aims to minimise $\log(1-D_R(G_{S \rightarrow R}(x_s)))$. In other words, the loss can be described as a minimax game where the generator $G_{S \rightarrow R}$ aims at translating a tactile image from the synthetic domain to the reality domain, whereas the discriminator $D_R$ aims at distinguishing between a generated tactile image and a real one.
This corresponds to: \begin{equation} \begin{aligned} \mathcal{L}_{GAN}(G_{S \rightarrow R},D_R,X_R,X_S) = \, & \mathbb{E}_{x_r\sim X_R}[\log D_R(x_r)] \, + \\ & \mathbb{E}_{x_s\sim X_S}[\log(1-D_R(G_{S\rightarrow R}(x_s)))]\label{adversarialLoss1} \end{aligned} \end{equation} \noindent In addition, the generator $G_{R \rightarrow S}$ learns to map the images from $X_R$ to $X_S$, while the discriminator $D_S$ distinguishes between them. This results in: \begin{equation} \begin{aligned} \mathcal{L}_{GAN}(G_{R \rightarrow S},D_S,X_S,X_R) = \, & \mathbb{E}_{x_s\sim X_S}[\log D_S(x_s)] \, + \\ & \mathbb{E}_{x_r\sim X_R}[\log(1-D_S(G_{R\rightarrow S}(x_r)))]\label{adversarialLoss2} \end{aligned} \end{equation} Together, \eqref{adversarialLoss1} and \eqref{adversarialLoss2} give the total adversarial loss of: \begin{equation} \begin{aligned} \mathcal{L}_{GAN} = & \mathcal{L}_{GAN}(G_{S \rightarrow R},D_R,X_R,X_S) + \\ & \mathcal{L}_{GAN}(G_{R \rightarrow S},D_S,X_S,X_R)\label{adversarialLossCombined} \end{aligned} \end{equation} \textbf{The Cycle Consistency loss}. While the tactile images generated by $G_{S \rightarrow R}(X_S)$ may seem like they are sampled from the real distribution, the mapping may not preserve the information in $X_S$, such as the class and the location of the object in the image. In order to enforce the stability and consistency of the model, the cycle consistency loss $\mathcal{L}_{cycle}$~\cite{cycleGAN} has been implemented. $\mathcal{L}_{cycle}$ calculates the difference between the input simulated tactile image $x_{s}$ and the image translated to a real image through the generator $G_{S \rightarrow R}$ and then back to the synthetic domain through the generator $G_{R \rightarrow S}$. This allows the model to learn the mappings between the domains without the need for paired data, as in CycleGAN~\cite{cycleGAN}, DualGAN~\cite{dualGAN}, and DiscoGAN~\cite{discoGAN}.
Mathematically, given an image $x_s$ and the cycled image $G_{R \rightarrow S}(G_{S \rightarrow R}(x_s))$, we want $G_{R \rightarrow S}(G_{S \rightarrow R}(x_s))\approx x_s$. Similarly, for an image $x_r$, we want $G_{S \rightarrow R}(G_{R \rightarrow S}(x_r))\approx x_r$. Both of the losses give the total cycle consistency loss: \begin{equation} \begin{aligned} \mathcal{L}_{cycle}&(X_S,X_R,G_{S \rightarrow R},G_{R \rightarrow S})=\\ & \mathbb{E}_{x_s \sim X_S}[||x_s - G_{R \rightarrow S}(G_{S \rightarrow R}(x_s))||_1] +\\ &\mathbb{E}_{x_r \sim X_R}[||x_r - G_{S \rightarrow R}(G_{R \rightarrow S}(x_r))||_1] \label{cycleLoss} \end{aligned} \end{equation} \textbf{Identity Loss.} In order to preserve the colours when a tactile image is translated from one domain to the other, an identity loss is introduced, such that when a simulated tactile image from $X_S$ is passed through the generator $G_{R \rightarrow S}$, the output should retain the colour settings given by the light configuration in simulation. This results in: \begin{equation} \begin{aligned} \mathcal{L}_{identity}= & \mathbb{E}_{x_s \sim X_S}[||x_s - G_{R \rightarrow S}(x_s)||_1] + \\ & \mathbb{E}_{x_r \sim X_R}[||x_r - G_{S \rightarrow R}(x_r)||_1]\label{idLoss} \end{aligned} \end{equation} \textbf{Mask Loss.} As shown in Fig.~\ref{framework}, using the depth maps from the simulation, we can distinguish between the foreground and the background by setting any region that is less than the height of the elastomer to one and the rest to zero, thus creating the binary mask $m_s$ for the image $x_s$. In order to keep the areas that are not in contact unaffected by the textures, we constrain the image background towards both the simulated and the real image backgrounds, not only giving the model stability outside the contact regions, but also accounting for the class shift that the model is prone to~\cite{cyCADA}.
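The adversarial, cycle-consistency, and identity terms of Eqs.~\eqref{adversarialLossCombined}, \eqref{cycleLoss} and \eqref{idLoss} can be summarised in a short numpy sketch. This is a non-batched illustration without the expectation over the dataset, not the training code itself:

```python
import numpy as np

EPS = 1e-8  # numerical safety inside the logarithms

def l1(a, b):
    """Mean L1 distance between two images."""
    return float(np.mean(np.abs(a - b)))

def gan_loss(d_real, d_fake):
    """One direction of the adversarial loss:
    E[log D(real)] + E[log(1 - D(fake))], with scores in (0, 1)."""
    return float(np.mean(np.log(d_real + EPS)) +
                 np.mean(np.log(1.0 - d_fake + EPS)))

def cycle_loss(x_s, x_r, g_s2r, g_r2s):
    """L1 between each image and its round trip through both generators."""
    return l1(x_s, g_r2s(g_s2r(x_s))) + l1(x_r, g_s2r(g_r2s(x_r)))

def identity_loss(x_s, x_r, g_s2r, g_r2s):
    """A generator fed an image already in its output domain should not change it."""
    return l1(x_s, g_r2s(x_s)) + l1(x_r, g_s2r(x_r))
```

For instance, a perfect discriminator (scores of one on real and zero on fake images) makes `gan_loss` approach zero, its maximum, while identity generators drive both `cycle_loss` and `identity_loss` to zero.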
Furthermore, we propose to use a hyperparameter $\alpha$ to balance the background targets (simulated and real), which enables us to control whether the generated background copies more features from the real or the simulated background. This results in the formulation of our mask loss: \begin{equation} \begin{aligned} \mathcal{L}_{mask} & (M_S,X_S,X_R,G_{S \rightarrow R})= \\ & \mathbb{E}_{x_s\sim X_S} [\alpha ||(G_{S \rightarrow R}(x_s)-x_s)(1-m_s)||_1+\\ &(1-\alpha)||(G_{S \rightarrow R}(x_s)-x_r)(1-m_s)||_1] \label{maskLoss} \end{aligned} \end{equation} where a higher $\alpha$ means that the tactile image is more constrained towards the simulated dataset, whereas a lower $\alpha$ implies that the image is more constrained towards the real dataset. \section{The Dataset and Experiment setup} To carry out the experiments and evaluation, we make use of the dataset captured in~\cite{gomes2021generation}. This dataset consists of paired sets of simulated tactile images $X_S$, real tactile images $X_R$ and raw close-up depth maps that were collected by tapping a GelSight sensor~\cite{dong2017improved} against 21 reference objects of different shapes. These objects were modelled in CAD and printed using a Formlabs~Form~2~3D~printer. To ensure a controlled position of the sensor relative to the object, a Fused Deposition Modeling (FDM) 3D printer \textit{A30} from Geeetech was used as a Cartesian actuator, moving the sensor to tap the reference objects in a $3 \times 3$ grid at 11 depths. This results in each set containing 2,079 ($ 21 \times 99 $) tactile samples. Identical setups were created both in the real world and in simulation (in Gazebo), and the Robot Operating System (ROS) was used to orchestrate the different software components and the overall data collection.
While in the real setup the tactile images $X_R$ were directly captured, in the simulated counterpart the close-up depth maps were first captured online, and the tactile images $X_S$ were then generated offline using the simulation method~\cite{gomes2021generation}. For more details of the dataset, we refer the reader to~\cite{gomes2021generation} and the project website\footnote{https://danfergo.github.io/gelsight-simulation/}. Despite the high resolution of the 3D printer, textures were introduced during the printing process that significantly affect the \textit{Sim2Real} transfer. For instance, in Fig.~\ref{figGenerativeModels} it can be observed that the real samples present different textures compared to their simulated counterparts. Furthermore, it can be seen that the difference between the real and simulated samples lies in the high-frequency texture, while the overall shapes of the model are the same. Even though this texture could be further smoothed using a variety of methods, we keep it and consider it as unexpected artefacts that could result from the natural and unpredictable wear of objects commonly seen in real life. In order to conduct our experiments, we first preprocessed the data. For the training dataset, we first normalised the tactile images at the pixel level into the $[-1, 1]$ interval. We then employed a data augmentation method, in which we increased the resolution of the tactile images and applied a random crop, followed by a slight random rotation and a random horizontal flip. We implemented all of the models using the Keras API available through TensorFlow.
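The preprocessing steps above can be sketched as below; the crop size in the usage example is an illustrative placeholder, not a value from the experiments.

```python
import numpy as np

def normalise(img_uint8):
    """Map 8-bit pixel values from [0, 255] to the [-1, 1] interval."""
    return img_uint8.astype(np.float32) / 127.5 - 1.0

def random_crop(img, out_h, out_w, rng):
    """Random crop used for augmentation after upscaling the image."""
    h, w = img.shape[:2]
    top = int(rng.integers(0, h - out_h + 1))
    left = int(rng.integers(0, w - out_w + 1))
    return img[top:top + out_h, left:left + out_w]

# e.g. cropping a (hypothetical) 10 x 10 image down to 8 x 8:
rng = np.random.default_rng(0)
crop = random_crop(np.zeros((10, 10), dtype=np.float32), 8, 8, rng)
```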
\begin{table}[t] \caption{Classification Task Summary} \def\arraystretch{1.2} \begin{center} \begin{tabular}{c|c|c} \hline \textbf{\textit{Model}}& \textbf{\textit{Sim}}& \textbf{\textit{Real}}\\ \hline Direct & \textbf{91.90}\%$\pm 1.80$ & $53.47\%\pm 6.64$ \\ Pix2Pix & $91.07\%\pm 0.95$ & $60.53\%\pm 2.81$\\ CycleGAN & $90.07\%\pm 1.04$ & $85.57\%\pm 3.36$ \\ CycleGAN w Mask Sim & $91.07\%\pm 1.15$ & \textbf{90.26}\%$\pm 2.70$\\ CycleGAN w Mask Real & $90.41\%\pm 0.89$ & $86.25\%\pm 5.15$ \\ CycleGAN w Mask Combined & $90.55\%\pm 0.84$ & $89.17\%\pm 2.45$ \\ \hline \end{tabular} \label{tab2} \end{center} \end{table} \section{Experiments and discussion} We evaluate the proposed texture generation network with three sets of experiments. First, we compare the generated tactile images against the corresponding real samples, with both quantitative and qualitative analyses; we then demonstrate the advantages of considering the adapted, instead of the original simulated, images for Sim2Real transfer learning in a classification task. As shown in Fig.~\ref{figGenerativeModels}, the tactile images generated using our proposed network appear substantially more similar to the real images than the simulated counterparts, and from Table~\ref{tab2} it can be seen that the initial drop in performance of $38.43\%$ caused by the Sim2Real gap is reduced to $0.81\%$ when considering the images adapted by our network.
\begin{table}[b] \caption{Real and Adapted comparison} \def\arraystretch{1.2} \begin{center} \begin{tabular}{c|c|c} \hline \textbf{\textit{Model}}& \textbf{\textit{SSIM $\uparrow$}}& \textbf{\textit{MAE $\downarrow$}}\\ \hline Pix2Pix & $0.332$ & $30.80\%$ \\ CycleGAN & $0.631$ & $23.26\%$ \\ CycleGAN w Mask Sim & $0.734$ & $10.70\%$ \\ CycleGAN w Mask Real & \textbf{0.751} & $10.80\%$ \\ CycleGAN w Mask Combined& $0.719$ & \textbf{10.50\%} \\ \hline \end{tabular} \label{tab1} \end{center} \end{table} \subsection{Constraining the augmented texture areas} During our early experimentation phase, when analysing tactile images generated by CycleGAN models~\cite{cycleGAN}, we observed that they produce realistic results with only slight discrepancies from the real tactile images. However, one tendency of CycleGAN is to mirror the background and light of the tactile images. Furthermore, the model is not constrained to maintain the object structure while being mapped~\cite{cyCADA}. While this result is not entirely detrimental for cases such as classification, where one can associate such behaviour with a domain randomisation technique, it highlights the instability of the model. Such instability can be observed in column one of the CycleGAN row of Fig.~\ref{figGenerativeModels}, where the background is flipped and anomalies are injected into the picture, creating deformations. To mitigate this issue, we constrain the model on the background of both the simulated and the real image pair by using our proposed mask loss in Eq.~\eqref{maskLoss}. We further weight the two terms according to the background they originate from: e.g., a weight of $0.4$ on the simulated background implies that the real background is weighted by $0.6$, giving a total error based on both backgrounds. We test both extremes, constraining the model on only the simulated background or only the real background, as well as on a mixture of the two.
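The weighted background constraint of Eq.~\eqref{maskLoss} can be sketched as follows (non-batched, for illustration only):

```python
import numpy as np

def mask_loss(adapted, x_s, x_r, mask, alpha):
    """Background constraint of the mask loss.

    adapted : G_{S->R}(x_s), the adapted tactile image
    mask    : binary contact mask (1 inside the contact region, 0 outside)
    alpha   : weight on the simulated background target; the real
              background target is weighted by (1 - alpha).
    """
    bg = 1.0 - mask  # select only the non-contact region
    sim_term = np.mean(np.abs((adapted - x_s) * bg))
    real_term = np.mean(np.abs((adapted - x_r) * bg))
    return float(alpha * sim_term + (1.0 - alpha) * real_term)
```

With `alpha = 0.4`, for example, the simulated background contributes 40\% of the constraint and the real background the remaining 60\%, matching the weighting described above; a full-contact mask zeroes out the loss entirely, leaving the contact region free to receive textures.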
With the mask loss implemented, we observed a greater stability of the background, where the flipping of colours and background no longer occurs. Furthermore, the model applies different textures to the contact zones, adding scratches at different angles and in different shapes, or omitting a particular scratch. This has the potential to minimise the situation where the model runs into an unexpected type of scratch. \begin{figure} \centerline{\includegraphics[scale=0.15]{Figures/mae.png}} \caption{Difference maps of the generated adapted tactile images, using the different studied methods, against the real reference, with white pixels representing zero difference. As seen in the figure, the textures of the real images are directly visible in \textbf{Sim}, demonstrating the smoothness of the original simulated images. \textbf{Pix2Pix}~\cite{pix2pix} produces randomised textures throughout the entire image, resulting in significant differences even in areas of no contact. \textbf{CycleGAN}~\cite{cycleGAN} produces better results than~\textbf{Pix2Pix}; however, some artefacts can be seen in non-contact areas, e.g., in the first column, and a distortion of the pose of the object is visible in the last column. Finally, \textbf{Ours} produces the smallest differences overall. } \label{diffs} \end{figure} \subsection{Comparison of different domain adaptation methods} In order to compare different domain adaptation methods quantitatively, we compute the average Structural Similarity (SSIM) and Mean Absolute Error (MAE) between the adapted images generated by the different methods and the corresponding real pairs. The obtained results are reported in Table~\ref{tab1}. To improve the understanding of the numerical results, we further compute the relative absolute difference maps between the samples generated by each method and their real counterparts, shown in Fig.~\ref{diffs}.
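For reference, the two metrics can be computed as below for images scaled to $[0, 1]$. Note that `ssim_global` is a single-window simplification of SSIM computed over the whole image; standard implementations use a sliding local window, so the values are not directly comparable to the reported ones.

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images in [0, 1]."""
    return float(np.mean(np.abs(a - b)))

def ssim_global(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Structural similarity computed over the whole image as one window."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    num = (2 * mu_a * mu_b + c1) * (2 * cov + c2)
    den = (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    return float(num / den)
```

Identical images give an SSIM of one and an MAE of zero; an inverted image scores markedly lower on both.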
Our method of adding background information results in the greatest SSIM score ($0.751$) when constrained on the real background, while achieving the lowest MAE ($10.50\%$) when using the mixed background approach. The Pix2Pix network~\cite{pix2pix}, although it can create realistic samples, requires a greater amount of time to converge, and the model is free to shift the location of the objects, resulting in a lower SSIM value. Furthermore, the random light flipping that we observe affects the value negatively. \subsection{Sim2Real transfer for object classification} To evaluate the advantages of considering the adapted tactile images \textit{versus} the original simulations for Sim2Real learning, we consider a simple task of object classification using tactile images. To this end, we start by mapping all the simulated images to the target domain, using the pre-trained CycleGAN to which we further add our structural constraint, and proceed by training a classification model on the mapped images. For this purpose, we use the ResNet50 architecture~\cite{resNet} with weights pretrained on the ImageNet dataset. On top of the base model, two blocks composed of a dense layer, batch normalisation and an ELU activation were added. For each of the added layers, we use He initialisation~\cite{initialization:he} to avoid the problem of vanishing and exploding gradients present in deep architectures. In addition, we add an output layer composed of 21 neurons and a softmax activation. We repeat the procedure of training the classifier and testing the results 10 times. Each time we train the classifier for 30 epochs. We then test the models on the target domain by computing the accuracy of the model. The results are presented in Table~\ref{tab2}.
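The He initialisation used for the added dense layers draws weights from a zero-mean normal distribution with standard deviation $\sqrt{2/n_{\mathrm{in}}}$, where $n_{\mathrm{in}}$ is the layer's fan-in. A sketch (Keras applies this automatically via its built-in initializer; this is only to make the rule explicit):

```python
import numpy as np

def he_normal(fan_in, fan_out, rng):
    """He (Kaiming) normal initialisation for a dense layer's weight matrix."""
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(loc=0.0, scale=std, size=(fan_in, fan_out))

# e.g. a weight matrix for a 1000 -> 100 dense layer:
w = he_normal(1000, 100, np.random.default_rng(0))
```

The factor of two compensates for ReLU-family activations (such as the ELU used here) zeroing or shrinking roughly half of the pre-activations, keeping the variance of activations stable across layers.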
The direct transfer between the two domains shows the largest gap, with a drop of $38.43\%$, whereas our method has the smallest drop ($0.81\%$) when the mask loss relies mostly on the simulated background, as well as the greatest accuracy on the testing dataset ($90.26\%$). The results show that by providing the model with background information, the model is more stable and does not shift the classes, a problem encountered in previous works~\cite{yang2018unpaired,cycleGAN}. \section{Conclusion} In this paper, we proposed a novel texture generation network that is capable of bridging the gap between simulation and reality in the context of tactile images generated with a GelSight sensor. This allows the convenient training of other models in a simulated environment, thus reducing the cost and the damage that can occur if a model is transferred to the reality domain directly. Besides its ability to bridge the gap, the model is capable of generating new textures on the same object, thus acting as a domain randomiser and increasing the robustness of a model trained in simulation. We found that anomalies are created at the boundaries between the contact areas and the background, and we stabilised the model using the proposed mask loss. In future work, we would like to apply our proposed method to more complex Sim2Real tasks, for example, robot grasping and manipulation with tactile sensing.
\section{Preface} \label{s_preface} This paper primarily serves as a reference for my Ph.D. dissertation, which I am currently writing. As a consequence, the framework is not under active development. The presented concepts, problems, and solutions may be interesting regardless, even for other problems than Neural Architecture Search (NAS). The framework's name, UniNAS, is a wordplay on University and Unified NAS, since the framework was intended to incorporate almost any architecture search approach. \section{Introduction and Related Work} \label{s_introduction} An increasing supply of and demand for automated machine learning causes the amount of published code to grow by the day. Although advantageous, the benefit of such code is often impaired by many technical nitpicks. This section lists common code bases and some of their disadvantages. \subsection{Available NAS frameworks} \label{u_introduction_available} The landscape of NAS codebases is severely fragmented, owing to the vast differences between various NAS methods and the deep-learning libraries used to implement them. Some of the best-supported or most widely known ones are: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item {NASLib~\citep{naslib2020}} \item { Microsoft NNI \citep{ms_nni} and Archai \citep{ms_archai} } \item { Huawei Noah Vega \citep{vega} } \item { Google TuNAS \citep{google_tunas} and PyGlove \citep{pyglove} (closed source) } \end{itemize} Counterintuitively, the overwhelming majority of publicly available NAS code is not based on any such framework or service but on simple and typical network training code. Such code is generally quick to implement but lacks exact comparability, scalability, and configuration power, which may be a secondary concern for many researchers.
In addition, since the official code is often released late or never, and generally only in either TensorFlow~\citep{tensorflow2015-whitepaper} or PyTorch~\citep{pytorch}, popular methods are sometimes re-implemented by some third-party repositories. Further projects include the newly available and closed-source cloud services by, e.g., Google\footnote{\url{https://cloud.google.com/automl/}} and Microsoft\footnote{\url{https://www.microsoft.com/en-us/research/project/automl/}}. Since they require very little user knowledge in addition to the training data, they are excellent for deep learning in industrial environments. \subsection{Common disadvantages of code bases} \label{u_introduction_disadvantages} With so many frameworks available, why start another one? The development of UniNAS started in early 2020, before most of these frameworks arrived at their current feature availability or were even made public. In addition, the frameworks rarely provide current state-of-the-art methods even now and sometimes lack the flexibility to include them easily. Further problems that UniNAS aims to solve are detailed below: \paragraph{Research code is rigid} The majority of published NAS code is very simplistic. While that is an advantage to extract important method-related details, the ability to reuse the available code in another context is severely impaired. 
Almost all details are hard-coded, such as: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item { the used gradient optimizer and learning rate schedule } \item { the architecture search space, including candidate operations and network topology } \item { the data set and its augmentations } \item { weight initialization and regularization techniques } \item { the used hardware device(s) for training } \item { most hyper-parameters } \end{itemize} This inflexibility is sometimes accompanied by the redundancy of several code pieces that differ slightly for different experiments or phases in NAS methods. Redundancy is a fine way to introduce subtle bugs or inconsistencies and also makes the code confusing to follow. Hard-coded details are also easy to forget, which is especially crucial in research where reproducibility depends strongly on seemingly unimportant details. Finally, if any of the hard-coded components is ever changed, such as the optimizer, configurations of previous experiments can become very misleading. Their details are generally not part of the documented configuration (since they are hard-coded), so earlier results no longer make sense and become misleading. \paragraph{A configuration clutter} In contrast to such simplistic single-purpose code, frameworks usually offer a variety of optimizers, schedules, search spaces, and more to choose from. By configuring the related hyper-parameters, an optimizer can be trivially and safely exchanged for another. Since doing so is a conscious and intended choice, it is also documented in the configuration. In contrast, the replacement of hard-coded classes was not intended when the code was initially written. The disadvantage of this approach comes with the wealth of configurable hyper-parameters, in different ways: Firstly, the parametrization is often cluttered. 
While implementing more classes (such as optimizers or schedules) adds flexibility, the list of available hyper-parameters becomes increasingly bloated and opaque. The wealth of parametrization is intimidating and impractical, since it is often nontrivial to understand exactly which hyper-parameters are used and which are ineffective. As an example, the widely used PyTorch Image Models framework~\citep{rw2019timm} (the example was chosen due to the popularity of the framework; it is no worse than others in this respect) implements an intimidating mix of regularization and data augmentation settings that are partially exclusive.\footnote{\url{https://github.com/rwightman/pytorch-image-models/blob/ba65dfe2c6681404f35a9409f802aba2a226b761/train.py}, checked Dec. 1st 2021; see lines 177 and below.} Secondly, to reduce the clutter, parameters can be shared by multiple mutually exclusive choices. In the case of the aforementioned PyTorch Image Models framework, one example would be the selection of gradient-descent optimizers. Sharing common parameters such as the learning rate and the momentum generally works well, but it can be confusing since, once again, finding out which parameters affect which modules necessitates reading the code or documentation. Thirdly, even with an intimidating wealth of configuration choices, not every option is covered. To simplify and reduce the clutter, many settings of lesser importance always use a sensible default value. If changing such a parameter becomes necessary, the framework configurations become more cluttered, or changing the hard-coded default value again results in misleading configurations of previous experiments. To summarize, the hyper-parametrization design of a framework is a delicate decision, trying to be complete but not cluttered. While both extremes appear to be mutually exclusive, they can be successfully united with the underlying configuration approach of UniNAS: argument trees.
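As a hypothetical illustration of the second point, a flat argument list cannot express that a momentum setting is meaningless once another optimizer is selected; the parser below (our own toy example, not taken from any framework) accepts the ineffective combination silently:

```python
import argparse

# A flat hyper-parameter list: nothing ties each argument to the module
# that actually consumes it, so ineffective combinations parse silently.
parser = argparse.ArgumentParser()
parser.add_argument('--opt', choices=['sgd', 'adam'], default='sgd')
parser.add_argument('--sgd_momentum', type=float, default=0.9)  # ignored for adam
parser.add_argument('--adam_beta1', type=float, default=0.9)    # ignored for sgd

# Parses without complaint, although sgd_momentum has no effect with adam:
args = parser.parse_args(['--opt', 'adam', '--sgd_momentum', '0.5'])
```

An argument tree instead attaches `sgd_momentum` to the SGD node itself, so the parameter only exists when that node is part of the configuration.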
\paragraph{Availability of code} Nonetheless, it is great if code is available at all. Many methods are published without any code that would enable verifying their training or search results, impairing their reproducibility. Additionally, even if code is overly simplistic or accompanied by cluttered configurations, reading it is often the best way to clarify a method's exact workings and obtain detailed information about omitted hyper-parameter choices. \section{Argument trees} \label{u_argtrees} The core design philosophy of UniNAS is built on so-called \textit{argument trees}. This concept solves the problems of Section~\ref{u_introduction_disadvantages} while also providing immense configuration flexibility. As its basis, we observe that any algorithm or code piece can be represented hierarchically. For example, the task of training a network requires the network itself and a training loop, which may use callbacks and logging functions. Sections~\ref{u_argtrees_modularity} and~\ref{u_argtrees_register} briefly explain two requirements: strict modularity and a global register. As described in Section~\ref{u_argtrees_tree}, these allow each module to define which other types of modules it needs. In the previous example, a training loop may use callbacks and logging functions. Sections~\ref{u_argtrees_config} and~\ref{u_argtrees_build} explain how a configuration file can fully detail these relationships and how the desired code class structure can be generated from it. Finally, Section~\ref{u_argtrees_gui} shows how a configuration file can be easily manipulated with a graphical user interface, allowing the user to create and change complex experiments without writing a single line of code. \subsection{Modularity} \label{u_argtrees_modularity} As practiced in most non-simplistic codebases, the core of the argument tree structure is strong modularity. The framework code is fragmented into different components with clearly defined purposes, such as training loops and datasets.
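This kind of modularity can be sketched in a few lines of plain Python; the class names below are hypothetical stand-ins, far simpler than the actual UniNAS classes:

```python
# Minimal polymorphism sketch: the training loop depends only on the
# abstract interface, so subclasses are freely exchangeable.
class AbstractOptimizer:
    def step(self) -> str:
        raise NotImplementedError

class SGDOptimizer(AbstractOptimizer):
    def step(self) -> str:
        return "sgd update"

class AdamOptimizer(AbstractOptimizer):
    def step(self) -> str:
        return "adam update"

def train_step(optimizer: AbstractOptimizer) -> str:
    # no isinstance checks needed; any AbstractOptimizer subclass works here
    return optimizer.step()

assert train_step(SGDOptimizer()) == "sgd update"
assert train_step(AdamOptimizer()) == "adam update"
```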
Exchanging modules of the same type for one another, for example gradient-descent optimizers, is then a simple matter. If all implemented code classes of the same type inherit from one base class (e.g., AbstractOptimizer) that guarantees specific class methods for a stable interaction, they can be treated equally. In object-oriented programming, this design is termed polymorphism. UniNAS extends typical PyTorch~\citep{pytorch} classes with additional functionality. An example is image classification data sets, which ordinarily do not contain information about image sizes. Adding this specification makes it possible to use fake data easily and to precompute the tensor shapes in every layer throughout the neural network. \begin{figure*}[ht] \hfill \begin{minipage}[c]{0.97\textwidth} \begin{python}
@Register.task(search=True)
class SingleSearchTask(SingleTask):

    @classmethod
    def args_to_add(cls, index=None) -> [Argument]:
        return [
            Argument('is_test_run', default='False', type=str, is_bool=True),
            Argument('seed', default=0, type=int),
            Argument('save_dir', default='{path_tmp}', type=str),
        ]

    @classmethod
    def meta_args_to_add(cls) -> [MetaArgument]:
        methods = Register.methods.filter_match_all(search=True)
        return [
            MetaArgument('cls_device', Register.devices_managers, num=1),
            MetaArgument('cls_trainer', Register.trainers, num=1),
            MetaArgument('cls_method', methods, num=1),
        ]
\end{python} \end{minipage} \vskip-0.3cm \caption{ UniNAS code excerpt for a SingleSearchTask. The decorator function in Line~1 registers the class with type ``task'' and additional information. The method in Line~5 returns all arguments for the task to be set in a config file. The method in Line~13 defines the local tree structure by stating how many modules of which types are needed. It is also possible to specify additional requirements, as done in Line~14.
} \label{u_fig_register} \end{figure*} \subsection{A global register} \label{u_argtrees_register} A second requirement for argument trees is a global register for all modules. Its functions are: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item { Allow any module to register itself with additional information about its purpose. The example code in Figure~\ref{u_fig_register} shows this in Line~1. } \item { List all registered classes, including their type (task, model, optimizer, data set, and more) and their additional information (search, regression, and more). } \item { Filter registered classes by types and matching information. } \item { Given only the name of a registered module, return the class code located anywhere in the framework's files. } \end{itemize} As seen in the following Sections, this functionality is indispensable to UniNAS' design. The only difficulties in building such a register are that the code should remain readable and that every module has to register itself when the framework is used. Both can be achieved by scanning through all code files whenever a new job starts, which takes less than five seconds. In doing so, Python executes the decorators (see Figure~\ref{u_fig_register}, Line~1), which handle the registration in an easily readable fashion. \subsection{Tree-based dependency structures} \label{u_argtrees_tree} \begin{figure*} \vskip-0.7cm \begin{minipage}[l]{0.42\linewidth} \centering \includegraphics[trim=0 320 2480 0, clip, width=\textwidth]{./images/uninas/args_tree_s1_col.pdf} \vskip-0.2cm \caption{ Part of a visualized SingleSearchTask configuration, which describes the training of a one-shot super-network with a specified search method (omitted for clarity, the complete tree is visualized in Figure~\ref{app_u_argstree_img}). The white colored tree nodes state the type and number of requested classes, the turquoise boxes the specific classes used.
For example, the \textcolor{red}{SingleSearchTask} requires exactly one type of \textcolor{orange}{hardware device} to be specified, but the \textcolor{cyan}{SimpleTrainer} accepts any number of \textcolor{green}{callbacks} or loggers. \\ \hfill } \label{u_argstree_trimmed_img} \end{minipage} \hfill \begin{minipage}[r]{0.5\textwidth} \begin{small} \begin{lstlisting}[backgroundcolor = \color{white}]
"cls_task": <@\textcolor{red}{"SingleSearchTask"}@>,
"{cls_task}.save_dir": "{path_tmp}/",
"{cls_task}.seed": 0,
"{cls_task}.is_test_run": true,

"cls_device": <@\textcolor{orange}{"CudaDevicesManager"}@>,
"{cls_device}.num_devices": 1,

"cls_trainer": <@\textcolor{cyan}{"SimpleTrainer"}@>,
"{cls_trainer}.max_epochs": 3,
"{cls_trainer}.ema_decay": 0.5,
"{cls_trainer}.ema_device": "cpu",

"cls_exp_loggers": <@\textcolor{black}{"TensorBoardExpLogger"}@>,
"{cls_exp_loggers#0}.log_graph": false,

"cls_callbacks": <@\textcolor{green}{"CheckpointCallback"}@>,
"{cls_callbacks#0}.top_n": 1,
"{cls_callbacks#0}.key": "train/loss",
"{cls_callbacks#0}.minimize_key": true,
\end{lstlisting} \end{small} \vskip-0.2cm \caption{ Example content of the configuration text-file (JSON format) for the tree in Figure~\ref{u_argstree_trimmed_img}. The first line in each text block specifies the used class(es), the other lines their detailed settings. For example, the \textcolor{cyan}{SimpleTrainer} is set to train for three epochs and track an exponential moving average of the network weights on the CPU. } \label{u_argstree_trimmed_text} \end{minipage} \end{figure*} A SingleSearchTask requires exactly one hardware device and exactly one training loop (named trainer, to train an over-complete super-network), which in turn may use any number of callbacks and logging mechanisms. Their relationship is visualized in Figure~\ref{u_argstree_trimmed_img}. Argument trees are extremely flexible since they allow every hierarchical one-to-any relationship imaginable.
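The decorator-based registration of Figure~\ref{u_fig_register} rests on plain Python mechanisms; a deliberately simplified, hypothetical toy version of such a register could look as follows (the real UniNAS register is considerably richer):

```python
# Toy global register (hypothetical, much simpler than the UniNAS one):
# classes register themselves via decorators, together with extra
# information, and can later be filtered or looked up by name.
class Register:
    _classes = {}  # name -> (class, info dict)

    @classmethod
    def task(cls, **info):
        def decorator(klass):
            cls._classes[klass.__name__] = (klass, info)
            return klass
        return decorator

    @classmethod
    def get(cls, name):
        # return the class code given only its name
        return cls._classes[name][0]

    @classmethod
    def filter_match_all(cls, **query):
        # keep only classes whose registered info matches the query
        return [name for name, (_, info) in cls._classes.items()
                if all(info.get(k) == v for k, v in query.items())]

@Register.task(search=True)
class SingleSearchTask:
    pass

@Register.task(search=False)
class SingleRetrainTask:
    pass

assert Register.get("SingleSearchTask") is SingleSearchTask
assert Register.filter_match_all(search=True) == ["SingleSearchTask"]
```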
Multiple optional callbacks can be rearranged in their order and configured in detail. Moreover, module definitions can be reused in other constellations, including their requirements. The ProfilingTask does not need a training loop to measure the runtime of different network topologies on a hardware device, reducing the argument tree in size. While not implemented, a MultiSearchTask could use several trainers in parallel on several devices. The hierarchical requirements are made available using so-called MetaArguments, as seen in Line~16 of Figure~\ref{u_fig_register}. They specify the local structure of argument trees by stating which other modules are required. To do so, stating the required module type and amount is sufficient. As seen in Line~14, filtering the modules is also possible to allow only a specific subset. This particular example defines the upper part of the tree visualized in Figure~\ref{u_argstree_trimmed_img}. The names of all MetaArguments start with ``cls\_'', which improves readability and is reflected in the visualized argument tree (Figure~\ref{u_argstree_trimmed_img}, white-colored boxes). \subsection{Tree-based argument configurations} \label{u_argtrees_config} While it is possible to define such a dynamic structure, how can it be represented in a configuration file? Figure~\ref{u_argstree_trimmed_text} presents an excerpt of the configuration that matches the tree in Figure~\ref{u_argstree_trimmed_img}. As stated in Lines~6 and~9 of the configuration, CudaDevicesManager and SimpleTrainer fill the roles for the requested modules of types ``device'' and ``trainer''. Lines~14 and~17 list one class of the types ``logger'' and ``callback'' each, but could provide any number of comma-separated names. Also including the stated ``task'' type in Line~1, the mentioned lines state strictly which code classes are used and, given the knowledge about their hierarchy, define the tree structure.
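How a flat configuration selects the used classes can be sketched in a few lines (a hypothetical simplification with stand-in classes; the actual parsing is described in Section~\ref{u_argtrees_build}):

```python
# Hypothetical sketch: "cls_*" entries of a flat configuration name the
# used classes, and a register (here a plain dict) maps names back to code.
register = {
    "SingleSearchTask": object,  # stand-ins for the real classes
    "CudaDevicesManager": object,
    "SimpleTrainer": object,
    "TensorBoardExpLogger": object,
}

config = {
    "cls_task": "SingleSearchTask",
    "cls_device": "CudaDevicesManager",
    "cls_trainer": "SimpleTrainer",
    "cls_exp_loggers": "TensorBoardExpLogger",
    "{cls_trainer}.max_epochs": 3,  # a detail setting, not a class choice
}

def used_classes(config: dict, register: dict) -> dict:
    # every "cls_*" entry may list any number of comma-separated class names
    return {key: [register[name] for name in value.split(",")]
            for key, value in config.items() if key.startswith("cls_")}

chosen = used_classes(config, register)
```

Note that the detail setting for the trainer is ignored here; only the `cls_*` entries define which classes make up the tree.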
Additionally, every class has some arguments (hyper-parameters) that can be modified. The SingleSearchTask defines three such arguments (Lines~7 to~9 in Figure~\ref{u_fig_register}) in the visualized example, which are represented in the configuration (Lines~2 to~4 in Figure~\ref{u_argstree_trimmed_text}). If the configuration is missing an argument, perhaps to keep it short, its default value is used. Another noteworthy mechanism in Line~2 is that "\{cls\_task\}.save\_dir" references whichever class is currently set as "cls\_task" (Line~1), without naming it explicitly. Such wildcard references simplify automated changes to configuration files since, independently of the used task class, overwriting "\{cls\_task\}.save\_dir" is always an acceptable way to change the save directory. A less general but perhaps more readable notation is "SingleSearchTask.save\_dir", which is also accepted here. A very interesting property of such dynamic configuration files is that they contain only the hyper-parameters (arguments) of the used code classes. Adding any other arguments results in an error since the configuration-parsing mechanism, described in Section~\ref{u_argtrees_build}, is then unable to piece the information together. Even though UniNAS implements several different optimizer classes, any such configuration contains only the hyper-parameters of those actually used. Generated configuration files are thus always complete (they contain every available argument of the used classes), sparse (they contain no arguments of unused classes), and never ambiguous. A debatable design decision of the current configuration files, as seen in Figure~\ref{u_argstree_trimmed_text}, is that they do not explicitly encode any hierarchy levels. Since that information is already known from the class implementations, the flat representation was chosen primarily for readability. It is also beneficial when arguments are manipulated, either automatically or from the terminal when starting a task.
The disadvantage is that the argument names for class types can only be used once ("cls\_device", "cls\_trainer", and more); an unambiguous assignment is otherwise not possible. For example, since the SingleSearchTask already owns "cls\_device", no other class that could be used in the same argument tree can use that particular name. While this limitation is not too significant, it can be mildly confusing at times. Finally, how is it possible to create configuration files? Since the dynamic tree-based approach offers a wide variety of possibilities, only a tiny subset is valid. For example, providing two hardware devices violates the defined tree structure of a SingleSearchTask and results in a parsing failure. If that happens, the user is provided with details of which particular arguments are missing or unexpected. While the best way to create correct configurations is surely experience and familiarity with the code base, the same could be said about any framework. Since UniNAS knows about all registered classes, which other (possibly specified) classes they use, and all of their arguments (including defaults, types, help strings, and more), an exhaustive list can be generated automatically. However, at almost 1600 lines of text, this solution is not optimal either. The most convenient approach is presented in Section~\ref{u_argtrees_gui}: creating and manipulating argument trees with a graphical user interface. \begin{algorithm} \caption{ Pseudo-code for building the argument tree, best understood with Figures~\ref{u_argstree_trimmed_img} and~\ref{u_argstree_trimmed_text}. For a consistent terminology of code classes and tree nodes: if the $Task$ class uses a $Trainer$, then, in that context, $Task$ is the parent and $Trainer$ is the child. Lines starting with \# are comments.
} \label{alg_u_argtree} \small
\begin{algorithmic}
\Require $Configuration$ \Comment{Content of the configuration file}
\Require $Register$ \Comment{All modules in the code are registered}
\State{}
\State{$\#$ recursive parsing function to build a tree}
\Function{parse}{$class,~index$} \Comment{E.g. $(SingleSearchTask,~0)$}
\State $node = ArgumentTreeNode(class,~index)$
\State{}
\State{$\#$ first parse all arguments (hyper-parameters) of this tree node}
\ForEach{($idx, argument\_name$) \textbf{in} $class.get\_arguments()$} \Comment{E.g. (0, $''save\_dir''$)}
\State $value = get\_used\_value(Configuration,~class,~index,~argument\_name)$
\State $node.add\_argument(argument\_name,~value)$
\EndFor
\State{}
\State{$\#$ then recursively parse all child classes, for each module type...}
\ForEach{$child\_class\_type$ \textbf{in} $class.get\_child\_types()$} \Comment{E.g. $cls\_trainer$}
\State $class\_names = get\_used\_classes(Configuration,~child\_class\_type)$
\Assert{ The number of $class\_names$ is in the specified limits}
\State{}
\State{$\#$ for each module type, check all configured classes}
\ForEach{($idx,~class\_name$) \textbf{in} $class\_names$} \Comment{E.g. (0, $''SimpleTrainer''$)}
\State $child\_class = Register.get(class\_name)$
\State $child\_node = $\Call{parse}{$child\_class,~idx$}
\State $node.add\_child(child\_class\_type,~idx,~child\_node)$
\EndFor
\EndFor
\Returnx{ $node$}
\EndFunction
\State{}
\State $tree = $\Call{parse}{$Main, 0$} \Comment{Recursively parse the tree, $Main$ is the entry point}
\Ensure every argument in the configuration has been parsed
\end{algorithmic}
\end{algorithm}
\subsection{Building the argument tree and code structure} \label{u_argtrees_build} The arguably most important function of a research code base is to run experiments. In order to do so, valid configuration files must be translated into their respective code structure.
This comes with three major requirements: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item{ Classes in the code that implement the desired functionality. As seen in Section~\ref{u_argtrees_tree} and Figure~\ref{u_argstree_trimmed_img}, each class also states the types, argument names, and numbers of additionally requested classes for the local tree structure. } \item{ A configuration that describes which code classes are used and which values their parameters take. This is described in Section~\ref{u_argtrees_config} and visualized in Figure~\ref{u_argstree_trimmed_text}. } \item{ A way to connect the configuration content to classes in the code, which requires referencing code modules by their names. As described in Section~\ref{u_argtrees_register}, this can be achieved with a global register. } \end{itemize} Algorithm~\ref{alg_u_argtree} realizes the first step of this process: parsing the hierarchical class structure and its arguments from the flat configuration file. The result is a tree of \textit{ArgumentTreeNodes}, each of which refers to exactly one class in the code, is connected to all related tree nodes, and knows all relevant hyper-parameter values. While the nodes do not yet hold actual class instances, creating them is a simple final step. \begin{figure*}[h] \vskip -0.0in \begin{center} \includegraphics[trim=30 180 180 165, clip, width=\linewidth]{images/uninas/gui/gui1desc.png} \hspace{-0.5cm} \caption{ The graphical user interface (left) that can manipulate the configurations of argument trees (visualized right). Since many nodes are missing classes of some type ("cls\_device", ...), their parts in the GUI are highlighted in red. The eight child nodes of DartsSearchMethod are omitted for visual clarity.
} \label{fig_u_gui} \end{center} \end{figure*} \subsection{Creating and manipulating argument trees with a GUI} \label{u_argtrees_gui} Manually writing a configuration file can be perplexing since one must keep track of tree specifications, argument names, available classes, and more. The graphical user interface (GUI) visualized in Figures~\ref{fig_u_gui} and~\ref{app_u_gui} solves these problems to a large extent by providing the following functionality: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item{ Interactively add and remove nodes in the argument tree, and thus in the configuration and class structure, while highlighting violations of the tree specification. } \item{ Set the hyper-parameters of each node, using checkboxes (boolean values), dropdown menus (choices from a selection), and text fields (other cases, such as strings or numbers) where appropriate. } \item{ Save and load argument trees. Since it makes sense to separate the configurations for the training procedure and the network design to swap between different constellations easily, loading partial trees is also supported. Additional functions enable visualizing, resetting, and running the current argument tree. } \item{ Search for arguments and highlight all matches, since the size of some argument trees can make finding specific arguments tedious. } \end{itemize} In order to do so, the GUI manipulates \textit{ArgumentTreeNodes} (Section~\ref{u_argtrees_build}), which can be easily converted into configuration files and code. As long as the required classes (for example, the data set) are already implemented, the GUI enables creating and changing experiments without ever touching any code or configuration files. While not among the original intentions, this property may be especially interesting for non-programmers who want to solve their problems quickly. Still, the current version of the GUI is a proof of concept.
It favors functionality over design, as it is written with the plain Python Tkinter GUI framework and based on little previous GUI programming experience. Nonetheless, since the GUI (frontend) and the functions manipulating the argument tree (backend) are separated, continued development with different frontend frameworks is entirely possible. Perhaps the most interesting option would be a web service that runs experiments on a server, remotely configurable from any web browser. \subsection{Using external code} \label{u_external} There is a variety of reasons why it makes sense to include external code in a framework. Most importantly, the code either solves a standing problem or provides the users with additional options. Unlike newly written code, many popular libraries are also thoroughly optimized, reviewed, and empirically validated. External code is also a perfect match for a framework based on argument trees. As shown in Figure~\ref{u_fig_external_import}, external classes of interest can be thinly wrapped to ensure compatibility, register the module, and specify all hyper-parameters for the argument tree. The integration is so seamless that finding out whether a module is locally written or external requires inspecting its code. On the other hand, if importing the AdaBelief~\citep{zhuang2020adabelief} code fails, the module will not be registered and therefore not be available in the graphical user interface. UniNAS fails to parse configurations that require unregistered modules but informs the user which external sources can be installed to extend its functionality. Due to this logistic simplicity, several external frameworks extend the core of UniNAS. Some of the most important ones are: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item{ pymoo~\citep{pymoo}, a library for multi-objective optimization methods.
} \item{ Scikit-learn~\citep{sklearn}, which implements many classical machine learning algorithms such as Support Vector Machines and Random Forests. } \item{ PyTorch Image Models~\citep{rw2019timm}, which provides the code for several optimizers, network models, and data augmentation methods. } \item{ albumentations~\citep{2018arXiv180906839B}, a library for image augmentations. } \end{itemize} \begin{figure*} \hfill \begin{minipage}[c]{0.95\textwidth} \begin{python} from uninas.register import Register from uninas.training.optimizers.abstract import WrappedOptimizer try: from adabelief_pytorch import AdaBelief # if the import was successful, # register the wrapped optimizer @Register.optimizer() class AdaBeliefOptimizer(WrappedOptimizer): # wrap the original ... except ImportError as e: # if the import failed, # inform the user that optional libraries are not installed Register.missing_import(e) \end{python} \end{minipage} \vskip-0.3cm \caption{ Excerpt of UniNAS wrapping the official AdaBelief optimizer code. The complete text has just 45 lines, half of which specify the optimizer parameters for the argument trees. } \label{u_fig_external_import} \end{figure*} \section{Dynamic network designs} \label{u_networks} As seen in the previous Sections, the unique design of UniNAS enables powerful customization of all components. In most cases, a significant portion of the architecture search configuration belongs to the network design. The FairNAS search example in Figure~\ref{app_u_argstree_img} contains 25 configured classes, of which 11 belong to the search network. While it would be easy to create a single configurable class for each network architecture of interest, that would ignore the advantages of argument trees. On the other hand, there are many technical difficulties with highly dynamic network topologies. Some of them are detailed below. 
\subsection{Decoupling components} In many published research codebases, network and architecture weights jointly exist in the network class. This design decision is disadvantageous for multiple reasons. Most importantly, changing the network or NAS method requires a lot of manual work. The reason is that different NAS methods need different amounts of architecture parameters, use them differently, and optimize them in different ways. For example: \begin{itemize}[noitemsep,parsep=0pt,partopsep=0pt] \item{ DARTS~\citep{liu2018darts} requires one weight vector per architecture choice, which weighs all paths (candidate operations) in a sum. Updating the weights is done with an additional optimizer (ADAM), using gradient descent. } \item{ MDENAS~\citep{mdenas} uses a similar vector for a weighted sample of a single candidate operation that is used in this particular forward pass. Global network performance feedback is used to increase or decrease the local weightings. } \item{ Single-Path One-Shot~\citep{guo2020single} does not use architecture weights at all. Paths are always sampled uniformly at random. The trained super-network then serves as an accuracy prediction model for a hyper-parameter optimization method. } \item{ FairNAS~\citep{FairNAS} extends Single-Path One-Shot to make sure that all candidate operations are used frequently and equally often. It thus needs to track which paths are currently available. } \end{itemize} \begin{figure}[t] \vskip -0.0in \begin{center} \includegraphics[trim=0 0 0 0, clip, width=\linewidth]{images/draw/search_net.pdf} \hspace{-0.5cm} \caption{ The network and architecture weights are decoupled. \textbf{Top}: The structure of a fully sequential super-network. Every layer (cell) uses the same set of candidate operations and weight strategy. \textbf{Bottom left}: One set of candidate operations that is used multiple times in the network. This particular experiment uses the NAS-Bench-201 candidate operations.
\textbf{Bottom right}: A weight strategy that manages everything related to the used NAS method, such as creating the architecture weights or selecting which candidates are used in each forward pass. } \label{fig_u_decouple} \end{center} \end{figure} The same is also true for the set of candidate operations, which affects the sizes of the architecture weights. Once the definitions of the search space, the candidate operations, and the NAS method (including the architecture weights) are mixed, changing any part is tedious. Therefore, strictly separating them is the best long-term approach. Similar to other frameworks presented in Section~\ref{u_introduction_available}, architectures defined in UniNAS do not use an explicit set of candidate operations but allow a dynamic configuration. This is supported by a \textit{WeightStrategy} interface, which handles all NAS-related operations such as creating and updating the architecture weights. The interaction between the architecture definition, the candidate operations, and the weight strategy is visualized in Figure~\ref{fig_u_decouple}. The easy exchange of any component is not the only advantage of this design. Some NAS methods, such as DARTS, update network and architecture weights using different gradient-descent optimizers. Correctly disentangling the weights is trivial if they are already organized in decoupled structures, but hard otherwise. Another advantage is that standardizing the functions to create and manage architecture weights makes it easy to present relevant information to the user, such as how many architecture weights exist, their sizes, and which are shared across different network cells. An example is presented in Figure~\ref{app_text}. \begin{figure}[hb!]
\begin{minipage}[c]{0.24\textwidth} \centering \includegraphics[height=11.5cm]{./images/draw/mobilenetv2.pdf} \end{minipage} \hfill \begin{minipage}[c]{0.5\textwidth} \small \begin{python}
"cell_3": {
    "name": "SingleLayerCell",
    "kwargs": {
        "name": "cell_3",
        "features_mult": 1,
        "features_fixed": -1
    },
    "submodules": {
        "op": {
            "name": "MobileInvConvLayer",
            "kwargs": {
                "kernel_size": 3,
                "kernel_size_in": 1,
                "kernel_size_out": 1,
                "stride": 1,
                "expansion": 6.0,
                "padding": "same",
                "dilation": 1,
                "bn_affine": true,
                "act_fun": "relu6",
                "act_inplace": true,
                "att_dict": null,
                "fused": false
            }
        }
    }
},
\end{python} \end{minipage} \caption{ A high-level view on the MobileNet~V2 architecture~\citep{sandler2018mobilenetv2} in the top left, and a schematic of the inverted bottleneck block in the bottom left. This design uses two 1$\times$1 convolutions to change the channel count \textit{n} by an expansion factor of~6, and a spatial 3$\times$3 convolution in their middle. The text on the right-hand side represents the cell structure by referencing the modules by their names ("name") and their keyworded arguments ("kwargs"). } \label{u_fig_conf} \end{figure} \subsection{Saving, loading, and finalizing networks} \label{u_networks_save} As mentioned before, argument trees enable a detailed configuration of every aspect of an experiment, including the network topology itself. As visualized in Figure~\ref{app_u_argstree_img}, such network definitions can become almost arbitrarily complex. This becomes disadvantageous once models have to be saved or loaded or when super-networks are finalized into discrete architectures. Unlike TensorFlow~\citep{tensorflow2015-whitepaper}, the used PyTorch~\citep{pytorch} library saves only the network weights without execution graphs. External projects like ONNX~\citep{onnx} can be used to export limited graph information but not to rebuild networks using the same code classes and context.
The implemented solution is inspired by the official code\footnote{\url{https://github.com/mit-han-lab/proxylessnas/tree/master/proxyless_nas}} of ProxylessNAS~\citep{proxylessnas}, where every code module defines two functions that enable exporting and importing the entire module state and context. As typical for hierarchical structures, the state of an outer module contains the states of all modules within. An example is visualized in Figure~\ref{u_fig_conf}, where one cell in the famous MobileNet V2 architecture is represented as readable text. The global register can provide any class definition by name (see Section~\ref{u_argtrees_register}) so that an identical class structure can be created and parameterized accordingly. The same approach that enables saving and loading arbitrary class compositions can also be used to change their structure. More specifically, an over-complete super-network containing all possible candidate operations can export only a specific configuration subset. The network recreated from this reduced configuration is the result of the architecture search. This is made possible since the weight strategy controls the use of all candidate operations, as visualized in Figure~\ref{fig_u_decouple}. Similarly, when their configuration is exported, the weight strategy controls which candidates should be part of the finalized network architecture. In another use case, some modules behave differently in super-networks and finalized architectures. For example, Linear Transformers~\citep{ScarletNAS} supplement skip connections with linear 1$\times$1 convolutions in super-networks to stabilize the training with variable network depths. When the network topology is finalized, it suffices to simply export the configuration of a skip connection instead of their own. Another practical way of rebuilding code structures is available through the argument tree configuration, which defines every detail of an experiment (see Section~\ref{u_argtrees_config}). 
Parsing the network design and loading the trained weights of a previous experiment requires no further user interaction than specifying its save directory. This specific way of recreating experiment environments is used extensively in \textit{Single-Path One-Shot} tasks. In the first step, a super-network is trained to completion. Afterward, when the super-network is used to make predictions for a hyper-parameter optimization method (such as Bayesian optimization or evolutionary algorithms), the entire environment of its training can be recreated. This includes the network design and the dataset, data augmentations, which parts were reserved for validation, regularization techniques, and more. \section{Discussion and Conclusions} \label{u_conclusions} We presented the underlying concepts of UniNAS, a PyTorch-based framework with the ambitious goal of unifying a variety of NAS algorithms in one codebase. Even though the use cases for this framework changed over time, mostly from DARTS-based to SPOS-based experiments, its underlying design approach made reusing old code possible at every step. However, several technical details could be changed or improved in hindsight. Most importantly, configuration files should reflect the hierarchy levels (see Section~\ref{u_argtrees_config}) for code simplicity and to avoid concerns about using module types multiple times. The current design favors readability, which is now a minor concern thanks to the graphical user interface. Other considered changes would improve the code readability but were not implemented due to a lack of necessity and time. In summary, the design of UniNAS fulfills all original requirements. Modules can be arranged and combined in almost arbitrary constellations, giving the user an extremely flexible tool to design experiments. Furthermore, using the graphical user interface does not require writing even a single line of code. 
The resulting configuration files contain only the relevant information and are not cluttered by the many options the framework makes available. These features enable an almost arbitrary network design, combined with any NAS optimization method and any set of candidate operations. Despite this flexibility, networks can still be saved, loaded, and changed in various ways. Although not covered here, several unit tests ensure that the essential framework components keep working as intended.

Finally, what is the advantage of using argument trees over writing code that achieves the same results? Compared to configuration files, code is more powerful and versatile, but it will likely suffer from the problems described in Section~\ref{u_introduction_available}. Argument trees make any considerations about which parameters to expose unnecessary and can enforce the use of specific module types and subsets thereof. Their strongest advantage, however, is the visualization and manipulation of the entire experiment design with a graphical user interface. This aligns well with Automated Machine Learning (AutoML), which is likewise intended to make machine learning available to a broader audience.

{\small
\bibliographystyle{iclr2022_conference}
}