forum_id | raw_ocr_text |
---|---|
M1V498MXelq | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023Regularization by Denoising Diffusion Process for MRIReconstructionBatu Ozturkler1ozt@stanford.edu1Stanford UniversityMorteza Mardani2mmardani@nvidia.com2NVIDIAArash Vahdat2avahdat@nvidia.comJan Kautz2jkautz@nvidia.comJohn Pauly1pauly@stanford.eduEditors: Accepted for publication at MIDL 2023AbstractDiffusion models have recently delivered state-of-the-art performance for MRI reconstruc-tion with improved robustness. However, these models still fail when there is a largedistribution shift, and their long inference times impede their clinical utility. In this paper,we present regularization by denoising diffusion processes for MRI reconstruction (RED-diff). RED-diff formulates sampling as stochastic optimization, and outperforms diffusionbaselines in PSNR/SSIM with 3 ×faster inference while using the same amount of memory.Keywords: Diffusion models, Regularization by denoising (RED), MRI reconstruction.1. IntroductionMagnetic Resonance Imaging (MRI) is a widely used non-invasive imaging technique dueto its ability to generate high-quality images, but acquiring clinical MRI data requires longscan times. Imaging can be sped up by using multiple receiver coils, and by reducingthe amount of captured data with Fourier domain (k-space) undersampling (Lustig et al.,2007; Pruessmann et al., 1999). Generative diffusion models gained popularity for MRIreconstruction due to their high sample quality, improving robustness over unrolled methodsunder distribution shifts (Chung and Ye, 2021; Jalal et al., 2021; Song et al., 2023). Diffusionmodels can be pretrained for MRI to serve as the data prior and the pretrained model canbe used in a plug-and-play fashion by incorporating the forward model at inference time foruniversally solving downstream reconstruction tasks without the need for re-training or fine-tuning. However, diffusion models still fail dramatically under large distribution shifts suchas scan parameter change, or anatomy change between training and testing. Furthermore,inference time for diffusion models is much larger than end-to-end approaches due to thesequential denoising procedure during reverse diffusion, impeding their clinical utility.Recently, (Mardani et al., 2023) proposed regularization by denoising diffusion (RED-diff) for solving generic inverse problems. RED-diff uses a variational sampler based ona measurement consistency loss and a score matching regularization. In this paper, forthe first time, we propose RED-diff for MRI reconstruction. We evaluate RED-diff forMRI reconstruction on FastMRI and Mridata, and show that it achieves state-of-the-artperformance across different acceleration rates and anatomies.©2023 CC-BY 4.0, B. Ozturkler, M. Mardani, A. Vahdat, J. Kautz & J. Pauly.Ozturkler Mardani Vahdat Kautz PaulyAlgorithm 1 RED-diff: regularization by denoising diffusion process for MRIInput: k-space data y; acquisition model A= ΩFS;{αt, σt, λt}Tt=1Initialize: μ=xzf=A−1y1:fort=T, ..., 1do2:ε∼ N(0, I)3:xt=αtμ+σtε4:loss=∥Aμ−y∥2+λt(sg[εθ(xt;t)−ε])Tμ5:μ←OptimizerStep( loss)6:end for7:return μ2. MethodsAccelerated MRI. The forward model for accelerated MRI is given by y= ΩFSx +νwhere yis the measurement, xis the real image, Sare sensitivity maps, Fis the Fouriertransform, Ω is the subsampling mask, νis noise, and A= ΩFSis the forward model.Diffusion models. 
Diffusion models consist of two processes: a forward process thatgradually adds noise to input images and a reverse process that learns to generate imagesby iterative denoising. A popular class of diffusion models uses the variance preservingstochastic differential equation (VP-SDE) (Song et al., 2020). The forward and reverseprocess is characterized by the noise schedule β(t) with t∈[0, T] where tis the timestep.β(t) is designed such that the final distribution of xTat the end of the process converges toa standard Gaussian distribution. The reverse generative process requires estimating thescore function ∇xtlogp(xt), which denotes the score function of diffused data at time t.∇xtlogp(xt) can be estimated by training a joint neural network, denoted as εθ(xt;t), viadenoising score matching (Vincent, 2011). For denoising score matching, diffused samplesare generated by xt=αtx0+σtεwhere ε∼ N (0, I)x0∼pdatais the data distribution,σt= 1−e−Rt0β(s)ds, and αt=p1−σ2t, and εθ(xt;t)≈ −σt∇xtlogp(xt).RED-diff. (Mardani et al., 2023) proposes a variational inference approach based on KLminimization that corresponds to minimizing a measurement consistency loss equipped witha score-matching regularization term imposed by the diffusion prior. For MRI reconstruc-tion, we consider the following minimization problemminμ∥Aμ−y∥2+Et,ε[w(t)∥εθ(xt;t)−ε∥22] (1)where xt=αtμ+σtε, and w(t) is a time-dependent weighting mechanism. To search forμ, we use first-order stochastic optimization. We define the loss per timestep based on theinstantaneous gradient by detaching it at each timestep. Then, we can form the loss at timestep t as ∥Aμ−y∥2+λt(sg[εθ(xt;t)−ε])Tμwhere λtis the weighting term, and sg denotesstopped-gradient, indicating that score is not differentiated during the optimization. Wesetλt=λσt/αt, where λis a hyperparameter. Our full method is described in Alg. 1.3. Results and DiscussionWe use the multi-coil fastMRI brain dataset (Zbontar et al., 2018) with 1D equispacedundersampling, and the fully-sampled 3D fast-spin echo multi-coil knee MRI dataset from2Regularization by Denoising Diffusion Process for MRI ReconstructionAnatomy Brain Knee TimingR R= 4 R= 12 R= 16 (sec/iter)Zero-filled 27.8/0.81 24.5/0.63 24.0/0.60 -CSGM-Langevin 36.3/0.78 31.4/0.82 31.8/0.79 0.344RED-diff 37.1/0.83 33.2/0.78 32.7/0.77 0.114Table 1: Reconstruction PSNR/SSIM for fastMRI brain and Mridata knee dataset.Zero-FilledGround TruthCSGM-LangevinRED-diffPSNR: 23.62SSIM: 0.726PSNR: 35.47SSIM: 0.816PSNR: 36.15SSIM: 0.856PSNR: 23.85SSIM: 0.591PSNR: 26.93SSIM: 0.831PSNR: 32.25SSIM: 0.766Figure 1: Example reconstruction for brain at R= 4, and knee at R= 12.(Ong et al., 2018) with 2D Poisson Disc undersampling mask, as in (Jalal et al., 2021). Weused 6 validation volumes for fastMRI, and 3 volumes for Mridata by selecting 32 middleslices from each volume. Both datasets had a total of 96 test slices. For RED-diff, we uselinear schedule for β(t) from 0 .0001 to 0 .02, and T= 1000. We adopt Adam optimizerwith initial learning rate 0 .1 and no weight decay regularization, and set the momentum to(0.9,0.99) where λ= 0.25. We compare RED-diff with CSGM-Langevin (Jalal et al., 2021).For CSGM-Langevin and RED-diff, we use the score function from (Jalal et al., 2021) whichwas trained on a subset of the FastMRI multi-coil brain dataset. We evaluate the methods ini) the in-distribution setting on brain at R= 4, ii) the out-of-distribution setting with kneeatR={12,16}. 
Table 1 compares the reconstruction methods on the fastMRI brain and Mridata knee datasets. RED-diff outperforms CSGM-Langevin in most cases, with a PSNR improvement of +0.7 dB for brain and +1.8 dB for knee, and an SSIM improvement of +0.05 for brain, while offering 3× faster inference with the same amount of memory. Figure 1 shows example reconstructions for brain at R = 4 and knee at R = 12. RED-diff produces higher-quality reconstructions in both cases. Crucially, CSGM-Langevin is sensitive in the out-of-distribution setting and produces hallucination artifacts, whereas RED-diff mitigates these artifacts and produces a reconstruction with no hallucinations. In conclusion, RED-diff improves MRI reconstruction quality and speeds up inference by at least 3× while using the same inference memory.

References

Hyungjin Chung and Jong Chul Ye. Score-based diffusion models for accelerated MRI. arXiv, 2021.

Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G. Dimakis, and Jonathan I. Tamir. Robust compressed sensing MRI with deep generative priors. Advances in Neural Information Processing Systems, 2021.

Michael Lustig, David Donoho, and John M. Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6):1182–1195, December 2007. ISSN 0740-3194.

Morteza Mardani, Jiaming Song, Jan Kautz, and Arash Vahdat. A variational perspective on solving inverse problems with diffusion models. arXiv preprint arXiv:2305.04391, 2023.

F. Ong, S. Amin, S. Vasanawala, and M. Lustig. Mridata.org: An open archive for sharing MRI raw data. In Proc. Intl. Soc. Mag. Reson. Med., volume 26, 2018.

Klaas P. Pruessmann, Markus Weiger, Markus B. Scheidegger, and Peter Boesiger. SENSE: Sensitivity encoding for fast MRI. Magnetic Resonance in Medicine, 42(5):952–962, 1999.

Jiaming Song, Arash Vahdat, Morteza Mardani, and Jan Kautz. Pseudoinverse-guided diffusion models for inverse problems. In International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=9_gsMA8MRKQ.

Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.

Pascal Vincent. A connection between score matching and denoising autoencoders. Neural Computation, 23(7):1661–1674, 2011.

Jure Zbontar, Florian Knoll, Anuroop Sriram, Tullie Murrell, Zhengnan Huang, Matthew J. Muckley, Aaron Defazio, Ruben Stern, Patricia Johnson, Mary Bruno, et al. fastMRI: An open dataset and benchmarks for accelerated MRI. arXiv preprint arXiv:1811.08839, 2018. |
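As a concrete illustration of Algorithm 1 (RED-diff as stochastic optimization), the following is a minimal PyTorch sketch of the sampling loop described above. It assumes real-valued tensors (e.g., a two-channel real/imaginary image representation), a callable `A` implementing the acquisition model ΩFS with a separate adjoint `A_adj` used for the zero-filled initialization, a pretrained noise predictor `score_model`, and precomputed schedule arrays; these interfaces are assumptions for illustration, not the authors' released code.

```python
import torch

def red_diff_reconstruct(y, A, A_adj, score_model, alphas, sigmas, lambdas, T=1000, lr=0.1):
    # Initialize mu with the zero-filled reconstruction (A^{-1} y in Algorithm 1).
    mu = A_adj(y).clone().requires_grad_(True)
    opt = torch.optim.Adam([mu], lr=lr, betas=(0.9, 0.99))   # optimizer settings quoted in Section 3
    for t in range(T, 0, -1):
        eps = torch.randn_like(mu)                           # eps ~ N(0, I)
        x_t = alphas[t] * mu + sigmas[t] * eps               # diffuse the current estimate
        with torch.no_grad():                                # sg[.]: the score is not differentiated
            residual = score_model(x_t, t) - eps             # eps_theta(x_t; t) - eps
        data_fit = ((A(mu) - y) ** 2).sum()                  # ||A mu - y||^2
        reg = lambdas[t] * (residual * mu).sum()             # lambda_t * (sg[.])^T mu
        loss = data_fit + reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mu.detach()
```

In the paper, λ_t = λ σ_t/α_t with λ = 0.25, β(t) is linear from 0.0001 to 0.02 with T = 1000, and the OptimizerStep of Algorithm 1 corresponds to the Adam update above.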
gpsfGAOUs58 | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionCaption generation from histopathology whole-slide imagesusing pre-trained transformersBryan Cardenas Guevara1bryan.cardenasguevara@surf.nl1SURF, Amsterdam, The NetherlandsNiccol` o Marini2niccolo.marini@hevs.ch2University of Applied Sciences Western Switzerland, Sierre (HES-SO Valais)Stefano Marchesin3stefano.marchesin@unipd.it3University of Padua, Padua, ItalyWitali Aswolinskiy4Witali.Aswolinskiy@radboudumc.nl4Radboud University Medical Center, Nijmegen, The NetherlandsRobert-Jan Schlimbach1robert-jan.schlimbach@surf.nlDamian Podareanu1damian.podareanu@surf.nlFrancesco Ciompi4Francesco.Ciompi@radboudumc.nlEditors: Under Review for MIDL 2023AbstractThe recent advent of foundation models andlarge language models has enabled scientiststo leverage large-scale knowledge of pretrained (vision) transformers and efficiently tailor itto downstream tasks. This technology can potentially automate multiple aspects of cancerdiagnosis in digital pathology, from whole-slide image classification to generating pathologyreports while training with pairs of images and text from the diagnostic conclusion. In thiswork, we orchestrate a set of weakly-supervised transformer-based models with a first aimto address both whole-slide image classification and captioning, addressing the automaticgeneration of the conclusion of pathology reports in the form of image captions . We reportour first results on a multicentric multilingual dataset of colon polyps and biopsies. Weachieve high diagnostic accuracy with no supervision and cheap computational adaptation.Keywords: Whole slide images, histopathology, multi-modal training, caption generation1. IntroductionRecent advances in the field of deep learning are showing increasing capability of bridgingthe gap between language understanding and vision . Such technology is particularly suitedfor the field of medical imaging, where multimodal data with pairs of images and text fromelectronic health records are clinically available. With the adoption of digital pathologyworkflows, an increasing amount of gigapixel whole-slide images is produced clinically, con-taining a wealth of information for deep learning development. However, the promise ofcomputer algorithms as a support for pathology diagnosis and potentially aiding the gener-ation of pathology reports often relies on supervised learning, presenting a challenge due tothe substantial amount of labeled data required, together with the time-consuming interpre-tation of histopathology whole slide images (WSIs). The authors in (Gamper and Rajpoot,2021) provide evidence that models pre-trained on digital pathology images learn highlyinformative representations for caption generation. Nevertheless, their proposed method©2023 CC-BY 4.0, B.C. Guevara et al.Guevara Marini Marchesin Aswolinskiy Schlimbach Podareanu Ciompi(a)PubMedBERT Villous adenoma with severe dysplasia Villous adenoma with severe dysplasia WSIs HIPT Transformer WSIEncoder (b) SOT The quick ... brown The quick ... brown Decoder Block Decoder Block Decoder Block BIO-GPT-2 cross-attention WSIs HIPT Transformer WSIEncoder Figure 1: In the first stage (a) we perform Contrastive WSI-caption pre-training, while in(b), the decoder blocks are conditioned on the WSI embeddings trained in (a).Original Caption GPT-3.5 Cleaned Generated Captionbiopsies distal colon: chronic inflammation, inpartially active and slightly histiocytary. nospecific characteristics. 
the microscopicpreparations from elsewhere have beenrequested for revision.chronic inflammation, nospecific characteristics.no abnormalities, nodysplasia or malignancy.cyclic inflammation.biopt colon transversum: adenocarcinoma. adenocarcinoma. Metastasis ofadenocarcinoma best suitedto primary process.1) fragments of tubular adenoma with highdegree dysplasia.tubular adenoma with highdegree dysplasia.adenocarcinoma on villousadenoma. no lymphovascularinvasion is identified. encedenced enced ED ED ED EDTable 1: The last example shows a failed caption generation.involves an exhausting effort to extract and process captions in figures from text books.Similarly, previous studies (Zhang et al., 2020; Tsuneki and Kanavati, 2022) show that cap-tion generation is viable in digital pathology but the authors do not apply self-supervisedor pre-trained models. Motivated by this, we demonstrate the benefit from fine-tuningpre-trained weakly supervised transformers on the task of pathology caption generation.We orchestrate a two-stage pipeline where we first learn highly informative image and textrepresentations using the CLIP training regime (Radford et al., 2021)1. In the second stage,we utilize extracted WSI representations from the first stage to condition a pre-trained bio-gpt-2 (Luo et al., 2022) language model to generate captions. Moreover, pathology captionsmay include irrelevant or noisy information, such as running text unrelated to any observedlesion in the WSI. To address this, we explore the use of GPT-3.5-turbo (Ouyang et al.,2022) to pre-process the captions and remove extraneous information.2. MethodData We collected 5729 gigapixel-size whole slide images of colon polyps and biopsiesscanned at 0.25 micron per pixel spacing originating from two labs from two countries.Each WSI-caption pair was labelled with one of five diagnostic labels: normal, hyperpla-sia, low-grade dysplasia, high-grade dysplasia, or adenocarcinoma. These labels were notused during training of the pipeline. A subset of 569 patient-split WSI-caption pairs wasreserved for testing, which served as the basis for evaluating our results. The captions were1. Our code is available on github2Short TitleUnpretrainedcaption modelPre-trained captionmodelGPT-3.5 cleanedpre-trained captionmodelWSI supervisedclassifier0.65 (±0.20) 0.70 (±0.21) 0.73 (±0.15) 0.76 (±0.16)Table 2: Mean F1-scores over the five diagnostic classes for each model.machine translated (Tiedemann and Thottingal, 2020) from two languages to English. Thecaptions were subsequently pre-processed using GPT-3.5-turbo, which was prompted withten examples of how to restructure the captions. These processed captions are then usedfor training in the two-stage pipeline. Three examples are shown in Table 1.Architecture In the first stage of our pipeline, we train a CLIP model, which consistsof a HIPT model (Chen et al., 2022) to encode WSIs and PubmedBERT (Gu et al., 2020)to encode medical text. The HIPT model is a hierarchical transformer model trained usingDINO (Caron et al., 2021) on TCGA (Liu et al., 2018). HIPT encodes a 4096x4096 WSIregion to a vector of size 192. In this manner, we extract a sequence of embeddings thatrepresents one (packed) WSI. Subsequently, we train a transformer encoder on this sequenceto pool the features and map them to the same dimensionality as the caption embeddings.We kept both the image and language pre-trained models frozen and only extract the WSIand caption embeddings. 
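For reference, the first-stage contrastive objective over paired WSI and caption embeddings can be written as a symmetric InfoNCE (CLIP-style) loss; the sketch below assumes batch-level pairing of one WSI per caption and an illustrative temperature, and is not the project's exact training code.

```python
import torch
import torch.nn.functional as F

def clip_loss(wsi_emb, cap_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired WSI / caption embeddings.

    wsi_emb, cap_emb: (B, D) tensors from the pooled WSI encoder and PubMedBERT;
    the batch size, dimensionality, and temperature are illustrative choices.
    """
    wsi = F.normalize(wsi_emb, dim=-1)
    cap = F.normalize(cap_emb, dim=-1)
    logits = wsi @ cap.t() / temperature                      # (B, B) cosine similarities
    targets = torch.arange(wsi.shape[0], device=wsi.device)   # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```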
In the second stage, we extract the WSI embeddings from theprevious stage and condition decoder layers on top of a pre-trained bio-gpt-2 model.Evaluation We evaluated the generated captions by assessing their diagnostic abilityby manually classifying the captions to one of the five diagnostic labels. We could thencompare the mean F1-score of four distinct models: (1) A supervised WSI classifier thatwe treat as a baseline, (2) a caption model without a pre-trained bio-gpt-2 decoder, (3) apre-trained bio-gpt-2 caption model and (4) a pre-trained bio-gpt-2 caption model trainedon the processed gpt-3.5-turbo caption data.3. Results and DiscussionThe application of GPT-3.5-turbo to clean the captions results in a significant improvementover the baseline models in terms of diagnostic accuracy and the quality of the generatedcaptions. Our captioning model has a diagnostic accuracy close to a supervised classifierwhile having the weakly-supervised advantage. The caption templates in which the captionsare written by the pathologists differ between the two labs and by prompt-style captionpre-processing we are able to normalize them. The structure of the original captions differsbetween the labs and by prompt-style caption pre-processing we are able to normalize them.Despite using large transformer models, we fine-tuned our pipeline on a single A100 (40GB)GPU in 20 minutes. Our work highlights the need for large scale pre-trained models in thefield of digital pathology.AcknowledgmentsThis project has received funding from the European Union’s Horizon 2020 research and in-novation programme under grant agreement No 825292 (ExaMode, htttp://www.examode.eu/)3Guevara Marini Marchesin Aswolinskiy Schlimbach Podareanu CiompiReferencesMathilde Caron, Hugo Touvron, Ishan Misra, Herv’e J’egou, Julien Mairal, Piotr Bo-janowski, and Armand Joulin. Emerging properties in self-supervised vision transformers.2021 IEEE/CVF International Conference on Computer Vision (ICCV) , pages 9630–9640, 2021.Richard J. Chen, Chengkuan Chen, Yicong Li, Tiffany Y. Chen, Andrew D. Trister,Rahul G. Krishnan, and Faisal Mahmood. Scaling vision transformers to gigapixel imagesvia hierarchical self-supervised learning, 2022.Jevgenij Gamper and Nasir Rajpoot. Multiple instance captioning: Learning represen-tations from histopathology textbooks and articles. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition (CVPR) , pages 16549–16559,June 2021.Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, TristanNaumann, Jianfeng Gao, and Hoifung Poon. Domain-specific language model pretrainingfor biomedical natural language processing, 2020.Jianfang Liu, Tara M. Lichtenberg, Katherine A. Hoadley, Laila M. Poisson, Alexander J.Lazar, Andrew D. Cherniack, Albert J. Kovatich, Christopher C. Benz, Douglas A.Levine, Adrian V. Lee, Larsson Omberg, Denise M. Wolf, Craig D. Shriver, V ́ esteinnThorsson, and Hai Hu. An integrated tcga pan-cancer clinical data resource to drivehigh-quality survival outcome analytics. Cell, 173:400 – 416.e11, 2018.Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon,and Tie-Yan Liu. Biogpt: Generative pre-trained transformer for biomedi-cal text generation and mining. Briefings in Bioinformatics , 23(6), Novem-ber 2022. URL https://www.microsoft.com/en-us/research/publication/biogpt-generative-pre-trained-transformer-for-biomedical-text-generation-and-mining/ .Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. 
Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. Training language models to follow instructions with human feedback. ArXiv, abs/2203.02155, 2022.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. CoRR, abs/2103.00020, 2021. URL https://arxiv.org/abs/2103.00020.

Jörg Tiedemann and Santhosh Thottingal. OPUS-MT — Building open translation services for the World. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT), Lisbon, Portugal, 2020.

Masayuki Tsuneki and Fahdi Kanavati. Inference of captions from histopathological patches. In International Conference on Medical Imaging with Deep Learning, 2022.

Renyu Zhang, Christopher Weber, Robert Grossman, and Aly A. Khan. Evaluating and interpreting caption prediction for histopathology images. In Machine Learning for Healthcare Conference, pages 418–435. PMLR, 2020. |
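To illustrate the second-stage conditioning described in the Architecture paragraph (decoder blocks cross-attending to the first-stage WSI embeddings), here is a minimal PyTorch sketch of one conditioned decoder block. The dimensions, pre-norm wiring, and the placement on top of a pretrained bio-gpt-2 decoder are illustrative assumptions; the pretrained language model itself is not reproduced here.

```python
import torch
import torch.nn as nn

class ConditionedDecoderBlock(nn.Module):
    """A decoder block that cross-attends to a sequence of WSI region embeddings."""

    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, tokens, wsi_emb, causal_mask=None):
        # tokens:  (B, T, d_model) caption token states from the language-model decoder
        # wsi_emb: (B, R, d_model) region embeddings from the first-stage WSI encoder
        q = self.norm1(tokens)
        h = tokens + self.self_attn(q, q, q, attn_mask=causal_mask, need_weights=False)[0]
        h = h + self.cross_attn(self.norm2(h), wsi_emb, wsi_emb, need_weights=False)[0]
        return h + self.ff(self.norm3(h))
```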
Dtz_iaUpGc | Medical Imaging with Deep Learning 2023Automatic quantification of TSR as a prognostic marker forpancreatic cancer.Pierpaolo Vendittelli1pierpaolo.vendittelli@radboudumc.nlJohn-Melle Bokhorst1Esther M.M.Smeets1Valentyna Kryklyva1Lodewijk Brosens1Caroline Verbeke2Geert Litjens11Department of Pathology, Radboudumc, Nijmegen, The Netherlands2Department of Pathology, Oslo University Hospital, Oslo, NorwayAbstractThe current diagnostic and outcome prediction methods for pancreatic cancer lackprognostic power. As such, identifying novel biomarkers using machine learning has be-come of increasing interest. In this study, we introduce a novel method for estimating thetumor-stroma ratio (TSR) in whole slide images (WSIs) of pancreatic tissue and assess itspotential as a prognostic biomarker. A multi-step strategy for estimating TSR is proposed,including epithelium segmentation based on an immunohistochemical reference standard, acoarse pancreatic cancer segmentation, and a post-processing pipeline for TSR quantifica-tion. The resultant segmentation models are validated on external test sets using the Dicecoefficient, and additionally, the TSR’s potential as a prognostic factor is assessed usingsurvival analysis, resulting in a C-index of 0.61.Keywords: computational pathology, pancreatic cancer, survival, tumor-stroma ratio.1. IntroductionDespite histopathology analysis of biopsies being the gold standard for Pancreatic ductaladenocarcinoma (PDAC) diagnosis, current histopathological biomarkers have limited pre-dictive ability in determining prognosis. Furthermore, the TNM staging system itself, basedon grading the depth of the invasion, number of metastatic nodes, and the status of otherdistant metastases, suffers from the same issues since patients with the same TNM stagepresent different prognoses (Edge and Compton, 2010; Song et al., 2018). Therefore, thereis a need to develop reliable biomarkers that can better correlate tumor characteristics withsurvival to allow for better patient management.The tumor-stroma ratio (TSR) represents the relative amount of tumor cells and intra-tumoral stroma and is a widely studied prognostic factor. In various solid tumors (Roekeet al., 2017; Zhang et al., 2015; Geessink et al., 2019; Scheer et al., 2017), TSR was iden-tified as an independent prognostic factor. However, in pancreatic cancer, the role of TSRin predicting survival has been inconsistent with Lepp ̈ anen et al. (2019) stating that it isnot a reliable biomarker, and Li et al. (2020) assessing instead its predictive power. A©2023 CC-BY 4.0, G. Litjens1.Figure 1: Flowchart highlighting different pipeline steps: (a) Epithelium segmentation and(b) tumor epithelium segmentation. Through the process of staining-destaining of pairedH&E and IHC slides, epithelium annotations are obtained, which are then used to trainan epithelium segmentation network. This network annotates the rest of the slides. Subse-quently, a tumor epithelium segmentation network is trained on the segmented epitheliumcombined with annotated tumor area. Based on tumor epithelium segmentation, the tumorbulk is determined using the convex hull (c), on which TSR is calculated.key point for this might be partly due to the variability in estimating the TSR by humanobservers. Geessink et al. (2019) have previously shown that using machine learning meth-ods to estimate TSR in colorectal cancer had the potential to achieve more reproducibleTSR estimates with prognostic power. 
This article proposes to assess the prognostic powerof TSR in pancreatic cancer using a multi-step CNN-based pipeline for automatic tumorsegmentation and TSR estimation (Figure 1).2. MethodsOur study aimed to complete two main tasks: 1) pancreatic tumor segmentation and sep-aration into epithelial and stromal components, and 2) TSR quantification and relating itto patient survival.To complete these tasks, we used multiple datasets, including two internal datasets fromRadboudumc, a publicly available dataset from TCGA-PAAD, and a private multicentricdataset gathered in collaboration with 24 other centers. A complete overview of the datasetscontaining origin, number of slides and number of cases is reported in Table 1.For Task 1, we developed a two-step method for automatic tumor segmentation in WSIof the pancreas. In the first step, we trained a U-Net model (Iakubovskii, 2019) with adepth of five for epithelium segmentation on H&E slides. The model was trained using areference obtained through a stain-restain procedure. The slide was first stained with H&Eand digitized, then restained with cytokeratin (ck8/18) and digitized again at a resolotuonof 0.25μm. A color deconvolution technique was used to segment the epithelium, which was2TSR as prognostic markerthen mapped to the H&E slide using a registration algorithm. The U-Net was then trainedon H&E with the cytokeratin reference standard (Figure 1a) at a resolution of 1 .0μm.To subsequently obtain the separation between cancerous epithelium and other tissuewe combined the epithelium segmentation results with coarsely drawn tumor annotations.This resulted in detailed annotations of the tumor epithelium. We trained another five-depth U-Net model to segment the tumor epithelium and evaluated its performance usingDice coefficient on both the internal/external datasets and the TCGA dataset. Last, byapplying the alphahull algorithm to the resultant tumor epithelium segmentation we obtainboth the detailed segmentation of the tumor cells and the full tumor area.Dataset Source Patients [Slides] Epi. Segm. Tumor Epi. Segm. Tumor Segm.A Radboudumc 16 [16] 0.749 (0.3)B Multicentric 162 [162] 0.642 (0.254) 0.7 (0.27)C Radboudumc 29 [29] 0.751 (0.15) 0.76 (0.109)D TCGA-PAAD 161 [187] 0.717 (0.33) 0.726 (0.25) 0.863 (0.174)Table 1: Dataset description with Dice coefficients for the various tasks.For Task 2, we applied a multi-class tissue segmentation network (Bokhorsta et al.,2021), pre-trained on a colorectal tissue dataset, to the tumor area, which, among others,specifically segments stroma. By combining the full tumor area with the resultant stromalsegmentation, we could calculate the TSR for each slide, and we quantified it by calculatingthe ratio of stromal components with respect to the whole tumor area. We performed thensurvival analysis by training a Logistic Regression model incorporating TSR and clinicalvariables in a five-fold cross-validation fashion on the TCGA dataset. We validated themodel using the C-index, predicting patient survival at six month post-surgery. The clinicalvariables we considered for combining with the TSR were Age, Gender, Origin of the tumor,Primary diagnosis and Prior malignancy.3. Results and DiscussionTable 1 shows a summary of the results for Task 1 for each of the datasets. The results ofthe segmentation show the robustness of both the epithelium segmentation and the tumorepithelium segmentation. 
The average median Dice for the epithelium segmentation is0.733 (0 .3 IQR), while the tumor epithelium segmentation has an average median Dice of0.71 (0 .22 IQR). The results for the tumor segmentation also show good performance, withan average median Dice of 0 .77 (0 .18 IQR). Survival analysis shows that baseline (clinicalvariables alone) have an AUC of 0 .60±0.12, while TSR in combination with clinical featuresimproves performances in estimating 6-month survival, with an AUC of 0 .61±0.12.In our study, we developed a fully automated method for TSR quantification througha multi-step CNN. Results of the various tasks show reliable performances and survivalanalysis shows evidence of prognostic power for the TSR. Li et al. (2020) proposed a similarmethod, but in comparison to them, we were able to fully automate the process, reducingthe need for human supervision. Despite the promising results in the cross-validation, infuture research, we will fully validate the prognostic relevance of the TSR in expandedcohorts of the patients.3AcknowledgmentsThis project has received funding from the European Union’s Horizon 2020 research andinnovation programme under grant agreement no 101016851, project PANCAIM. Authorswould like to thank Stan Noordman for its contribution in the creation of Figure 1.ReferencesJohn-Melle Bokhorsta, Iris D Nagtegaal, Filippo Fraggetta, Simona Vatrano, Wilma Mesker,Michael Vieth, Jeroen van der Laak, and Francesco Ciompi. Automated risk classificationof colon biopsies based on semantic segmentation of histopathology images. arXiv preprintarXiv:2109.07892 , 2021.Stephen B Edge and Carolyn C Compton. The american joint committee on cancer: the7th edition of the ajcc cancer staging manual and the future of tnm. Annals of surgicaloncology , 17(6):1471–1474, 2010.Oscar GF Geessink, Alexi Baidoshvili, Joost M Klaase, Babak Ehteshami Bejnordi, Geert JSLitjens, Gabi W van Pelt, Wilma E Mesker, Iris D Nagtegaal, Francesco Ciompi, andJeroen AWM van der Laak. Computer aided quantification of intratumoral stroma yieldsan independent prognosticator in rectal cancer. Cellular oncology , 42(3):331–341, 2019.Pavel Iakubovskii. Segmentation models pytorch. https://github.com/qubvel/segmentation_models.pytorch , 2019.Joni Lepp ̈ anen, Ville Lindholm, Joel Isohookana, Kirsi-Maria Haapasaari, Peeter Karih-tala, Petri P Lehenkari, Juha Saarnio, Joonas H Kauppila, Tuomo J Karttunen, OlliHelminen, et al. Tenascin c, fibronectin, and tumor-stroma ratio in pancreatic ductaladenocarcinoma. Pancreas , 48(1):43, 2019.Bo Li, Yang Wang, Hui Jiang, Baoming Li, Xiaohan Shi, Suizhi Gao, Canrong Ni, ZelinZhang, Shiwei Guo, Jun Xu, et al. Pros and cons: high proportion of stromal componentindicates better prognosis in patients with pancreatic ductal adenocarcinoma—a researchbased on the evaluation of whole-mount histological slides. Frontiers in oncology , 10:1472, 2020.Toni Roeke, Marcelo Sobral-Leite, Tim JA Dekker, Jelle Wesseling, Vincent THBM Smit,Rob AEM Tollenaar, Marjanka K Schmidt, and Wilma E Mesker. The prognostic valueof the tumour-stroma ratio in primary operable invasive cancer of the breast: a validationstudy. Breast Cancer Research and Treatment , 166(2):435–445, 2017.Ren ́ e Scheer, Alexi Baidoshvili, Shorena Zoidze, Marloes AG Elferink, Annefleur EM Berkel,Joost M Klaase, and Paul J van Diest. Tumor-stroma ratio as prognostic factor for sur-vival in rectal adenocarcinoma: A retrospective cohort study. 
World Journal of Gastrointestinal Oncology, 9(12):466, 2017.

Wei Song, Dong-Liu Miao, and Lei Chen. Nomogram for predicting survival in patients with pancreatic cancer. OncoTargets and Therapy, 11:539, 2018.

Tiehong Zhang, Jun Xu, Hongchang Shen, Wei Dong, Yang Ni, and Jiajun Du. Tumor-stroma ratio is an independent predictor for survival in NSCLC. International Journal of Clinical and Experimental Pathology, 8(9):11348, 2015. |
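To make the Task 2 quantification above concrete, the snippet below sketches the TSR computation from two binary masks on a common low-resolution grid. The convex hull stands in for the tumor-bulk step (the paper describes both a convex hull and an alpha-hull of the tumor epithelium), and the mask layout and naming are assumptions.

```python
import numpy as np
from skimage.morphology import convex_hull_image

def tumor_stroma_ratio(tumor_epithelium_mask, stroma_mask):
    """TSR as the stromal area within the tumor bulk, relative to the whole tumor area.

    Both inputs are boolean masks on the same grid (hypothetical layout); the tumor
    bulk is approximated here by the convex hull of the tumor-epithelium segmentation.
    """
    tumor_bulk = convex_hull_image(tumor_epithelium_mask)
    stroma_in_bulk = np.logical_and(stroma_mask, tumor_bulk)
    return stroma_in_bulk.sum() / tumor_bulk.sum()

# Toy example:
epi = np.zeros((100, 100), dtype=bool)
epi[30:70, 30:70] = True
stroma = np.zeros_like(epi)
stroma[40:60, 20:80] = True
print(f"TSR: {tumor_stroma_ratio(epi, stroma):.2f}")
```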
O4f3k8zIZe9 | Medical Imaging with Deep Learning 2023Deep learning-based segmentation of rabbit fetal skull withlimited and sub-optimal training labelsRajath Soans1soans@merck.comAlexa Gleason1alexa gleason@merck.comTosha Shah1tosha shah@merck.comCorey Miller1corin miller@merck.comBarbara Robinson1barbara robinson@merck.comKimberly Brannen1kimberly.brannen@merck.comAntong Chen1antong.chen@merck.com1Merck & Co., Inc., Rahway, NJ, USAAbstractIn this paper, we propose a deep learning-based method to segment the skeletal structuresin the micro-CT images of Dutch-Belted rabbit fetuses which can assist in the detection ofskeletal abnormalities in nonclinical developmental toxicity studies. Our strategy leveragessub-optimal segmentation labels of 22 skull bones from 26 micro-CT volumes and mapsthem to 250 unlabeled volumes on which a deep CNN-based segmentation model is trained.In the experiments, our model was able to achieve an average Dice Similarity Coefficient(DSC) of 0.89 across all bones on the testing set, and 14 out of the 26 skull bones reachedaverage DSC >0.93. Our next steps are segmenting the whole body followed by developinga model to identtify abnormalities.Keywords: U-Net, nonclinical drug safety assessment, DART, micro-CT, rabbit fetus,sub-optimal ground truth training label, sparse label map1. IntroductionA common component of nonclinical safety assessments for new pharmaceuticals is the eval-uation of potential effects on prenatal development, including an assessment of fetal skeletaldevelopment, most often in rats and rabbits. This assessment is usually accomplished byvisual inspection of a specimen stained with Alizarin Red S, but alternative methods to usemicro-computed tomography (CT) with inspection of a 3-D reconstructed image have beendeveloped (example shown in Figure 1) (Winkelmann and Wise, 2009)Figure 1: Dutch-belted rabbit fetus. (left to right) alizarin red staining; Rendering ofthe skeletal structure from micro-CT image; Color coded label maps; table illustrating 22bone segments of the skullAutomation of such processes would require segmentation of each bone from the skele-ton, however, training a segmentation model is challenged by a) lack of annotated data©2023 CC-BY 4.0, R. Soans, A. Gleason, T. Shah, C. Miller, B. Robinson, K. Brannen & A. Chen.Soans Gleason Shah Miller Robinson Brannen Chenand b) sub-optimal quality of annotations (Tajbakhsh et al., 2020). Acquiring sufficientand accurate manual annotations on complicated skeletal structures is expensive and im-practical. In our work, we leverage annotations that are poorly delineated and availableonly in a limited quantity. We use image registration to map these annotations to a largerdataset which is then used to train a deep convolutional neural network (CNN) to performautomated segmentation.2. Materials and MethodsMicro-CT images were acquired using GE Locus Ultra micro-CT scanner with a polystyreneholder bucket containing up to 9 rabbit fetuses in each scan. Image volumes were recon-structed with voxel size of 0 .1×0.1×0.1mm3and scaled to Hounsfield units (HU).To analyze the fetus skull, we first cropped a sub-volume of size 320 ×320×250 con-taining the skull region with 250 slices on the z-direction. 
From a legacy set of 513 volumessegmented using a previously proposed automated segmentation pipeline (Dogdas et al.,2015), although the segmentation labels are sub-optimal, we inspected them and selected26 volumes with relatively more accurate and complete segmentation labels to be the atlasesfor a multi-atlas segmentation (MAS) strategy shown in Figure 2.Figure 2: MAS workflow. Registration from source to target are performed in this order-global rigid →global non-rigid →local non-rigid using ANTS suite (Avants et al., 2009).Fusion weights are proportional to local intensity correlation.Although the MAS strategy is effective, the execution of the registration workflow istime consuming and can absorb substantial amount of computing resource. Therefore,we elect to leverage the MAS strategy to create a dataset to train a U-Net segmentationmodel (Ronneberger et al., 2015). Specifically, the MAS strategy is used in obtainingsegmentation maps for a set of 250 un-annotated images which is then partitioned into220 training and 30 testing images. The segmentation maps representing just a singlebone segment tend to pose difficulty in training due to its sparse nature. To overcomethis challenge, we obtained distance transform of the segmentation maps and used it inguiding the model to convergence. This was realized by designing the loss function using acombination of a normalized distance regression loss (Ma et al., 2020) and the Dice SimilarityCo-efficient (DSC) as shown in 1.L=α∗Dice loss+β∗1N|Ω|XΩSDM (ground truth )o predicted map (1)where SDM () is the function to obtain Signed Distance Map as defined in (Xue et al., 2020),Nis the normalization factor to scale both losses to the same range, Ω is the grid on which2Micro-CT segmentation with limited sub-optimal annotationsimage is defined, αandβare the co-efficients, and ois the Hadamard product. Training isinitialized with higher β(0.8 in our experiments) and every 10 epochs it is reduced by 10%with an equal increase in α. Our overall pipeline is illustrated in Figure 3.Figure 3: U-Net segmentation pipeline. 22 models are trained targeting one bonesegment per model.3. Results and ConclusionDSC profile for U-Net segmentations on 30 test images is shown in the left panel of Fig-ure 4. To make an intuitive assessment, we used the U-Net based approach to regeneratesegmentations on the original 26 atlases to compare with the sub-optimal ground truthlabels. Example cases are shown in the right panel of Figure 4.Figure 4: U-Net segmentation results. (left) DSC boxplot and (right) visualiza-tion. U-Net predictions has a DSC >0.9 for most bones. Smaller and thinner bones e.g.Tympanic Rings are challenging to segment, yielding low DSC. Example segmentations onthe original 26 atlases (yellow: U-Net predictions (top row); red: ground truth (bottomrow)) illustrating improvement over the ground truth on the atlases, showing robustness ofour MAS+U-Net based approach and its ability to overcome sub-optimal labels.Our proposed segmentation strategy is effective and can function as the initial step inidentifying anomalies in rabbit fetus skull bones. We will further explore segmentationof the whole body skeleton which is relatively more challenging due to higher degree ofinter-specimen variability.3Soans Gleason Shah Miller Robinson Brannen ChenReferencesBrian B Avants, Nick Tustison, Gang Song, et al. 
Advanced normalization tools (ANTs). Insight Journal, 2(365):1–35, 2009.

Belma Dogdas, Antong Chen, Saurin Mehta, Tosha Shah, Barbara Robinson, Dahai Xue, Alexa Gleason, L. David Wise, Randy Crawford, Irene Pak, et al. Characterization of bone abnormalities from micro-CT images for evaluating drug toxicity in developmental and reproductive toxicology (DART) studies. In 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), pages 671–674. IEEE, 2015.

Jun Ma, Zhan Wei, Yiwen Zhang, Yixin Wang, Rongfei Lv, Cheng Zhu, Chen Gaoxiang, Jianan Liu, Chao Peng, Lei Wang, et al. How distance transform maps boost segmentation CNNs: an empirical study. In Medical Imaging with Deep Learning, pages 479–492. PMLR, 2020.

Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III, pages 234–241. Springer, 2015.

Nima Tajbakhsh, Laura Jeyaseelan, Qian Li, Jeffrey N. Chiang, Zhihao Wu, and Xiaowei Ding. Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Medical Image Analysis, 63:101693, 2020.

Christopher T. Winkelmann and L. David Wise. High-throughput micro-computed tomography imaging as a method to evaluate rat and rabbit fetal skeletal abnormalities for developmental toxicity studies. Journal of Pharmacological and Toxicological Methods, 59(3):156–165, 2009.

Yuan Xue, Hui Tang, Zhi Qiao, Guanzhong Gong, Yong Yin, Zhen Qian, Chao Huang, Wei Fan, and Xiaolei Huang. Shape-aware organ segmentation by predicting signed distance maps. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12565–12572, 2020. |
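To make the loss in Eq. (1) above concrete, here is a hedged PyTorch/SciPy sketch for a single bone channel. The signed-distance convention, the normalization factor N, and the exact α/β schedule ("start with β = 0.8, shift 10% from β to α every 10 epochs") are assumptions filled in from the text, not the authors' implementation.

```python
import torch
from scipy.ndimage import distance_transform_edt

def signed_distance_map(gt_mask):
    """SDM of a binary ground-truth mask: negative inside, positive outside (assumed convention)."""
    gt = gt_mask.numpy().astype(bool)
    return torch.from_numpy(distance_transform_edt(~gt) - distance_transform_edt(gt))

def dice_loss(pred, gt, eps=1e-6):
    inter = (pred * gt).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def combined_loss(pred, gt, alpha, beta):
    """Eq. (1): alpha * Dice loss + beta * (1 / (N * |Omega|)) * sum(SDM(gt) Hadamard pred)."""
    sdm = signed_distance_map(gt).to(pred.dtype)
    norm = sdm.abs().max().clamp(min=1.0)            # N: scales the regression term toward [0, 1] (assumed)
    boundary = (sdm * pred).sum() / (norm * pred.numel())
    return alpha * dice_loss(pred, gt) + beta * boundary

# Assumed schedule: beta starts at 0.8 and shifts 10% toward alpha every 10 epochs.
alpha, beta = 0.2, 0.8
for epoch in range(1, 51):
    # ... forward pass producing `pred` (sigmoid output) for ground truth `gt`, then
    # loss = combined_loss(pred, gt, alpha, beta); backpropagate and step the optimizer ...
    if epoch % 10 == 0:
        shift = 0.1 * beta
        alpha, beta = alpha + shift, beta - shift
```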
icYc_uKOI6o | Medical Imaging with Deep Learning – 1:1-4 2023 Short Paper – MIDL 2023Implementation considerations for deep learning withdiffusion MRI streamline tractographyLeon Y. Cai1leon.y.cai@vanderbilt.edu1Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USAHo Hin Lee2ho.hin.lee@vanderbilt.edu2Department of Computer Science, Vanderbilt University, Nashville, TN, USANancy R. Newlin2nancy.r.newlin@vanderbilt.eduMichael E. Kim2michael.kim@vanderbilt.eduDaniel Moyer2daniel.moyer@vanderbilt.eduFran ̧ cois Rheault3francois.m.rheault@usherbrooke.ca3Department of Computer Science, Universit ́ e de Sherbrooke, Sherbrooke, Quebec, CanadaKurt G. Schilling4,5kurt.g.schilling.1@vumc.org4Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville,TN, USA5Vanderbilt University Institute of Imaging Science, Vanderbilt University, Nashville, TN, USABennett A. Landman1,2,4,5,6bennett.landman@vanderbilt.edu6Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USAEditors: Accepted for publication at MIDL 2023AbstractOne area of medical imaging that has recently experienced innovative deep learning ad-vances is diffusion MRI (dMRI) streamline tractography with recurrent neural networks(RNNs). Unlike traditional imaging studies which utilize voxel-based learning, these studiesmodel dMRI features at points in continuous space off the voxel grid in order to propagatestreamlines, or virtual estimates of axons. However, implementing such models is non-trivial, and an open-source implementation is not yet widely available. Here, we describe aseries of considerations for implementing tractography with RNNs and demonstrate they al-low one to approximate a deterministic streamline propagator with comparable performanceto existing algorithms. We release this trained model and the associated implementationsleveraging popular deep learning libraries. We hope the availability of these resources willlower the barrier of entry into this field, spurring further innovation.Keywords: Diffusion MRI (dMRI) streamline tractography, deep learning, recurrent neu-ral networks, PyTorch1. IntroductionDeep learning has transformed diffusion MRI (dMRI) processing, with many recent studiesfocusing on streamline tractography with recurrent neural networks (RNNs) (Poulin et al.,2019). Instead of stepping through temporal features to propagate a signal in time, thesestudies step through voxel-based dMRI features to propagate a streamline, or a sequence ofpoints approximating a white matter (WM) tract in the brain, in space . However, imple-menting RNNs to predict sequences of spatial points of arbitrary lengths that may not lieon the voxel-grid with batch-wise backpropagation is non-trivial. Further, an open-sourceimplementation using commonly supported deep learning libraries is not yet widely avail-able. To fill this gap, we detail considerations needed for implementing such a model, assesshow one trained with these implementations performs against traditional tractography al-gorithms, and release the model and associated code implemented in PyTorch (v1.12).©2023 L.Y. Cai, H.H. Lee, N.R. Newlin, M.E. Kim, D. Moyer, F. Rheault, K.G. Schilling & B.A. Landman.Cai Lee Newlin Kim Moyer Rheault Schilling Landman2. MethodsDefining and computing ground truth labels and losses. We define a batch of Kstreamlines, S=s1, ..., sK, as a list of streamlines of non-uniform length. 
Specifically, wedefine streamline skof length nkas a list of points, sk=xk1, ...,xknk, where xkiis a point incontinuous 3-dimensional voxel space. We define labels for xkias the Cartesian unit vector∆xki=xki+1−xki||xki+1−xki||. We remove the last point from each streamline so that inputs and labelshave the same length, setting nk=nk−1. However, as unit vectors have two degrees offreedom, we do not have the RNN directly predict the labels in Cartesian space. Rather,we predict the labels in spherical coordinates as ∆ ˆxki= (φki, θki) and convert to Cartesianas ∆ ˆxki= (sin φkicosθki,sinφkisinθki,cosφki) prior to loss computation. We utilize a cosinesimilarity loss for each point of sk,L(∆ˆxki,∆xki) = 1 −⟨∆ˆxki,∆xki⟩||∆ˆxki||||∆xki||. Streamlines can bepropagated from the ith point to the next as ˆxki+1=xki+γ∆ˆxkiwhere γis the step size.Differentiably sampling dMRI features off the voxel grid. xki, defined as a 3-dimensional coordinate in voxel space, provides little utility for efficiently querying dMRIinformation at its location off the voxel grid. Thus, we instead convert each xkitocki, an11-dimensional vector. Considering xkias an off-grid point contained within a lattice of 8on-grid points, the first 3 elements of ckiare the distance of xkifrom the lowest lattice pointalong all 3 spatial axes in voxel space, xki− ⌊xki⌋. The remaining 8 elements are the linearindices of the 8 on-grid points in the image volume. With these 11 values, the lattice valuescan be queried and interpolated trilinearly to obtain off-grid features for each point in skasqki=dMRI (cki) (Kang, 2006). As trilinear interpolation is differentiable, this allows forend-to-end training between input voxel grids and output losses at points off the grid.Organizing data during training. As an example, we assume each qkiis a 45-dimensional feature vector, as is commonly the case if the dMRI grid is a grid of fiberorientation distribution (FOD) spherical harmonic (SH) coefficients. Thus, Scan be repre-sented as a list of length Kwhere each skis a matrix of size nk×45. However, the variabilityofnkacross Sis inefficient for the tensor-based parallelization frameworks utilized by deeplearning libraries. Thus, we convert Sinto a ”padded packed” tensor for training.When aligned by the first element of each sk,Scan be ”padded” with zeros to a tensorof size M×K×45, where M= max( n1, ..., nK) is the length of the longest streamlinein the batch. This padded tensor can then be ”packed” to a tensor of size N×45, whereN=PKk=1nk. The packed formulation allows for batch-wise steps in recurrent neuralnetworks for input sequences of different lengths, and the padded formulation allows foreasier querying of specific points in their corresponding streamlines for loss aggregation.Both these operations and their inverses are natively supported in PyTorch.The network predictions are also packed tensors of size N×3 after conversion fromspherical to Cartesian coordinates. To compute the batch-wise loss, we convert the packedpredictions to padded representations of size M×K×3, use a mask to ignore the padding,and average the loss across all the streamline points as1NPKk=1Pnki=1L(∆ˆxki,∆xki). Forefficiency, we compute masks and save the labels in padded form before training.Parallelizing inference. Unlike traditional tractography algorithms which parallelizetracking on the streamline level, RNNs must parallelize on the point level. 
In other words,each step of the RNN must advance all streamlines in a batch, as outlined in algorithm 1.2Implementing deep learning for tractographyAlgorithm 1: Parallelizing inference with a padded tensor where M= 11.x1i, ...,xKi(size 1 ×K×3) are the heads of Kactively propagating streamlinesin a padded tensor. These points are seeded arbitrarily when i= 1.2. Convert x1i, ...,xKitoc1i, ...,cKi(size 1 ×K×11).3. Sample q1i, ...,qKi(size 1 ×K×45) off-grid from c1i, ...,cKi.4. Compute ∆ ˆx1i, ...,∆ˆxKi(size 1 ×K×3) with the RNN from q1i, ...,qKi.5. Compute ˆx1i+1, ...,ˆxKi+1=x1i+γ∆ˆx1i, ...,xKi+γ∆ˆxKi(size 1 ×K×3).6. Set x1i, ...,xKi=ˆx1i+1, ...,ˆxKi+1and repeat.This approach allows arbitrary stopping criteria to be evaluated for each streamline headindependently, after which it can be taken off the tensor, speeding up propagation for theremaining streamlines. Since batches have a set size K, once all streamlines meet criteria,new batches can be initialized and propagated until the desired number of streamlines aregenerated. Last, Kcan vary, making this approach adaptable to different GPU capacities.3. Results and DiscussionWith these considerations, we train an RNN streamline propagator on dMRI data fromthe Human Connectome Project to approximate the deterministic SDStream tractographyalgorithm (Tournier et al., 2007) as described by Cai et al. (2023). Briefly, we use a multi-layer perceptron- and gated recurrent unit-based architecture with 4.2 million parameters,taking dMRI FODs represented on the voxel grid with 45 even-order SH coefficients as input.Compared to SDStream, we find similar recovery of WM bundles between our method andthe iFOD2 probabilistic propagator (Tournier et al., 2010) (Figure 1).We release this model and the associated code ( github.com/MASILab/STrUDeL ) to spurfurther innovations in this field. We note these implementations are currently limited todeterministic propagators, and probabilistic ones would require reparameterization.Figure 1: Compared to reference, representative iFOD2 and RNN left arcuate fasciculii arevisually similar as are the median Dice coefficients across subjects per bundle.AcknowledgmentsThis work was supported by ACCRE at Vanderbilt; NSF 2040462; and NIH intramu-ral and 5R01EB017230, U34DK123895, P50HD103537, U54HD083211, K01EB032898, andT32GM007347; and does not necessarily represent the official views of the NIH or NSF.3Cai Lee Newlin Kim Moyer Rheault Schilling LandmanReferencesLeon Y Cai, Ho Hin Lee, Nancy R Newlin, Cailey I Kerley, Praitayini Kanakaraj,Qi Yang, Graham W Johnson, Daniel Moyer, Kurt G Schilling, Francois Rheault, et al.Convolutional-recurrent neural networks approximate diffusion tractography from t1-weighted mri and associated anatomical context. bioRxiv , pages 2023–02, 2023.Henry R Kang. Three-dimensional lookup table with interpolation. In Computational ColorTechnology , pages 151–159. SPIE press, 2006.Philippe Poulin, Daniel J ̈ orgens, Pierre Marc Jodoin, and Maxime Descoteaux. Tractog-raphy and machine learning: Current state and open challenges. Magnetic ResonanceImaging , 64(January):37–48, 2019. ISSN 18735894. doi: 10.1016/j.mri.2019.04.013. URLhttps://doi.org/10.1016/j.mri.2019.04.013 .J. Donald Tournier, Fernando Calamante, and Alan Connelly. Robust determination ofthe fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution. NeuroImage , 35(4):1459–1472, 2007. 
ISSN 1053-8119. doi: 10.1016/j.neuroimage.2007.02.016.

J. Donald Tournier, Fernando Calamante, Alan Connelly, et al. Improved probabilistic streamlines tractography by 2nd order integration over fibre orientation distributions. In Proceedings of the International Society for Magnetic Resonance in Medicine, volume 1670. ISMRM, 2010. |
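The batching considerations above (padding and packing variable-length streamlines, predicting spherical angles, and masking the cosine loss) fit together in a few lines of PyTorch. The sketch below uses random features and a toy GRU in place of the released model; the hidden size, batch contents, and linear head are illustrative, and the goal is to show the tensor bookkeeping rather than reproduce the trained propagator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

# Toy batch: K = 3 streamlines with n_k points, each point a 45-dim FOD SH feature vector q_i^k.
feats = [torch.randn(n, 45) for n in (12, 7, 20)]
labels = [F.normalize(torch.randn(n, 3), dim=-1) for n in (12, 7, 20)]   # unit step directions
lengths = torch.tensor([f.shape[0] for f in feats])

padded = pad_sequence(feats)                                    # (M, K, 45), zero-padded
packed = pack_padded_sequence(padded, lengths, enforce_sorted=False)      # (N, 45) of data

gru = nn.GRU(input_size=45, hidden_size=64)
head = nn.Linear(64, 2)                                         # predict (phi, theta) per point

out, _ = gru(packed)                                            # batch-wise steps over the packed data
hidden, _ = pad_packed_sequence(out)                            # back to padded form, (M, K, 64)
phi, theta = head(hidden).unbind(dim=-1)
pred = torch.stack((torch.sin(phi) * torch.cos(theta),          # spherical -> Cartesian unit vector
                    torch.sin(phi) * torch.sin(theta),
                    torch.cos(phi)), dim=-1)                    # (M, K, 3)

target = pad_sequence(labels)                                   # (M, K, 3)
mask = torch.arange(pred.shape[0])[:, None] < lengths[None, :]  # (M, K), True at the N valid points
cos = F.cosine_similarity(pred, target, dim=-1)                 # (M, K)
loss = ((1.0 - cos) * mask).sum() / mask.sum()                  # mean cosine loss over valid points
loss.backward()
```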
O8RJGtdACWs | Medical Imaging with Deep Learning 2023Exploring the Role of Explainability for Uncovering Bias inDeep Learning-based Medical Image AnalysisEmma A.M. Stanley1,2,3,4emma.stanley@ucalgary.caMatthias Wilms2,4,5,6matthias.wilms@ucalgary.caPauline Mouches1,2,3,4pauline.mouches@ucalgary.caNils D. Forkert1,2,3,4,7nils.forkert@ucalgary.ca1Department of Radiology, University of Calgary, Canada2Hotchkiss Brain Institute, University of Calgary, Canada3Department of Biomedical Engineering, University of Calgary, Canada4Alberta Children’s Hospital Research Institute, University of Calgary, Canada5Department of Pediatrics, University of Calgary, Canada6Department of Community Health Sciences, University of Calgary, Calgary, Canada7Department of Clinical Neurosciences, University of Calgary, Calgary, CanadaAbstractFairness and bias are critical considerations for the effective and ethical use of deep learningmodels for medical image analysis. Despite this, there has been minimal research on howexplainable artificial intelligence (XAI) methods can be leveraged to better understandunderlying causes of bias in medical image data. To study this, we trained a convolutionalneural network on brain magnetic resonance imaging (MRI) data of 4547 adolescents topredict biological sex. Performance disparities between White and Black racial subgroupswere analyzed, and average saliency maps were generated for each subgroup based onsex and race. The model showed significantly higher performance in correctly classifyingWhite males compared to Black males, and slightly higher performance for Black femalescompared to White females. Saliency maps indicated subgroup-specific differences in brainregions associated with pubertal development, an established confounder in this task, whichis also associated with race. These findings suggest that models demonstrating performancedisparities can also lead to varying XAI outcomes across subgroups, offering insights intopotential sources of bias in medical image data.Keywords: Fairness, bias, explainability1. IntroductionRecently, there has been growing concern about bias and fairness issues in deep learningmodels for medical image analysis. Prominent examples have included observed perfor-mance disparities between different sociodemographic groups in chest X-ray classification(Seyyed-Kalantari et al., 2021) and cardiac segmentation (Puyol-Ant ́ on et al., 2021). Al-though many studies have shown that deep learning models can produce disparate outcomes,little research has been done to understand the root cause of biases related to performanceand how they manifest in such models. While explainable AI (XAI) methods have beencommonly applied to understand black-box deep learning model decisions, they have notbeen used extensively to better understand bias and fairness in medical imaging models.This study aims to evaluate if XAI is a feasible technique for better understanding potential©2023 CC-BY 4.0, E.A. Stanley, M. Wilms, P. Mouches & N.D. Forkert.Stanley Wilms Mouches Forkertsources of bias that result in subgroup-specific performance disparities, using a well-defineddeep learning classification task. More precisely, a convolutional neural network (CNN) istrained to classify biological sex of adolescents, in which a previous study identified thestage of pubertal development as a significant confounding factor (Adeli et al., 2020). 
Dueto established differences in the onset of pubertal development between different races andsexes (Herman-Giddens et al., 2012; Wu et al., 2002), we hypothesized that this modelcould produce performance disparities between Black and White subgroups. We posit thatXAI could then provide clues to sources of bias in medical imaging data if brain regionsassociated with the known confounder of pubertal development are identified as salient forthe model’s predictions. This short paper summarizes the study in (Stanley et al., 2022b).2. MethodsThis study used T1-weighted brain MRI from 4547 participants aged 9-10 from the 3.0release of the ABCD study1. The biological sex (defined as sex assigned at birth) andrace information of the participants were collected from surveys completed by a parent orguardian. Out of the total number of participants, 3,008 were identified as White and 390were identified as Black. The remaining participants of other races were included in modeltraining, but not in subgroup analyses. A CNN based on the Simple Fully ConvolutionalNetwork proposed by (Peng et al., 2021) was used for the sex classification task, with a five-fold cross-validation scheme. Full model implementation details are available in (Stanleyet al., 2022b). Saliency maps were computed by averaging registered SmoothGrad (Smilkovet al., 2017) results from 20 correctly classified subjects within each demographic subgroup.Weighted saliency scores were computed by multiplying the percent of salient voxels withineach brain region defined by the CerebrA atlas (Manera et al., 2020) by a weighting factoraccounting for the mean saliency intensity value within each region. To evaluate differencesin model performance between White and Black subgroups, a two-tailed Student’s t-testwith a significance level of 0.05 was used.3. Results and DiscussionThe sex classification model achieved an overall accuracy of 87.8%, comparable to theresults reported by (Adeli et al., 2020). While classification accuracy within the Whitefemale subgroup was lower but not statistically significantly different from the Black femalesubgroup (86.5% vs. 89.3%, p=0.260), the White male subgroup accuracy was 9.2% higherthan Black male subgroup accuracy, which was significant (90.3% vs. 81.1%, p=0.03).These results, similar to those reported by (Seyyed-Kalantari et al., 2021) and (Puyol-Ant ́ on et al., 2021), highlight the importance of not relying solely on high overall accuracyto evaluate model performance, as disparities may exist within sensitive subgroups andshould be reported. Although this study used race as a grouping factor, a major challengefor evaluating model fairness is that other hidden disparities may be present within sensitiveattributes not explicitly analyzed, or within intersections of sensitive groups (Stanley et al.,2022a).1. https://abcdstudy.org/2Explainability for Uncovering BiasFigure 1: Weighted saliency scores in top brain regions (RH = right hemisphere, LH = lefthemisphere).Brain regions highlighted in the saliency maps included the cerebellum, amygdala, lateralventricles, temporal lobes, and entorhinal cortex, with the cerebellum showing the highestsaliency activation. This region was also identified as the most significant confounder relatedto pubertal development stage for sex classification in (Adeli et al., 2020). The weightedsaliency scores for each subgroup are presented in Fig 1, with some brain regions showingdifferences between sexes and races. 
For example, the right hemisphere (RH) cerebellumwhite matter and left hemisphere (LH) cerebellum gray matter show sex-specific trends,and the RH vermal lobules VIII to X and LH entorhinal cortex show sex-specific trends byrace. The amygdala and medial temporal lobe, which have been linked to morphologicalchanges associated with pubertal development (Bramen et al., 2011), also demonstratesubgroup differences in saliency scores. These varying saliency scores within brain regionsmay be due to the model using morphological information related to pubertal developmentstage differently for each subgroup, potentially contributing to performance disparities.While subgroup saliency maps may help link model performance to bias and confounders indatasets, it should also be noted that these results have implications on the use of XAI forbiomarker detection in clinical tasks. If saliency maps show appreciable differences betweendemographic groups, conclusions based on aggregate saliency maps may not be generalizableto distinct subpopulations.ReferencesEhsan Adeli, Qingyu Zhao, Natalie M. Zahr, Aimee Goldstone, Adolf Pfefferbaum, Edith V.Sullivan, and Kilian M. Pohl. Deep learning identifies morphological determinants of sexdifferences in the pre-adolescent brain. Neuroimage , 223:117293, 2020.3Stanley Wilms Mouches ForkertJennifer E. Bramen, Jennifer A. Hranilovich, Ronald E. Dahl, Erika E. Forbes, Jessica Chen,Arthur W. Toga, Ivo D. Dinov, Carol M. Worthman, and Elizabeth R. Sowell. PubertyInfluences Medial Temporal Lobe and Cortical Gray Matter Maturation Differently inBoys Than Girls Matched for Sexual Maturity. Cerebral Cortex , 21(3):636–646, 2011.Marcia E. Herman-Giddens, Jennifer Steffes, Donna Harris, Eric Slora, Michael Hussey,Steven A. Dowshen, Richard Wasserman, Janet R. Serwint, Lynn Smitherman, and Ed-ward O. Reiter. Secondary sexual characteristics in boys: data from the Pediatric Re-search in Office Settings Network. Pediatrics , 130(5):e1058–1068, 2012.Ana L. Manera, Mahsa Dadar, Vladimir Fonov, and D. Louis Collins. CerebrA, registrationand manual label correction of Mindboggle-101 atlas for MNI-ICBM152 template. SciData , 7(1):237, 2020.Han Peng, Weikang Gong, Christian F. Beckmann, Andrea Vedaldi, and Stephen M. Smith.Accurate brain age prediction with lightweight deep neural networks. Medical ImageAnalysis , 68:101871, 2021.Esther Puyol-Ant ́ on, Bram Ruijsink, Stefan K. Piechnik, Stefan Neubauer, Steffen E. Pe-tersen, Reza Razavi, and Andrew P. King. Fairness in Cardiac MR Image Analysis: AnInvestigation of Bias Due to Data Imbalance in Deep Learning Based Segmentation. InMedical Image Computing and Computer Assisted Intervention – MICCAI 2021 , LectureNotes in Computer Science, pages 413–423, Cham, 2021. Springer International Publish-ing.Laleh Seyyed-Kalantari, Haoran Zhang, Matthew B. A. McDermott, Irene Y. Chen, andMarzyeh Ghassemi. Underdiagnosis bias of artificial intelligence algorithms applied tochest radiographs in under-served patient populations. Nat Med , 27(12):2176–2182, 2021.Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Vi ́ egas, and Martin Wattenberg.SmoothGrad: removing noise by adding noise. June 2017. URL http://arxiv.org/abs/1706.03825 . arXiv: 1706.03825.Emma A. M. Stanley, Matthias Wilms, and Nils D. Forkert. Disproportionate SubgroupImpacts and Other Challenges of Fairness in Artificial Intelligence for Medical ImageAnalysis. 
In Ethical and Philosophical Issues in Medical Imaging, Multimodal Learning and Fusion Across Scales for Clinical Decision Support, and Topological Data Analysis for Biomedical Imaging, Lecture Notes in Computer Science, pages 14–25, Cham, 2022a. Springer Nature Switzerland.
Emma A. M. Stanley, Matthias Wilms, Pauline Mouches, and Nils D. Forkert. Fairness-related performance and explainability effects in deep learning models for brain image analysis. JMI, 9(6):061102, 2022b.
Tiejian Wu, Pauline Mendola, and Germaine M. Buck. Ethnic differences in the presence of secondary sex characteristics and menarche among US girls: the Third National Health and Nutrition Examination Survey, 1988-1994. Pediatrics, 110(4):752–757, 2002.
kNQvCJC0fad | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023Towards Robust Computation of Cardiothoracic Ratio fromChest X-rayMatilde Bodritti1,2∗matildebodritti98@gmail.com1Agfa Radiology Solutions, Agfa NV, Belgium2Ghent University, Ghent, BelgiumAdriyana Danudibroto1adriyana.danudibroto@agfa.comJan Aelterman3,4,5jan.aelterman@ugent.be3Ghent University Centre for X-Ray Tomography (UGCT), Proeftuinstraat 86/N12, Ghent, Belgium4IPI-TELIN-IMEC, Ghent University, Ghent, Belgium5Radiation Physics Research Group, Department of Physics and Astronomy, Ghent University,Proeftuinstraat 86, Ghent, BelgiumAbstractThe cardiothoracic ratio (CTR) plays an important role in early detection of cardiac en-largement related diseases in chest X-ray (CXR) examinations. Since its measurementwould be time-consuming, its evaluation in clinical practice is done by a visual assessment:it is highly subjective and its robustness is undermined by some acquisition issues such aslung clipping or patient orientation variation. No work addressing the problem of clippedlungs in the CTR estimation has been found in the literature. For these reasons, aiming fora robust method, we firstly proposed a segmentation-based approach for automatic mea-surement of the CTR (based only on the lung segmentation mask) able to handle clippedanatomy cases. Secondly, the proposed method was validated on a large dataset allowingus to corroborate earlier research results with manual CTR computation in which the meanCTR increases with the age of the patients and there is a noticeable difference betweenmen and women’s CTR. Lastly, a new rotational invariant metric was proposed, showingit to be more robust to different patient orientations.Keywords: cardiothoracic ratio, chest anatomy segmentation, chest X-ray1. IntroductionChest X-ray (CXR) is the most commonly performed diagnostic X-ray examination. How-ever, its low diagnostic sensitivity (when compared to cross-sectional techniques) needs tobe counterbalanced by an accurate and time-consuming radiologist interpretation. This canbe helped by computer-aided technologies which instead of outputting directly the diseaseinferred from the CXR, they can output some measurements (as objective as possible) thatwill help the clinician to formulate the diagnosis. An example of objective measurement isthe cardiothoracic ratio (CTR): a screening tool to evaluate the size of the heart’s silhouetteand thus the presence of cardiomegaly from CXR. The theoretical definition of the CTRinvolves calculating the ratio between the maximum horizontal heart diameter (Dheart)and the maximum horizontal thoracic diameter (Dthorax). In the literature, almost all ap-proaches to automatically extract the CTR have the underlying assumption that the images∗The work was conducted during an internship and M.Sc. training at 1 and 2.©2023 M. Bodritti, A. Danudibroto & J. Aelterman.Bodritti Danudibroto Aeltermanare taken from correct acquisitions. However, if part of the lung area is outside image’s fieldof view, the measurement of Dthorax can be affected: this is one issue to take into accountfor a robust estimation. For this reason, we choose to explore the computation of CTR incase of clipped anatomy. Even if multiple CXR datasets are publicly available, only few ofthem has lung and heart masks annotations: we choose to extract heart shape informationfrom only the contour of the lungs, unlike most of the works in literature that rely on bothheart and lungs mask segmentations (Gupte et al., 2021).2. 
Materials and MethodsStarting with the CXR image, the lung segmentation mask is extracted. From the lungsegmentation mask, both Dheart and Dthorax are extracted and the CTR is calculated.Dheart is defined as the maximum horizontal distance between the two lungs, above thevertex of the cardiophrenic angle, as shown in Figure 1.A. For the segmentation task wemodified the U-Net with variational autoencoder by Selvan et. al (Selvan et al., 2020). Themodification allows an output with a field of view 128 pixels wider on each side than theinput image to handle cases in which anatomy is partially clipped out of the image. Theimplementation details and dataset used can be found on the GitHub repository.The proposed method was then applied to a large dataset to be validated. These typesof population studies are usually difficult to carry out on a large scale, because of the needof clear and structured radiologists annotations for each CXR. The automatic calculation ofCTR can make this process faster and easily accessible. A subset of the CheXpert dataset(Irvin et al., 2019) was selected, resulting in 25,369 CXRs with theoretically normal valuesof CTRs, comprised of 34% female and 66% male, from 18 to 90-year-old.Since the evaluation of the CTR is used in everyday clinical practice, we also wanted toevaluate the robustness of this metric. The previously described CTR calculation methodis highly dependent on the orientation of the performed acquisition and for this reason, adifferent metric, strongly related to CTR has been proposed: the rotational invariant CTR(RICTR). This method also involved the estimation of heart’s contour and the detailsof the implementation can be found on the GitHub repository. Dheart is now defined asthe diameter of the maximum circle inscribed in the heart masks, while Dthorax is themaximum horizontal width of the rotated lungs, as shown on Figure 1.B. The orientationof the lung mask is derived by finding the major axis of the mask. Then, it is oriented to 0degrees to obtain consistent orientation. The performance of this method have been testedon lung and heart masks from clipped and non-clipped test sets in terms of absolute error,root mean square error and correlation coefficient.3. Results and DiscussionAs a baseline, the CTR calculated using the segmentation model by Selvan et. al (Sel-van et al., 2020), resulted in an absolute error of 0.074 ±0.090 on a test set with clippedlungs. The proposed method for CTR estimation reported an absolute error of 0.058 ±0.057 on the same test set. The performance of the proposed method was in the same order2Towards Robust Computation of Cardiothoracic Ratio from Chest X-rayof magnitude compared to other state-of-the-art method that computes CTRs from lungssegmentations (Dallal et al., 2017).The CTR values obtained by the application of the method on the CheXpert subset areshown on Figure 1.C. The variation of mean CTR with age and gender reflects the obser-vations of previous studies based on manually annotated CTRs (Brakohiapa et al., 2021).They reported a significant difference in the overall CTR between men and women, witha slightly higher mean CTR and a higher increase in mean CTR values for women as ageincreases. Both trends are reflected in our results. 
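As a rough sketch of the two measurements defined in Section 2 (not the authors' implementation), the snippet below takes Dthorax as the widest horizontal extent of a binary lung mask and Dheart as the widest horizontal gap between the two lungs, and, for the rotational-invariant variant, obtains the heart diameter from the maximum inscribed circle of a heart mask via a Euclidean distance transform. The cardiophrenic-angle constraint, the widened field of view for clipped anatomy, and the lung-axis rotation are deliberately omitted.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def ctr_from_lung_mask(lung_mask):
    """Approximate CTR from a 2D binary lung mask (rows run superior -> inferior).

    Dthorax: widest horizontal span covered by the lungs on any row.
    Dheart : widest horizontal gap between the two lungs on any row (the paper
             restricts this to rows above the cardiophrenic angle, which this
             simplified sketch does not reproduce).
    """
    d_thorax, d_heart = 0, 0
    for row in lung_mask.astype(bool):
        cols = np.flatnonzero(row)
        if cols.size < 2:
            continue
        d_thorax = max(d_thorax, cols[-1] - cols[0] + 1)
        d_heart = max(d_heart, int(np.diff(cols).max()) - 1)   # inter-lung gap in pixels
    return d_heart / d_thorax if d_thorax else float("nan")

def ri_heart_diameter(heart_mask):
    """Dheart for the rotational-invariant CTR: diameter of the largest circle
    inscribed in a binary heart mask (twice the peak of the distance transform)."""
    return 2.0 * distance_transform_edt(heart_mask).max()
```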
This suggests that the proposed methodcould be suitable for such population studies.Moreover, the proposed RI CTR shows a much higher correlation coefficient (CC) withthe RI CTR calculated from ground truth segmentation masks when compared to the CTR,and it also shows lower absolute error (AE) and root mean square error (RMSE). For theCTR method we reported a CC of 0.558, an AE of 0.062 ±0.059 and a RMSE of 0.086,while for the RI CTR method we obtained a CC of 0.754, an AE of 0.024 ±0.021 and aRMSE of 0.031. The lower errors identified for the RI CTR method compared to the CTRmethod may indicate that it is indeed much more robust as a metric, yet future research isneeded to establish RI CTR as an alternative clinical metric.Figure 1: A)Illustration of CTR calculation from lung segmentation mask. B)Illustrationof RI CTR calculation from lung and heart segmentation masks. The yellowarrow represents the major axis of the lung. C)Predicted CTR as a function ofpatient age on men and women’s CXRs from CheXpert subset.ReferencesEdmund Kwakye Brakohiapa et al. Gender and age differences in cardiac size parametersof ghanaian adults: Can one parameter fit all? part two. Ethiopian Journal of HealthSciences , 31(3), 2021.3Bodritti Danudibroto AeltermanAhmed H Dallal, Chirag Agarwal, et al. Automatic estimation of heart boundaries andcardiothoracic ratio from chest x-ray images. In Medical Imaging 2017: Computer-AidedDiagnosis , volume 10134, 2017.Tanveer Gupte et al. Deep learning models for calculation of cardiothoracic ratio from chestradiographs for assisted diagnosis of cardiomegaly. In 2021 International Conference onArtificial Intelligence, Big Data, Computing and Data Communication Systems , 2021.Jeremy Irvin et al. Chexpert: A large chest radiograph dataset with uncertainty labelsand expert comparison. In Proceedings of the AAAI conference on artificial intelligence ,volume 33, 2019.Raghavendra Selvan et al. Lung segmentation from chest x-rays using variational dataimputation, 2020.4 |
YEMH26an2bM | Medical Imaging with Deep Learning 2023Transforming Radiology Workflows: Pretraining forAutomated Chest X-ray Report GenerationShashank Gupta∗shashank.gupta@uky.eduYuhang Jiang∗yuhang.jiang@uky.eduAbdullah-Al-Zubaer Imran aimran@uky.eduUniversity of Kentucky, Lexington, KY, USAAbstractAutomated chest X-ray report generation using machine learning has emerged as apromising technology for improving the accuracy and efficiency of chest X-ray interpretation.In this paper, we present a novel approach for automated report generation that combinesthe power of vision transformers for image information encoding and PubMedBERT for textdecoding. Our model extracts image features using a vision transformer and text featuresusing PubMedBERT. The encoded features are then fed into a text decoder to generatestandardized reports. We trained our model on a dataset of chest X-rays and correspondingreport findings (a subset of the MIMIC-CXR dataset) and evaluated its performance on asmall subset of the IU dataset.Keywords: Chest X-ray, BLIP, PubMedBERT, ViT, Pre-Training.1. IntroductionChest X-rays are widely used for diagnosing chest-related conditions but require specializedexpertise for interpretation, which can be time-consuming and subject to errors. Manuallywriting every report is also costly, prone to variability, and may delay treatment. Healthcareprofessionals may interpret the same image differently, leading to inconsistent diagnoses anddelays in treatment. Recent advancements in machine learning may improve the efficiencyand accuracy of chest X-ray interpretation by automating the report-generating processwhich can be helpful for reducing the workload of radiologists and facilitating quick diagnoses.This could reduce wait times for patients, minimize errors, and make interpretation moreaccessible while also being cost-effective.In this research paper, we present a novel machine-learning model for generating chestX-ray reports. Our model utilizes a vision transformer (ViT) (Dosovitskiy et al., 2021)to extract features from the chest X-ray images, followed by a text decoder to generatestandardized reports. The reports include key features of the X-ray image, such as lungfunction, the presence of any abnormalities, and a differential diagnosis based on the identifiedfeatures. We train our model on a subset of the MIMIC-CXR dataset of chest X-rays andcorresponding reports and evaluated it on the IU dataset.Similar work has been performed by researchers in the past. (Wu et al., 2022) presentsDeltaNet for automatically generating medical reports which applies a conditional generationprocess. (Najdenkoska et al., 2021) proposes variational topic inference for automatic reportgeneration by introducing a set of topics as latent variables to guide sentence generationby aligning image and language modalities in a latent space. (Liu et al., 2021) proposes aContrastive Attention (CA) model for X-ray report generation.∗Contributed equally©2023 CC-BY 4.0, S. Gupta, Y. Jiang & A.-A. Imran.Gupta Jiang ImranFigure 1: The encoder-decoder architecture of our image-generation framework2. MethodologyWe adopt the architecture from BLIP (Li et al., 2022), a bootstrapping language-imagemodel pre-trained for both understanding-based and generation-based objectives. The modelpre-training takes the input of pairs of images and the corresponding text and afterwardgenerates radiology reports for given X-ray images. 
The framework uses a multimodalmixture of encoder-decoder (MED) model architecture that enables effective multi-taskpre-training. MED can operate as a unimodal encoder, an image-grounded text encoder, oran image-grounded text decoder. Our model is jointly pre-trained with three vision-languagelosses: image-text contrastive learning, image-text matching, and image-conditioned languagemodeling. For image-text contrastive learning loss and image-text matching loss, we followby (Li et al., 2021). We use the language modeling loss which is a cross-entropy loss fortraining the model to maximize the likelihood of the next token in the text.2.1. LossesIn this work, we pre-train a BLIP model from the beginning with X-ray images andcorresponding findings from reports. Our model architecture employs a visual transformer(ViT) as its image encoder, which divides an input image into patches and encodes themas a sequence of embeddings. Additionally, a [CLS] token is used to represent the globalimage feature. The text encoder is initialized with PubMedBERT(Gu et al., 2020) which ispre-trained on PubMed abstracts. To serve as a framework for text generation, our modelreplaces the bi-directional self-attention layers in PubMedBERT with causal self-attentionlayers that can operate as an image-grounded text decoder. At inference time, our modelprovides an encoder-decoder architecture for generating radiology reports with a given X-rayimage, which is shown in Figure 1.2Pretraining for Automated Chest X-ray Report GenerationTable 1: Scores calculated on IU dataset.Method Model Jaccard ROUGE-2 ROUGLE-l METEORFine TunedDeltaNet-3C(Wu et al., 2022) - - 0.379 -TieNet (Wang et al., 2018) - - 0.226 -CvT-212DistilGPT2(Nicolson et al., 2022) - - 0.376 0.200Zero Shot ViT + PubMedBERT (ours) 0.14 0.07 0.24 0.233. Experiments and ResultsWe utilize the MIMIC-CXR dataset (Mechanical Ventilation, Vital Signs, and Clinical DataChest X-Ray)(Johnson et al., 2019), which is a large, publicly available dataset of chestX-ray images with corresponding radiology reports to pre-train our model. We sampled fromthe original dataset, and the resulting dataset consists of 12,676 images with correspondingreports. To create captions for our images, we use the findings from the report, rather thanthe impression, as they provide a more objective description. We select frontal X-ray imagesfor pre-training as they contain more informative features. Our model was trained usingimage-text contrastive loss, image-text matching loss, and language modeling loss, with thesame objective as BLIP to improve language generation. We use ViT-base as our imageencoder and PubMedBERT-base as our text decoder. We resize all the images to 224 X 224and pre-train our model for 100 epochs with batch size and initial learning rate of 3e-4 with3000 warm-up steps.To evaluate the performance of our pre-trained model, we selected the IU dataset (OpenI)which contains 3,307 frontal images and corresponding findings which can be obtained fromKaggle. We then compared the system-generated findings with the original findings in thereports and calculated various metrics, including Jaccard similarity, ROUGE, and METEORscores to measure the accuracy and quality of the generated reports. The scores are displayedin Table 1 compared to other fiine tuned methods.4. ConclusionsOur pre-trained model is aimed at generating X-ray reports, which can be helpful for reducingthe workload of radiologists and facilitating quick diagnoses. 
To this end, we employed theBLIP architecture, which is known for its high accuracy and efficiency. The image encoderwe used is a Vision Transformer, which has shown promising results in computer visiontasks, while the language encoder we used is PubMedBERT, a pre-trained language modelspecifically designed for biomedical applications.While our current pre-trained model has shown some promise, its performance is limiteddue to the small size of the pre-training dataset. However, we believe that using the fullMIMIC-CXR dataset for pre-training will greatly improve our model’s performance andaccuracy.By utilizing the full MIMIC-CXR dataset, which will provide us with a much larger andmore diverse set of training data, we hope to achieve higher accuracy and more robustness inour model, which will make it a more useful tool for radiologists and medical professionals.3Gupta Jiang ImranReferencesAlexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai,Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly,Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers forimage recognition at scale, 2021.Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, TristanNaumann, Jianfeng Gao, and Hoifung Poon. Domain-specific language model pretrainingfor biomedical natural language processing, 2020.Alistair EW Johnson, Tom J Pollard, Seth J Berkowitz, Nathaniel R Greenbaum, Matthew PLungren, Chih-ying Deng, Roger G Mark, and Steven Horng. MIMIC-CXR, a de-identifiedpublicly available database of chest radiographs with free-text reports. Scientific data , 6(1):317, 2019.Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, CaimingXiong, and Steven Hoi. Align before fuse: Vision and language representation learningwith momentum distillation, 2021.Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-imagepre-training for unified vision-language understanding and generation, 2022.Fenglin Liu, Changchang Yin, Xian Wu, Shen Ge, Ping Zhang, and Xu Sun. Contrastiveattention for automatic chest X-ray report generation. In Findings of the Associationfor Computational Linguistics: ACL-IJCNLP 2021 , pages 269–280, Online, August 2021.Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-acl.23. URLhttps://aclanthology.org/2021.findings-acl.23 .Ivona Najdenkoska, Xiantong Zhen, Marcel Worring, and Ling Shao. Variational topicinference for chest X-ray report generation, 2021.Aaron Nicolson, Jason Dowling, and Bevan Koopman. Improving chest x-ray reportgeneration by leveraging warm-starting, 2022.OpenI. Indiana university - chest X-rays (png images). URL https://openi.nlm.nih.gov/faq.php .Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, and Ronald M Summers. Tienet: Text-image embedding network for common thorax disease classification and reporting in chestx-rays. In Proceedings of the IEEE conference on computer vision and pattern recognition ,pages 9049–9058, 2018.Xian Wu, Shuxin Yang, Zhaopeng Qiu, Shen Ge, Yangtian Yan, Xingwang Wu, YefengZheng, S. Kevin Zhou, and Li Xiao. DeltaNet: Conditional medical report gen-eration for COVID-19 diagnosis. In Proceedings of the 29th International Confer-ence on Computational Linguistics , pages 2952–2961, Gyeongju, Republic of Korea,October 2022. International Committee on Computational Linguistics. URL https://aclanthology.org/2022.coling-1.261 .4 |
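For reference, the Jaccard score reported in Table 1 of this paper can be computed as a simple token-set overlap between generated and reference findings; the lower-casing and word-level tokenisation below are assumptions, since the exact preprocessing is not specified, and ROUGE/METEOR would normally come from an off-the-shelf evaluation package.

```python
import re

def jaccard_similarity(generated: str, reference: str) -> float:
    """Token-set Jaccard index between a generated and a reference report."""
    gen = set(re.findall(r"[a-z0-9]+", generated.lower()))
    ref = set(re.findall(r"[a-z0-9]+", reference.lower()))
    if not gen and not ref:
        return 1.0
    return len(gen & ref) / len(gen | ref)

print(jaccard_similarity(
    "No focal consolidation or pleural effusion is seen.",
    "No focal consolidation, pleural effusion, or pneumothorax."))   # prints 0.666...
```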
rfZokeg6UMV | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionImage Entropy and Numeric Representation for MRISemantic SegmentationAuthor name(s) withheld email(s) withheldAddress withheldEditors: Under Review for MIDL 2023AbstractDeep learning has made major strides in medical imaging segmentation in the last severalyears for its automated feature extraction. This model fitting process is susceptible toover-fitting, and can benefit from sparsity. Here, we show theoretical and experimentalpotential of using low-entropy images as sparse input to improve deep learning driventissue segmentation, using tumor and heart segmentation problems as exemplary cases.Keywords: Segmentation, Numerical Representation, Image Entropy, Deep-Learning, U-Net, MRI1. IntroductionDeep neural networks have taken center stage for their ability to take highly complex dataas input, learn their own feature representation, and successfully converge to a solution. Be-cause of this, manual feature engineering techniques are largely ignored in contexts wheredeep neural networks have proven successful. However, the convolutional layers often doingthe automated feature engineering remain high variance models [(Menart, 2020)]. Thereforeit stands to reason that in situations with few samples relative to a very large number ofmodelling parameters (as is often the case in medical imaging) that more effective trainingcould be achieved with some manual feature engineering. Here, we propose reducing thenumeric range of the input as a way to have feature/signal ”sparsity” without hamperingthe network’s automated feature engineering. We argue that sparsity is akin to entropy,and that we can reduce image entropy by constraining the numerical input range of theimages. We study the effect of reducing input range with a couple of standard medicalimaging segmentation problems, firstly tumor segmentation, and afterwards, for compar-ison, segmentation of the left atrium of the heart. We do not present these methods asa catch-all for reducing the image entropy or improving all medical imaging/segmentationproblems, but as a proof-of-principal that when working with deep learning architecturesand small datasets, controlling the entropy of the input image will have an effect on modelperformance and should be considered.2. TheoryWe define sparsity as the ability to concentrate the energy function of a signal or model inas few coefficients as possible. Energy refers to a function in a Gibbs measure, E(x), whichmoves the space of states to real numbers. As the space of states, or energy, is concentratedin fewer coefficients the signal is considered to be more compressible and therefore inherently©2023 CC-BY 4.0, Anonymous.withheldsparser. This is made clear with the following two examples adapted from (Pastor et al.,2015). Consider the random variable Xε{x1, x2}with probability distribution p= (p1, p2).First, assume p1> p 2, with p1, p2>= 0 and ||p||= 1. Then if p1increases, it is obviousthen x1is more likely to appear, so the compressiblity, or sparsity, of pincreases, and theuncertainty, or entropy, of Xmust decrease. Second, assume, p= (p1, p2) = (1 ,0), so thedistribution represents a constant random variable. If p2increases at all, then x1is no longerthe unique possible outcome and the compressibility of pmust decrease with the increase inuncertainty in x. We can now conclude that reducing the image entropy results in a sparserepresentation of the most predicable intensity values. 
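To make the link between numeric range and entropy concrete, the sketch below estimates the Shannon entropy of an image's intensity histogram under three representations analogous to those used later in the Materials and Methods (a high-bit-depth reference, a 0-255 rescaling, and a z-score-then-truncate reduction); the synthetic Gaussian "image" and the use of truncation toward zero are illustrative assumptions.

```python
import numpy as np

def shannon_entropy(values):
    """Shannon entropy (in bits) of the empirical intensity distribution."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
img16 = rng.normal(1000.0, 200.0, size=(128, 128))   # stand-in for a 16-bit MRI slice

# 8-bit-style reduction: min-max normalise to 0-255 and round to integers.
img8 = np.round(255 * (img16 - img16.min()) / (img16.max() - img16.min()))

# 3-bit-style reduction: z-score, then truncate to integers (few possible states).
img3 = np.trunc((img16 - img16.mean()) / img16.std())

for name, img in [("reference", img16), ("8-bit", img8), ("z-score/truncate", img3)]:
    print(f"{name:>17}: {shannon_entropy(img):5.2f} bits, {np.unique(img).size} states")
```

Fewer representable states directly bound the entropy of the input, which is the sense in which the reduced representations are sparser.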
Effectively a smaller input space willhelp control model variability by placing a strong inductive bias over the data, namely thata solution must be found in a low information environment, and to achieve this the inputdata space should be low information with few possible states.3. Material and methods3.1. DataBrain tumor tissue segmentation presents a particularly challenging problem, and a usefultesting ground for experimental methodologies. We used the 2021 BraTS publicly availabletraining dataset. All details regarding the dataset can be found in the latest BraTS summa-rizing paper [(Bakas et al., 2018)]. Each data point was saved as one of three input types:a reference 16-bit image, then a simple numeric reduction normalizing all values between0-255 for an 8-bit image, and an extreme reduction to 3-bits of information, by z-scoringthe image and truncating to the nearest integer. For the second example, we used the heartsegmentation task from the Medical Imaging Decathlon. The target region of interest forthe task is the left atrium of the heart. The data was originally acquired as part of the 2013Left Atrial Segmentation Challenge [(Tobon-Gomez et al., 2015)].3.2. Experimental ProcedureAs high variance models, deep neural networks are sensitive to the sampling variabilityof the training set. To address this, we experiment with only simple 3D U-Net models[(C ̧i ̧ cek et al., 2016)] since the model is small it allows efficient permutation testing ofexperimental conditions. And while simple, the 3D U-Net is still the backbone of moststate-of-the-art medical segmentation techniques [(Siddique et al., 2021)], and should stillgive insight on the effect of entropy on the input. We bootstrapped the dataset to give 10permutations for the tumor segmentation, and 19 for the heart segmentation. For both,the training objective function was a simple Dice co-efficient loss between the output andthe target. After building a confidence interval around the model estimates, we apply aone-way ANOVA for each tissue type combined with non-parametric bootstrapping over thedataset to test the mean differences between our experimental conditions while accountingfor sampling effects in the training set. The models were evaluated using the Dice co-efficientscores between the predicted segmentation and the ground truth labels.2Image Entropy and Numeric Representation for MRI Semantic Segmentation4. ResultsAll tumor tissue ANOVAs showed significant differences between input data representations,atp<0.001. On average the lower the input information the better the model preformed.This is visualized in figure 1. The ANOVA to asses the left atrium of the heart segmentationresulted in a modestly significant difference between input types, with p= 0.037, again withthe most reduced input preforming the best, and displayed in figure 2.5. ConclusionWe shows that reducing image entropy may help with complex segmentation tasks (tumor),but is of less use when the task is already simple (heart). Despite the aforementionedlimitations, this is an exciting result that deserves further investigation as it could be lowhanging fruit to improve data-hungry segmentation methods in medical imaging.Figure 1: Bootstrap generated tissue map segmentation estimates on the independent testdata. Lower input information greatly increased the corresponding model’s Dicecoefficient estimates.Figure 2: Bootstrap generated left atrium segmentation estimates on the independent testdata. 
Lowering the input information mildly increased Dice coefficient estimates.3withheldReferencesSpyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessan-dro Crimi, Russell Takeshi Shinohara, Christoph Berger, Sung Min Ha, Martin Rozycki,et al. Identifying the best machine learning algorithms for brain tumor segmentation, pro-gression assessment, and overall survival prediction in the brats challenge. arXiv preprintarXiv:1811.02629 , 2018. ̈Ozg ̈ un C ̧i ̧ cek, Ahmed Abdulkadir, Soeren S Lienkamp, Thomas Brox, and Olaf Ronneberger.3d u-net: learning dense volumetric segmentation from sparse annotation. In Interna-tional conference on medical image computing and computer-assisted intervention , pages424–432. Springer, 2016.Christopher Menart. Evaluating the variance in convolutional neural network behaviorstemming from randomness. In Automatic Target Recognition XXX , volume 11394, page1139410. International Society for Optics and Photonics, 2020.Giancarlo Pastor, Inmaculada Mora-Jim ́ enez, Riku J ̈ antti, and Antonio J Caamano. Mathe-matics of sparsity and entropy: Axioms core functions and sparse recovery. arXiv preprintarXiv:1501.05126 , 2015.Nahian Siddique, Sidike Paheding, Colin P Elkin, and Vijay Devabhaktuni. U-net andits variants for medical image segmentation: A review of theory and applications. IeeeAccess , 9:82031–82057, 2021.Catalina Tobon-Gomez, Arjan J Geers, Jochen Peters, J ̈ urgen Weese, Karen Pinto, RashedKarim, Mohammed Ammar, Abdelaziz Daoudi, Jan Margeta, Zulma Sandoval, et al.Benchmark for algorithms segmenting the left atrium from 3d ct and mri datasets. IEEEtransactions on medical imaging , 34(7):1460–1473, 2015.4 |
A--Xy77jTa | Medical Imaging with Deep Learning 2023Data-Free One-Shot Federated Regression:An Application to Bone Age AssessmentZhou Zheng1,∗zzheng@mori.m.is.nagoya-u.ac.jpYuichiro Hayashi1yhayashi@mori.m.is.nagoya-u.ac.jpMasahiro Oda1moda@mori.m.is.nagoya-u.ac.jp1Nagoya University, JapanTakayuki Kitasaka2kitasaka@aitech.ac.jp2Aichi Institute of Technology, JapanKensaku Mori1,3,∗kensaku@is.nagoya-u.ac.jp1Nagoya University, Japan3National Institute of Informatics, JapanAbstractWe consider a novel problem setting: data-free one-shot federated regression. This settingaims to prepare a global model through a single round of communication without relying onauxiliary information, e.g., proxy datasets. To address this problem, we propose a practicalframework that consists of three stages: local training, data synthesizing, and knowledgedistillation, and demonstrate its efficacy with an application to bone age assessment. Weconduct validation under independent and identical distribution (IID) and non-IID settingswhile considering both model homogeneity and heterogeneity. Validation results show thatour method surpasses FedAvgOneShot by a large margin and sometimes even outperformsthe proxy-data-dependent approach FedOneShot .Keywords: Federated learning, Regression, One-shot, Data-free.1. IntroductionOne-shot federated learning (FL) (Guha et al., 2019) has emerged as a potential solutionto address concerns regarding the costly inter-node communication and possible privacyleakage in standard FL methods, as it allows for only a single global round between clientsand the central server. While one-shot FL methods typically require additional sources likeproxy datasets for global model training, recent advances in data-free one-shot FL (Zhanget al., 2022; Luz-Ricca et al., 2023) overcomes this limitation, eliminating the need foradditional datasets. Nevertheless, current FL methods have predominantly focused onclassification tasks, with a limited exploration of regression problems.Motivated by these observations, we consider a novel problem setting: data-free one-shot federated regression. Inspired by the work (Zhang et al., 2022) that proposed forclassification, we present a practical framework specialized for regression, which comprisesthree stages: local training ,data synthesizing , andknowledge distillation (KD), andevaluate it with a bone age assessment task (Halabi et al., 2019). Our method is the firstattempt in this setting, and validation results demonstrate its efficacy.∗Send correspondence to Zhou Zheng or Kensaku Mori. This work was supported by JSPS KAKENHIGrant Numbers 21K19898 and 17H00867 and JST CREST Grant Number JPMJCR20D5, Japan.©2023 CC-BY 4.0, Z. Zheng, Y. Hayashi, M. Oda, T. Kitasaka & K. Mori.Zheng Hayashi Oda Kitasaka Mori2. MethodGenerally, let there be Klocal clients {Ci}Ki=1with each client holding a private datasetDi={xi, yi}, where xiare images, and yiare ground truth, e.g., bone ages in our study.First stage: local training. Each client Citrains its local model Mi(·,Θi) with theprivate dataset Diand uploads the model weight to the central server after training.Second stage: data synthesizing. We adopt a generator G(·,Θg) to synthesize im-ages. In our study, bone ages range from 1 to 228 months, and we assume yfollows a discreteuniform distribution p(y) over the set of {1,2,3, ...,228}. To train G(·,Θg), we first samplea batch of random noise vectors z∼N(0,I) and a batch of random values ˆ y∼p(y). 
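The objective for this stage, assembled over the next few sentences from a similarity term on the ensembled local predictions, a batch-norm feature regulariser, and a disagreement term against the global model, can be sketched in PyTorch roughly as follows. The generator, local models, and global model are placeholder modules rather than the authors' code, and the feature-statistics term L_feat is omitted because it requires hooks into the local models' batch-normalisation layers.

```python
import torch

def generator_step(generator, local_models, global_model, optimizer,
                   batch_size=32, latent_dim=128, beta=0.1):
    """One data-synthesising update (L_feat omitted for brevity).

    The local models and the global model stay frozen; only the generator's
    parameters are held by `optimizer`, so gradients simply flow through the
    fixed networks back to the synthetic images.
    """
    z = torch.randn(batch_size, latent_dim)                    # z ~ N(0, I)
    y_hat = torch.randint(1, 229, (batch_size, 1)).float()     # bone ages ~ U{1, ..., 228}

    x_hat = generator(z)                                       # synthetic images
    preds = torch.stack([m(x_hat) for m in local_models])      # (K, B, 1)
    ensemble = preds.mean(dim=0)                               # E(x_hat)

    l_sim = ((ensemble - y_hat) ** 2).mean()                   # match the sampled ages
    l_dis = -((ensemble - global_model(x_hat)) ** 2).mean()    # reward disagreement
    loss = l_sim + beta * l_dis

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```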
After-ward, we input ztoG(·,Θg) to get a batch of produced images ˆ x=G(z,Θg). Next, we feedˆ xinto local models {Mi(·,Θi)}Ki=1to get predictions {Mi(ˆ x,Θi)}Ki=1. By adopting the basicensemble scheme (Mendes-Moreira et al., 2012), we get the ensembled result of local modelsE(ˆ x) =PK1Mi(ˆ x,Θi)/K. We calculate a loss Lsim(E(ˆ x),ˆy) =∥E(ˆ x)−ˆy∥2to expect ˆ xfollowing a similar distribution to x. Besides, to improve the quality of ˆ x, we adopt a featuredistribution regularization term Lfeat(Yin et al., 2020), which enforces feature-level similar-ity and is defined as Lfeat(ˆ x) =PKk=1Pl∥μk,l(ˆ x)−μk,l(x)∥2+∥σk,l(ˆ x)−σk,l(x)∥2/K,where μk,l(·) and σk,l(·) denote the mean and variance of features of l-th batch normaliza-tion layer for Mi(·,Θi). In addition, to ensure G(·,Θg) generates more diverse images, wepropose Ldisto encourage disagreement between local models {Mi(·,Θi)}Ki=1and the globalmodel S(·,Θs), which is written as Ldis(ˆ x) =−∥E(ˆ x)−S(ˆ x,Θs)∥2. To conclude, the to-tal training objective of G(·,Θg) isLgen(ˆ x,ˆy) =Lsim(E(ˆ x),ˆy) +λLfeat(ˆ x) +βLdis(ˆ x).We set λto 0.5 and βto 0.1. Note that local and global models are fixed at this stage.Third stage: knowledge distillation. We update the global model S(·,Θs) byknowledge transfer. Specifically, the fixed generator first synthesizes a batch of imagesˆ xwhen feeding a batch of random noise vectors z. Then ˆ xare input into local mod-els to get ensembled prediction E(ˆ x). We finally utilize a loss Lkd(E(ˆ x), S(ˆ x,Θs)) =∥E(ˆ x)−S(ˆ x,Θs)∥2to enforce the similarity between E(ˆ x) and S(ˆ x,Θs).3. Experiments, Results, and ConclusionsDataset and metric. We applied the public dataset RNSA-BAA (Halabi et al., 2019),which contains 12,611/1,425/200 hand radiographs for training/validation/testing. We re-ported the mean absolute difference (MAD) results on the test set based on three runs.Experimental setup. We maintained four local clients. We divided the training setinto four subsets with bone age values falling within four ranges, as shown in Figure 1(a).To simulate an IID setting among clients, we ensured that each client received a similarnumber of images within the same bone age range by randomly extracting 1/4 of the datafrom each subset without repetition and assigning them to individual clients, as shownin Figure 1(b). Conversely, to form a non-IID setting, we distributed one subset to oneclient, as illustrated in Figure 1(c). We also considered model homogeneity and hetero-geneity. Thus, we introduced four different settings: (1) homo-IID : model homogeneitywith IID. (2) homo-non-IID : model homogeneity with non-IID. (3) hetero-IID : modelheterogeneity with IID. (4) hetero-non-IID : model heterogeneity with non-IID.2Data-Free One-Shot Federated Bone Age Assessment6.35%27.65%57.28%8.71%(a) data division#A: [1, 60)#B: [60, 120)#C: [120, 180)#D: [180, 228]0 1 2 3client IDs#A#B#C#Ddistributions(b) IID0 1 2 3client IDs#A#B#C#Ddistributions(c) non-IIDFigure 1: Details of experimental setup:(a) Training set was divided into four sub-sets with bone age values falling within fourranges. (b) Simulated independent and iden-tically distributed (IID) setting. (c) Simu-lated non-IID setting. Size of each red circleis proportional to number of samples.Table 1: Quantitative comparison among dif-ferent methods under four different settings.Centralization represents the upper-boundaccuracy derived by centralized training. 
Re-sults are reported as average (standard de-viation) on test set based on three runs.‘↓’: lower values of mean absolute difference(MAD) indicate better performance. ‘-’: re-sults are not applicable.method MAD ↓centralization 10.15 (0.46)homo-IID homo-non-IID hetero-IID hetero-non-IIDFedAvg 11.68 (0.48) 36.80 (1.23) - -FedAvgOneShot 62.15 (2.75) 68.35 (2.75) - -FedNoisyKD 116.41 (1.78) 116.46 (1.57) 117.15 (2.07) 117.59 (2.44)FedOneShot 59.92 (3.31) 46.55 (1.69) 55.87 (2.67) 46.29 (1.10)Ours 42.65 (4.40) 49.49 (5.45) 58.52 (30.01) 52.60 (5.98)Baselines. We compared our scheme with FedAvg (McMahan et al., 2017) and itsone-shot version FedAvgOneShot (averaging model weights after local training). We alsoimplemented a scheme that used random noise images for KD, and we abbreviated itasFLNoisyKD . In addition, we realized FedOneShot (Guha et al., 2019) using a publicdataset (Pietka et al., 2001) as the proxy dataset for KD.Implementation details. When considering model homogeneity, all clients usedResNet34 (He et al., 2016). For model heterogeneity, we applied ResNet34, ResNet50,and two variants (WRN-16-10, WRN-40-6) based on Wide-Resnets (Zagoruyko and Ko-modakis, 2016). The global model was always adopted as ResNet34. We applied the Adamoptimizer. Local models were trained for 200 epochs using a poly-learning rate with aninitial value of 10−4. We set a learning rate of 10−3and the latent dimension of zto trainthe generator to 128. We set an initial learning rate of 10−4to train the global modeland decayed it to 10−6. The generator and the global model were trained in a loop of 120rounds, and at each round, we trained the generator for 40 epochs and the global model for1 epoch. We set the batch size to 32. All images were resized to a size of 224 ×224 pixels.Experiment results. As illustrated in Table 1, Centralization represents the upper-bound accuracy derived by centralized training. We can observe a noticeable accuracy gapbetween FedAvg with IID and non-IID, indicating that FedAvg is also sensitive to non-IIDin regression, similar to classification (Hsu et al., 2019). Then let us focus on one-shot FLmethods. Limiting by a single global round, FedAvgOneShot achieves much larger MADvalues than FedAvg .FedNoisyKD uses random noise images for KD, leading to the worstperformance. FedOneShot , which conducts KD with a public dataset, achieves overall thebest results. Compared to FedOneShot , our method outperforms it under the setting ofhomo-IID and realizes competitive accuracy under the other three settings. This suggeststhat our approach has the potential to synthesize images comparable to authentic imagesfor KD, eliminating the requirements for proxy datasets.Conclusions. This paper made a first attempt to explore data-free one-shot FL in re-gression. Our method demonstrated its efficacy in this setting. Future work may investigateimproving image generation and apply the proposed method to more regression tasks.3Zheng Hayashi Oda Kitasaka MoriReferencesNeel Guha, Ameet Talwalkar, and Virginia Smith. One-shot federated learning. arXivpreprint arXiv:1902.11175 , 2019.Safwan S Halabi, Luciano M Prevedello, Jayashree Kalpathy-Cramer, Artem B Mamonov,Alexander Bilbily, Mark Cicero, Ian Pan, Lucas Ara ́ ujo Pereira, Rafael Teixeira Sousa,Nitamar Abdala, et al. The rsna pediatric bone age machine learning challenge. Radiology ,290(2):498–503, 2019.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning forimage recognition. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
Tzu-Ming Harry Hsu, Hang Qi, and Matthew Brown. Measuring the effects of non-identical data distribution for federated visual classification. arXiv preprint arXiv:1909.06335, 2019.
Emilio Luz-Ricca, Clare Elizabeth Heinbaugh, and Huajie Shao. Data-free one-shot federated learning under very high statistical heterogeneity. In International Conference on Learning Representations, 2023.
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pages 1273–1282, 2017.
Joao Mendes-Moreira, Carlos Soares, Alípio Mário Jorge, and Jorge Freire De Sousa. Ensemble approaches for regression: A survey. ACM Computing Surveys (CSUR), 45(1):1–40, 2012.
Ewa Pietka, Arkadiusz Gertych, Sylwia Pospiech, Fei Cao, HK Huang, and Vicente Gilsanz. Computer-assisted bone age assessment: Image preprocessing and epiphyseal/metaphyseal ROI extraction. IEEE Transactions on Medical Imaging, 20(8):715–729, 2001.
Hongxu Yin, Pavlo Molchanov, Jose M Alvarez, Zhizhong Li, Arun Mallya, Derek Hoiem, Niraj K Jha, and Jan Kautz. Dreaming to distill: Data-free knowledge transfer via DeepInversion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8715–8724, 2020.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
Jie Zhang, Chen Chen, Bo Li, Lingjuan Lyu, Shuang Wu, Shouhong Ding, Chunhua Shen, and Chao Wu. Dense: Data-free one-shot federated learning. In Advances in Neural Information Processing Systems, 2022.
rUpjCWd0BB | Medical Imaging with Deep Learning – Accepted 2023 Short Paper – MIDL 2023Nearest Neighbor Radiomics for Self-Supervised Chest X-rayPneumonia IdentificationCailin Winston∗1,2cailinw@cs.washington.eduCaleb Winston∗1calebwin@cs.stanford.eduChloe Winston2,3chloe.winston@pennmedicine.upenn.edu1Department of Computer Science, University of Washington, Seattle (UW),2Department of Bio-chemistry, UW,3Department of Neuroscience, UWAbstractSelf-supervised training minimizes a contrastive loss objective for unlabeled data. Con-trastive loss estimates the distance in the latent space between positive pairs, which arepairs of images that are expected to have the same label. For medical images, choosingpositive pairs is challenging because simple transformations like rotations or blurs are notclass-invariant. In this paper, we show that choosing positive pairs with nearest-neighborradiomics features for self-supervised training improves chest X-ray pneumonia identifica-tion accuracy by 8 .4% without labeled data.Keywords: Contrastive Learning, Self-Supervised Learning, Radiomics, Chest X-Ray,Pneumonia Identification1. IntroductionMany diseases affecting the lungs, such as pneumonia, can be diagnosed by human analysisof chest X-rays. However, variability in radiologists’ interpretations has motivated the useof deep learning models (Neuman et al., 2012) for automatic disease identification. Becausethese models require large amounts of labeled training data (Saraiva et al., 2019), methodssuch as transfer learning (Kundu et al., 2021) and contrastive learning (Han et al., 2021)are promising. For example, NNCLR maximizes the similarity between latent embeddingsof positive pairs of images (nearest neighbors in embedding space) (Dwibedi et al., 2021).Although successful for natural image classification, NNCLR and other contrastive learningtechniques do not directly extend to medical image classification, because visually similaror geometrically transformed medical images can have profoundly different pathology.Thus, we propose an approach for self-supervised training in medical imaging that maxi-mizes similarity in latent embedding space between different images that have nearest neigh-boring radiomics features. Radiomics reduces an image to a set of biologically meaningfuland radiologist-interpretable features (Tomaszewski and Gillies, 2021). We hypothesize thatself-supervised training with nearest-neighbor radiomics will learn latent embeddings thatreflect variation in radiomics features, which better predict pathology. In this paper, wediscuss our approach and evaluate it on chest X-ray pneumonia identification.2. MethodsWe propose using nearest-neighbor radiomics (NN-radiomics) to identify positive pairs forself-supervised training of chest X-ray classification models. The end-to-end methodology is(1) pretraining on a general labeled dataset such as ImageNet, (2) self-supervised pretrainingwith NN-radiomics and an algorithm for self-supervised learning such as SimSiam, and (3)supervised fine-tuning for the specific task such as pneumonia identification.∗Equal contribution©2023 CC-BY 4.0, C. Winston, C. Winston & C. Winston.Winston Winston WinstonDataset of Chest X-Ray ImagesLung SegmentationMasksRadiomics Features[0.832, 0.0012, ...][0.202, 0.0048, ...][0.496, 0.0008, ...]Pairs of Images withNearest NeighborRadiomics Features2. Existing Self-SupervisedPretraining Algorithm (e.g., SimSiam)1. SupervisedPretrainingDataset of Natural Images (e.g., ImageNet)3. 
SupervisedFinetuning(a) End-to-End Methodology for Training Chest X-RayModels with NN-Radiomics SSL.Mini-Batch of Chest X-RayImagesImages w . NearestNeighbor Radiomicsencoder f encoder fpredictor hsimilaritystop-grad(b) Architecture for SimSiam-based NN-radiomics SSL.Figure 1: Nearest-Neighbor Radiomics (NN-Radiomics) Self-Supervised Learning (SSL)2.1. Self-supervised Pretraining with Nearest-Neighbor RadiomicsWe use an off-the-shelf image segmentation model to extract the lungs from each chest X-rayimage (Selvan et al., 2020). Then, we compute standard radiomics features using PyRa-diomics (Ferreira Junior, 2021). We used 94 our of the 120 available features (excluded”shape-based” features). For each unlabeled image, we then find its nearest neighbor-ing image using the computed radiomics features. This produces a set of positive pairswith nearest neighboring radiomics. The model is then pretrained using any existing self-supervised algorithm, such as SimSiam (Chen and He, 2021) - which we used - or BYOL.2.2. Model ArchitectureThe model architecture (Figure 1( b)) for chest X-ray classification consists of an encoderand a classifier (not pictured). The encoder is a backbone model (ResNet18 (Kaiming Heand Sun, 2016) pretrained on ImageNet) with a projection MLP head and the classifieris a linear layer with an output size of 1. The parameters of the encoder are learned inthe self-supervised pretraining step, and the classifier is trained on frozen features from thebackbone component of the encoder during the supervised finetuning step.2.3. Experimental SetupWe evaluated self-supervised training with NN-radiomics on a chest X-ray model for pneu-monia detection. We used a binary pneumonia identification dataset of 5856 chest X-rays(Kermany, 2018) that we randomly split into pretraining, finetuning, testing, and validationsplits in a 7:2:0.9:0.1 ratio. The model was pretrained for 20 epochs and finetuned for 60.3. ResultsRQ1: Is NN-radiomics a high-quality indicator of task-specific positive pairs?We found that in a dataset of 4172 chest X-rays, 85 .69% of nearest neighbor radiomics(NN-radiomics) positive pairs have the same label (pneumonia vs. control). Furthermore,65.91% with pneumonia have the same type of pneumonia (viral vs. bacterial). The high2True Positive Pair with Bacterial Pneumonia True Positive Pair with V iral PneumoniaTrue Positive Pair with Control False Positive Paircontrol viralFigure 2: Examples of positive pairs with nearest neighboring radiomics features.percentage of NN-radiomics positive pairs with same labels motivates learning similar latentembeddings via contrastive learning for NN-radiomics positive pairs.Table 1: Accuracy on Pneumonia IdentificationMethod Accuracy (F1) AUROCBaseline 0.7996 0.9313Pretraining w. Random Positive Pairs 0.8141 0.9311Pretraining w. 1st-NN-Radiomics Positive Pairs 0.8669 0.9517Pretraining w. 10th NN-Radiomics 0.8557 0.9572Pretraining w. 50th NN-Radiomics 0.8381 0.9537RQ2: Can self-supervised pretraining with NN-radiomics boost accuracy? Ourresults in Table 1 demonstrate a notable boost in accuracy by pretraining with NN-radiomics-based contrastive learning compared to the baseline of no pretraining.RQ3: How does nearness of radiomics positive pairs affect accuracy? We con-duct an ablation on nwhere the nth-nearest radiomics pairs are used for self-supervisedlearning. The results in Table 1 demonstrate that accuracy degrades as the positive pairsused are farther in radiomics space.4. 
ConclusionWe present an approach to pretraining chest X-ray models without labeled data by usingpositive pairs that have nearest neighboring radiomics features. Our results demonstratea notable improvement in pneumonia identification accuracy through self-supervised pre-training of chest X-ray models using nearest-neighbor radiomics.AcknowledgmentsWe thank Dr. Linda Shapiro, Professor of Computer Science and Engineering at the Uni-versity of Washington for her support.3Winston Winston WinstonReferencesXinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceed-ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages15750–15758, 2021.Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, and Andrew Zisser-man. With a little help from my friends: Nearest-neighbor contrastive learning of visualrepresentations. In Proceedings of the IEEE/CVF International Conference on ComputerVision , pages 9588–9597, 2021.Cardona Cardenas D. A. Moreno R. A. de S ́ a Rebelo M. F. Krieger J. E. Gutierrez M. A.Ferreira Junior, J. R. Novel chest radiographic biomarkers for covid-19 using radiomicfeatures associated with diagnostics and outcomes. Journal of digital imaging , 34(2):297–307, 2021.Yan Han, Chongyan Chen, Ahmed Tewfik, Ying Ding, and Yifan Peng. Pneumonia detec-tion on chest x-ray using radiomic features and contrastive learning. In 2021 IEEE 18thInternational Symposium on Biomedical Imaging (ISBI) , pages 247–251. IEEE, 2021.Shaoqing Ren Kaiming He, Xiangyu Zhang and Jian Sun. Deep residual learning for imagerecognition. CVPR, 2016.Kang; Goldbaum Michael Kermany, Daniel; Zhang. Labeled optical coherence tomography(oct) and chest x-ray images for classification. 2018.Rohit Kundu, Ritacheta Das, Zong Woo Geem, Gi-Tae Han, and Ram Sarkar. Pneumoniadetection in chest x-ray images using an ensemble of deep learning models. PloS one , 16(9):e0256630, 2021.Mark I Neuman, Edward Y Lee, Sarah Bixby, Stephanie Diperna, Jeffrey Hellinger, RichardMarkowitz, Sabah Servaes, Michael C Monuteaux, and Samir S Shah. Variability in theinterpretation of chest radiographs for the diagnosis of pneumonia in children. Journalof hospital medicine , 7(4):294–298, 2012.Arat ̃ a Andrade Saraiva, D. B. S. Santos, Nator Junior C. Costa, Jos ́ e Vigno Moura Sousa,Nuno M. Fonseca Ferreira, Ant ́ onio Valente, and Salviano Soares. Models of learning toclassify x-ray images for the detection of pneumonia using neural networks. In BIOIMAG-ING, 2019.Raghavendra Selvan, Erik B. Dam, Nicki Skafte Detlefsen, Sofus Rischel, Kaining Sheng,Mads Nielsen, and Akshay Pai. Lung segmentation from chest x-rays using variationaldata imputation. ICML Workshop on The Art of Learning with Missing Values, July2020. arXiv preprint arXiv:2020.2005.10052.Michal R Tomaszewski and Robert J Gillies. The biological meaning of radiomic features.Radiology , 298(3):505–516, 2021.4 |
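Looking back at Section 2.1 of this paper, the positive-pair construction can be sketched as follows, assuming the radiomics feature vectors (e.g., the 94 PyRadiomics features per image) have already been computed and, ideally, standardised; scikit-learn's NearestNeighbors is used purely for illustration. Passing n = 10 or n = 50 reproduces the style of ablation reported in RQ3.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def radiomics_positive_pairs(features, n=1):
    """Index of the n-th nearest neighbour in radiomics space for every image.

    features : (num_images, num_features) array of precomputed radiomics features.
    n        : 1 for nearest-neighbour pairs, larger n for the RQ3-style ablation.
    """
    nn = NearestNeighbors(n_neighbors=n + 1).fit(features)
    _, indices = nn.kneighbors(features)
    return indices[:, n]   # column 0 is each image itself (distance 0)

rng = np.random.default_rng(0)
feats = rng.random((100, 94))                   # stand-in for real radiomics features
pairs = radiomics_positive_pairs(feats, n=1)    # pairs[i] is image i's positive partner
print(pairs[:10])
```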
TyA5AyU_tSv | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionShape Equivariant Learning for Robust MRI SegmentationAinkaran Santhirasekaram1a.santhirasekaram19@imperial.ac.ukMathias Winkler2m.winkler@imperial.ac.ukAndrea Rockall2a.rockall@imperial.ac.ukBen Glocker1b.glocker@imperial.ac.uk1Department of Computing, Imperial College London, United Kingdom2Department of Surgery and Cancer, Imperial College London, United KingdomEditors: Under Review for MIDL 2023AbstractThe reliability of deep learning based segmentation models is essential to the safe trans-lation of these models into clinical practise. Unfortunately, these models are sensitive todistributional shifts. This is particularly notable in MRI, where there is a large variationof acquisition protocols across different domains leading to varying textural profiles. Wehypothesise that the constrained anatomical variability across subjects can be leveraged todiscretize the latent space to a dictionary of shape components. We achieve this by usingmultiple MRI sequences to learn texture invariant and shape equivariant features whichare used to construct a shape dictionary using vector quantisation. This dictionary is thensampled to compose the segmentation output. Our method achieves SOTA performancein the task of single domain generalisation (SDG) for prostate zonal segmentation.Keywords: Shape Equivariance, Robustness, Segmentation, MRI1. IntroductionMagnetic resonance imaging involves a complex acquisition process which differs acrosssubjects and domains. This can lead to varying textural profiles and artefacts. Deep learningbased segmentation models are however not robust to textural shifts and unencounteredartefacts at test time. Domain generalisability for deep learning has been traditionallytackled through augmentation based strategies such as CutOut (DeVries and Taylor, 2017)and BigAug (Zhang et al., 2020). AdvBias (Chen et al., 2020) is an adversarial technique forMRI data which learns to generate bias field deformations to improve model robustness forsegmentation. RandConv (Xu et al., 2020) which is perhaps the most related work, attemptsto learn textural invariant features by using a randomised convolutional input layer. Here,we propose an alternative method to learn shape equivariant features based on the principlethat in MRI, T2 weighted images and ADC maps calculated from diffusion weighted imagingcontain the exact same spatial information and only differ in their textural profiles. Thereis anatomical consistency across subjects meaning there is reduced spatial variation in thesegmentation outputs. Therefore, we propose to constrain the latent space to a dictionary ofshape components which is sampled to construct the segmentation output. We hypothesisethis will improve the generalisability of any segmentation model which maps the input space,Xto a lower dimensional embedding space, Eusing an encoder, Φ ebefore mapping to thesegmentation output, Ywith a decoder, Φ d. This is achieved using vector quantisation(Van Den Oord et al., 2017) of the shape equivariant features to create a discrete shape©2023 CC-BY 4.0, A. Santhirasekaram, M. Winkler, A. Rockall & B. Glocker.Santhirasekaram Winkler Rockall GlockerT2 Encoder Sample N128ADC EncoderShared W eightsComposeShape DictionarySampled ComponentsFigure 1: Overview of our method demonstrating using the ADC map to learn shape equiv-ariant features which is quantised to construct a shape dictionary, D.space. 
We assume the dictionary is complete and sufficient to capture the entire distribution of segmentation outputs after composition of the discrete shape space using the decoder. We evaluate the capability of our method to improve domain generalisability in the task of prostate zonal segmentation with two labels (transitional and peripheral zone) when training on a single domain.

2. Method

We start with the image input, which is the T2-weighted image x ∈ R^(1×256×256×24), and apply an intensity transformation T_i, which is equivalent to acquiring the ADC map. We also apply a spatial transformation T_s to the ADC map which involves rotations. Specifically, we apply transformations from the dihedral group (D4), which consists of 90-degree rotation in the z plane and 180-degree rotation in the y plane. The order of this group is 8, so we create 8 transformations per sample during training. The T2 image and the spatially transformed ADC map are passed through an encoder to produce their respective embeddings, z_1 and z_2, as shown in Figure 1. Shape equivariance and texture invariance are enforced by satisfying equation 1.

Φ_e(T_s(T_i(x))) = T_s(z_1)    (1)

Therefore, we minimise the contrastive loss L_contr = ∥T_s(z_1) − z_2∥_2^2. Note that a contrastive loss only theoretically learns equivariance to the 8 spatial transformations applied per sample; it does not constrain the convolutional layers to the D4 group. We assume an approximate equivariance to the D4 group by using our contrastive loss.

L_Quant = (1/m) Σ_{i=0}^{m−1} [ ∥sg(z_{1i}) − e_k∥^2 + β ∥z_{1i} − sg(e_k)∥^2 ]    (2)

We quantise z_1 ∈ R^(128×16×16×12) using vector quantisation by dividing z_1 into 16×16×12 components and replacing each component of z_1, denoted z_{1i}, with its nearest dictionary component e_k ∈ D, where k = argmin_j ∥z_{1i} − e_j∥_2. This produces the discrete shape latent space ẑ, which is input to the decoder to construct the segmentation output. The quantisation loss minimises the Euclidean distance between z_{1i} and its nearest component e_k ∈ D, as shown in equation 2. Stop gradients (sg) are applied to the corresponding operand. We compute the Dice loss between the output ŷ and the T2 segmentation label y. The total loss for training our framework is L_total = L_dice(ŷ, y) + L_contr + L_quant. Note that only T2-weighted images are required as input during inference.

Table 1: Dice score and Hausdorff distance (HD) ± standard deviation for different SDG methods compared to our approach.
        Baseline    CutOut      BigAug      AdvBias     RandConv    Ours
Dice    0.51±0.13   0.53±0.17   0.63±0.15   0.56±0.13   0.59±0.15   0.64±0.11
HD      0.40±0.11   0.37±0.19   0.25±0.12   0.33±0.15   0.29±0.08   0.23±0.10

3. Experiments and Results

Dataset: The training set comprises the Prostate dataset obtained from the Medical Segmentation Decathlon (Antonelli et al., 2022), consisting of 32 T2-weighted and ADC images captured at the Radboud University Nijmegen Medical Centre (RUNMC). We use the 30 T2-weighted images in the NCI-ISBI13 Challenge (Bloch et al., 2015), which were acquired from Boston Medical Centre (BMC), for our test set. All images are centre-cropped to 256×256×24 and normalised between 0 and 1.

Baseline Model and Comparison: We use a hybrid 2D/3D UNet as our baseline model in order to deal with the anisotropic prostate MRI images. The encoder and decoder are made up of 5 levels, consisting of 2D pre-activation residual blocks in the top 4 levels and a 3D pre-activation residual block in the bottleneck level. We use the same encoder and decoder architecture for our method.
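Before the baseline comparison that follows, a minimal PyTorch-style sketch of the Section 2 training objective may help readers re-implement it. It is written under stated assumptions rather than taken from the authors' code: encoder, decoder and the (K, 128) codebook are placeholders, the spatial transform is shown for a single 90-degree axial rotation, and beta is the usual VQ commitment weight.

```python
# Minimal sketch of the combined Dice + contrastive-equivariance + VQ objective.
import torch
import torch.nn.functional as F

def d4_rot90_z(x, k):
    # k quarter-turns in the axial (H, W) plane of a (B, C, H, W, D) tensor.
    return torch.rot90(x, k, dims=(2, 3))

def soft_dice_loss(logits, target_onehot, eps=1e-6):
    p = torch.softmax(logits, dim=1)
    inter = (p * target_onehot).sum(dim=(2, 3, 4))
    denom = p.sum(dim=(2, 3, 4)) + target_onehot.sum(dim=(2, 3, 4))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

def training_loss(encoder, decoder, codebook, t2, adc, target_onehot, k=1, beta=0.25):
    # Contrastive equivariance term (eq. 1 / L_contr): match T_s(z1) against z2.
    z1 = encoder(t2)                        # e.g. (B, 128, 16, 16, 12)
    z2 = encoder(d4_rot90_z(adc, k))
    loss_contr = F.mse_loss(d4_rot90_z(z1, k), z2)

    # Vector quantisation of z1 against a (K, 128) codebook (eq. 2 / L_quant).
    c = z1.shape[1]
    zf = z1.permute(0, 2, 3, 4, 1).reshape(-1, c)
    idx = torch.cdist(zf, codebook).argmin(dim=1)
    e = codebook[idx]
    loss_quant = F.mse_loss(zf.detach(), e) + beta * F.mse_loss(zf, e.detach())

    # Straight-through estimator so the encoder still receives gradients.
    z_hat = zf + (e - zf).detach()
    z_hat = z_hat.reshape(z1.shape[0], *z1.shape[2:], c).permute(0, 4, 1, 2, 3)
    logits = decoder(z_hat)
    return soft_dice_loss(logits, target_onehot) + loss_contr + loss_quant
```

In practice, the rotation index k would be drawn from the 8 elements of the D4 group for each sample, as described above.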
We compare our method to the following SDGmethods applied to the baseline model: CutOut (DeVries and Taylor, 2017), BigAug (Zhanget al., 2020), AdvBias (Chen et al., 2020) and RandConv (Xu et al., 2020). All models weretrained for up to 500 epochs using Adam optimisation with a learning rate of 0.001.Results and Discussion: In Table 1, we show that our method outperforms other SDGmethods in terms of the Dice score and Hausdorff distance. We therefore show that one canimprove the domain generalisability of a segmentation model in an anatomical segmentationtask by constraining the latent space to a finite set of shape components.In future work, we will enforce D4 group equivariant convolutional layers by applyingtransformations from the D4 group to the filters themselves to create 8 transformed filtersfrom each convolutional kernel. We will also constrain the convolutional kernels such thatthey are equivariant to other groups such as the SO(3) or SE(3) group as well as develop amethod for SE(3) group equivariant vector quantisation.AcknowledgmentsThis work was supported and funded by Cancer Research UK (CRUK) (C309/A28804)3Santhirasekaram Winkler Rockall GlockerReferencesMichela Antonelli, Annika Reinke, Spyridon Bakas, Keyvan Farahani, Annette Kopp-Schneider, Bennett A Landman, Geert Litjens, Bjoern Menze, Olaf Ronneberger,Ronald M Summers, et al. The medical segmentation decathlon. Nature communica-tions , 13(1):4128, 2022.Nicholas Bloch, Anant Madabhushi, Henkjan Huisman, John Freymann, Justin Kirby,Michael Grauer, Andinet Enquobahrie, Carl Jaffe, Larry Clarke, and Keyvan Farahani.Nci-isbi 2013 challenge: automated segmentation of prostate structures. The CancerImaging Archive , 370:6, 2015.Chen Chen, Chen Qin, Huaqi Qiu, Cheng Ouyang, Shuo Wang, Liang Chen, GiacomoTarroni, Wenjia Bai, and Daniel Rueckert. Realistic adversarial data augmentation formr image segmentation. In International Conference on Medical Image Computing andComputer-Assisted Intervention , pages 667–677. Springer, 2020.Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neuralnetworks with cutout. arxiv 2017. arXiv preprint arXiv:1708.04552 , 2017.Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advancesin neural information processing systems , 30, 2017.Zhenlin Xu, Deyi Liu, Junlin Yang, Colin Raffel, and Marc Niethammer. Robust andgeneralizable visual representation learning via random convolutions. arXiv preprintarXiv:2007.13003 , 2020.Ling Zhang, Xiaosong Wang, Dong Yang, Thomas Sanford, Stephanie Harmon, Baris Turk-bey, Bradford J Wood, Holger Roth, Andriy Myronenko, Daguang Xu, et al. Generalizingdeep learning for medical image segmentation to unseen domains via deep stacked trans-formation. IEEE transactions on medical imaging , 39(7):2531–2540, 2020.4 |
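The filter-level D4 equivariance mentioned as future work can be pictured in a few lines; the snippet below only illustrates generating the 8 dihedral copies of a 2D convolution kernel and is not the authors' planned implementation.

```python
# Illustration: the 8 elements of the dihedral group D4 applied to a 2D kernel.
import torch

def d4_orbit(weight: torch.Tensor) -> torch.Tensor:
    """weight: (out_ch, in_ch, k, k) convolution kernel.
    Returns an (8, out_ch, in_ch, k, k) stack: the 4 rotations of the kernel
    and the 4 rotations of its mirror image, all sharing one set of parameters."""
    rots = [torch.rot90(weight, r, dims=(-2, -1)) for r in range(4)]
    flipped = torch.flip(weight, dims=(-1,))
    rots += [torch.rot90(flipped, r, dims=(-2, -1)) for r in range(4)]
    return torch.stack(rots)

kernel = torch.randn(16, 8, 3, 3)
bank = d4_orbit(kernel)
assert bank.shape == (8, 16, 8, 3, 3)
```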
5063TZgHfQm | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionDeep Learning Regression of Cardiac Phase on Real-TimeMRISamira Masoudi smasoudi@ucsd.eduAmin Mahmoodi amahmoodi@health.ucsd.eduHafsa S. Babar Hbabar@health.ucsd.eduAlbert Hsiao alhsiao@health.ucsd.eduUniversity of California San Diego, San Diego, CA, USAEditors: Under Review for MIDL 2023AbstractCine steady-state free-precession (SSFP) is the backbone of cardiac MRI, providing visu-alization of cardiac structure and function over the cardiac cycle, but requires concurrentECG-gating to combine k-space data over multiple heart beats. However, cine SSFP islimited by a number of factors including arrhythmia, where beat-to-beat variability causesimage artifacts. Real-time (RT) SSFP and recent innovations in image reconstructionprovides a new potential alternative, capable of acquiring images without averaging overmultiple heart beats. However, analysis of cardiac function from this image data can becomplex, requiring retrospective analysis of function over multiple cardiac cycles and slices.We propose a deep learning regression method to facilitate cardiac phase detection, lever-aging synthetic training approach from historical cine SSFP image data, and evaluate theeffectiveness of this approach for detecting cardiac phase on RT SSFP images, manuallylabeled by expert readers. This combined approach using RT SSFP may have multiplepotential advantages over traditional cine SSFP for evaluating cardiac function in patientswith arrhythmia or difficulty tolerating long breath holds.Keywords: Real-time (RT) steady-state free-precession (SSFP), data synthesizing, cardiacphase regression.1. IntroductionCine steady-state free-precession (SSFP) serves as the backbone of cardiovascular mag-netic resonance imaging, and enables the quantitative assessment of left ventricular (LV)structure and function. Cine SSFP however, requires retrospective cardiac-gating with anelectrocardiogram (ECG) to be recorded over multiple heartbeats and breath-holds (Wanget al., 2021). Because MRI signals are averaged over multiple RR-intervals using the ECG,beat-to-beat variations are obscured and image quality can be degraded by arrhythmia,in addition to patient motion or respiration. As an alternative, real-time (RT) SSFP,performed without ECG-gating, has the potential to address these limitations. However,quantitative analysis of RT SSFP images is time-consuming, as it requires identificationof cardiac phase over multiple beats. (Chen et al., 2021; Rehman et al., 2022). To tacklethis, we propose a semi-supervised deep learning strategy to automate identification of end-diastolic (ED) and end-systolic (ES) image frames for each short-axis slice across the lengthof the left ventricle, and facilitate estimation of ventricular volume and function.©2023 S. Masoudi, A. Mahmoodi, H.S. Babar & A. Hsiao.Masoudi Mahmoodi Babar Hsiao2. MethodsIn this IRB-approved, HIPAA-compliant study, we trained a convolutional neural network(CNN) to identify the ED and ES frames in successive cardiac cycles from RT SSFP imagesobtained in routine clinical care. Rather than using manual labels of cardiac phases on RTSSFP images, we instead elected to apply a synthetic training strategy (Masutani et al.,2020) to mimic RT SSFP images using historical short axis cine SSFP images from 241cardiac MRI exams, previously labeled with cardiac phases. 
Since cine SSFP images areacquired with higher spatial and temporal resolution than RT SSFP, images were spatiallyand temporally downsampled to simulate real-time acquisitions. To provide an estimator ofproximity to ED and ES phases, we applied a temporal Gaussian convolution ( σ= 75ms)to ground truth ED and ES cardiac phase landmarks. The Gaussian-convolved proximityestimates were then used as labels for regression.Dataset The data used for training included images from 241 cardiac MRIs, specificallyshort-axis cine SSFP images with temporal resolution ranging from 37 ±12.5msand 256 ×256 spatial resolution. Data were split at a patient-level 80% −10%−10% into training,validation, and test sets. In addition, the proposed CNN was evaluated using an independentset of 8 SAX RT SSFP acquisitions, obtained from a separate cohort of patients, whichwere annotated for ED and ES cardiac phases, by one of two physicians-in-training (A.M.and H.S.B), supervised by a board-certified radiologist with over 10 years of experience incardiac MRI. RT SSFP images were obtained with 155 .375±19.583mstemporal resolutionand 512 ×512 spatial resolution.Model Development We used Xception, pre-trained with ImageNet with 3-channelinput (3 temporally successive frames) and a modified 2-channel output for ED and ESwhere each channel implied a 3 ×1 vector of ED and ES proximity estimation. Meanabsolute error loss was used to regress the ED and ES proximity values according to theGaussian ground truth around the ED/ES frames. Model was trained for 120 epochs usingsynthetic dataset and a batch size of 48, where augmention in form of random temporalresolution ( δt∈[120−240ms]), starting point ( t0), and zoom-out took place during thetraining to simulate RT images with varying temporal resolution and larger filed of view.The best model (with lowest validation loss) was used to infer ED and ES from synthtetictest set. Results were averaged over time windows of 3, strides of 1 along the temporalaxis. Thresholded (at 0 .45), the second derivative of averaged Ed/ES predictions signifiedthe elected ED/ES frames.3. ResultsComparing the predictions to ground truth during the inference on the synthetic test set,results in terms of accuracy, and recall of ED, ES frame detection are provided using amaximum of 0-frame or 1-frame leniency (Table 1). Later, algorithm performance wasassessed against the recorded ECG of the RT SAX SSFP images. Figure 1 depicts anexample case with results for 3 ventricular slices along temporal resolution which confirmsthe efficiency of our proposed method to potentially skip the EKG gating, and to be ableto extract R-R intervals from RT images and use the resulted synthetic cine SSFP imagesto evaluate cardiac function in form of a distribution for each cardiac measure rather thana scalar value.2Short TitleTable 1: Results on synthetic test set0-frame (0ms) 1-frame (120 −240ms)Accuracy (ED) 0.905 0.960Recall (ED) 0.824 1.000Accuracy (ES) 0.877 0.957Recall (ES) 0.739 0.984Figure 1: On the top, an exemplar time series of image frames from RT SSFP are shownalong with image labels marking the ground truth annotations and algorithminference. 
On the bottom, results of the deep learning regression algorithm are shown for 3 adjacent ventricular slices, along with ECG signals that were recorded concurrently to serve as an additional ground truth reference point.

Acknowledgments

The authors would like to acknowledge GE Healthcare for their support.

References

Eric Z Chen, Xiao Chen, Jingyuan Lyu, Qi Liu, Zhongqi Zhang, Yu Ding, Shuheng Zhang, Terrence Chen, Jian Xu, and Shanhui Sun. Cardiac functional analysis with cine MRI via deep learning reconstruction. arXiv preprint arXiv:2105.08157, 2021.

Evan M Masutani, Naeim Bahrami, and Albert Hsiao. Deep learning single-frame and multiframe super-resolution for cardiac MRI. Radiology, 295(3):552–561, 2020.

A Rehman, P Kellman, H Xue, I Pierce, RH Davies, M Fontana, and JC Moon. Convolutional neural network transformer (CNNT) for free-breathing real-time cine imaging. European Heart Journal - Cardiovascular Imaging, 23(Supplement 2):jeac141–001, 2022.

Xiaoqing Wang, Martin Uecker, and Li Feng. Fast real-time cardiac MRI: A review of current techniques and future directions. Investigative Magnetic Resonance Imaging, 25(4):252–265, 2021. |
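To make the label construction of the Methods section concrete, the sketch below builds the Gaussian-convolved ED/ES proximity curves (sigma = 75 ms) from annotated frame indices and shows one plausible reading of the second-derivative selection rule; the names, frame spacing and selection details are illustrative rather than the authors' code.

```python
# Sketch (not the authors' code): Gaussian proximity targets around ED/ES frames.
import numpy as np

def proximity_labels(n_frames, event_frames, dt_ms, sigma_ms=75.0):
    """Curve that peaks at 1 at each annotated event and decays as a Gaussian."""
    t = np.arange(n_frames) * dt_ms
    curve = np.zeros(n_frames)
    for f in event_frames:
        curve = np.maximum(curve, np.exp(-0.5 * ((t - f * dt_ms) / sigma_ms) ** 2))
    return curve

# Example: a 60-frame clip at 40 ms/frame with ED annotations at frames 10 and 35.
ed_target = proximity_labels(60, event_frames=[10, 35], dt_ms=40.0)

# Inference-time selection: average predictions over 3-frame windows, then pick
# curvature peaks. The paper quotes a threshold of 0.45 but does not spell out
# the exact rule; a relative threshold is used here only for the toy signal.
pred = ed_target + 0.01 * np.random.randn(60)        # stand-in for model output
smoothed = np.convolve(pred, np.ones(3) / 3, mode="same")
curvature = -np.gradient(np.gradient(smoothed))
elected = np.where(curvature > 0.45 * curvature.max())[0]
```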
lUZGyTRzxq | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionSegment Anything Model (SAM) for Digital Pathology:Assess Zero-shot Segmentation on Whole Slide ImagingRuining Deng∗1r.deng@vanderbilt.edu1Vanderbilt University, Nashville, TN, USACan Cui∗1can.cui.1@vanderbilt.eduQuan Liu∗1quan.liu@vanderbilt.eduTianyuan Yao1tianyuan.yao@vanderbilt.eduLucas W. Remedios1lucas.w.remedios@vanderbilt.eduShunxing Bao1shunxing.bao@vanderbilt.eduBennett A. Landman1bennett.landman@vanderbilt.eduLee E. Wheless2,3lee.e.wheless@vumc.org2Vanderbilt University Medical Center, Nashville, TN, USA3Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN, USAlee.e.wheless@vumc.orgLori A. Coburn2,3lori.coburn@vumc.orgKeith T. Wilson2,3keith.wilson@vumc.orgYaohong Wang2yaohong.wang@vumc.orgShilin Zhao2shilin.zhao.1@vumc.orgAgnes B. Fogo2agnes.fogo@vumc.orgHaichun Yang2haichun.yang@vumc.orgYucheng Tang4yuchengt@nvidia.com4NVIDIA Cooperation, Redmond, WA, USAYuankai Huo†1Yuankai.huo@vanderbilt.eduEditors: Under Review for MIDL 2023AbstractThe segment anything model (SAM) was released as a foundation model for image segmen-tation. The promptable segmentation model was trained by over 1 billion masks on 11Mlicensed and privacy-respecting images. The model supports zero-shot image segmentationwith various segmentation prompts (e.g., points, boxes, masks). It makes the SAM attrac-tive for medical image analysis, especially for digital pathology where the training data arerare. In this study, we evaluate the zero-shot segmentation performance of SAM modelon representative segmentation tasks on whole slide imaging (WSI), including (1) tumorsegmentation, (2) non-tumor tissue segmentation, (3) cell nuclei segmentation. Core Re-sults: The results suggest that the zero-shot SAM model achieves remarkable segmentationperformance for large connected objects. However, it does not consistently achieve satisfyingperformance for dense instance object segmentation, even with 20 prompts (clicks/boxes)on each image. We also summarized the identified limitations for digital pathology: (1) im-age resolution, (2) multiple scales, (3) prompt selection, and (4) model fine-tuning. In thefuture, the few-shot fine-tuning with images from downstream pathological segmentationtasks might help the model to achieve better performance in dense object segmentation.Keywords: segment anything, SAM model, digital pathology, medical image analysis.∗Joint first author: contributed equally†Corresponding author©2023 CC-BY 4.0, R. Deng et al.Deng et al.1. IntroductionLarge language models (e.g., ChatGPT (Brown et al., 2020) and GPT-4 (OpenAI, 2023)),are leading a paradigm shift in natural language processing with strong zero-shot and few-shot generalization capabilities. Segmenting objects (e.g., tumor, tissue, cell nuclei) forwhole slide imaging (WSI) data is an essential task for digital pathology (Huo et al., 2021).The ”Segment Anything Model” (SAM) (Kirillov et al., 2023) was proposed as a founda-tion model for image segmentation. The model has been trained on over 1 billion maskson 11 million licensed and privacy-respecting images. Furthermore, the model supportszero-shot image segmentation with various segmentation prompts (e.g., points, boxes, andmasks). 
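For readers who wish to reproduce the prompting set-up, the snippet below shows the typical point- and box-prompt interface of the released segment-anything package; the checkpoint path, image and coordinates are placeholders, and details may differ across package versions.

```python
# Zero-shot prompting with the released SAM package (paths/coords are placeholders).
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.zeros((1024, 1024, 3), dtype=np.uint8)   # stand-in for an RGB WSI patch
predictor.set_image(image)

# A single positive click (label 1 = foreground, 0 = background) ...
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 512]]),
    point_labels=np.array([1]),
    multimask_output=True,
)

# ... or a box prompt around one instance, in (x0, y0, x1, y1) format.
masks_box, _, _ = predictor.predict(box=np.array([400, 400, 620, 640]))
```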
This feature makes it particularly attractive for pathological image analysis wherethe labeled training data are rare and expensive.In this study, we assess the zero-shot segmentation performance of the SAM modelon representative segmentation tasks, including (1) tumor segmentation (Liu et al., 2021),(2) tissue segmentation (Deng et al., 2023), and (3) cell nuclei segmentation (Li et al.,2021). Our study reveals that the SAM model has some limitations and performance gapscompared to state-of-the-art (SOTA) domain-specific models.2. Experiments and PerformanceWe obtained the source code and the trained model from https://segment-anything.com. To ensure scalable assessments, all experiments were performed directly using Python,rather than relying on the Demo website. The results are presented in Figure 1 and Table1.Tumor Segmentation . We employed SimTriplet (Liu et al., 2021) approach as theSOTA method, with the same testing cohort to make a fair comparison. In order to becompatible with the SAM segmentation model, the WSI inputs were scaled down 80 timesfrom a resolution of 40 ×, resulting in an average size of 860 ×1279 pixels. Tissue Seg-mentation . We employed Omni-Seg (Deng et al., 2023) approach as the SOTA method,with the same testing cohort to make a fair comparison.. The tissue types consist of theglomerular unit (CAP), glomerular tuft (TUFT), distal tubular (DT), proximal tubular(PT), arteries (VES), and peritubular capillaries (PTC). Cell nuclei Segmentation . TheMoNuSeg dataset (Kumar et al., 2019) includes 30 images for training and 14 for testing.We evaluated the performance of SAM models against the BEDs model (Li et al., 2021), acompetitive nuclei segmentation model trained on the MoNuSeg training data.3. Limitations on Digital PathologyThe SAM models achieve remarkable performance under zero-shot learning scenarios. How-ever, we identified several limitations during our assessment.Image resolution . The average training image resolution of SAM is 3300 ×4950 pix-els (Kirillov et al., 2023), which is significantly smaller than Giga-pixel WSI data ( >109pixels). Multiple scales . Multi-scale is a significant feature in digital pathology. Differenttissue types have their optimal image resolution (as shown in Table 1). Prompt selection .To achieve decent segmentation performance in zero-shot learning scenarios, a considerablenumber of prompts are still necessary. Model fune-tuning . A reasonable online/offline2Short TitleImageManualDTPTCAPTissueTUFTVESPTCSOTA1 point20 pointsTotal pointsTotal boxesTumorCellNucleiTumor Regionn/an/an/a0.5x5x5x10x10x10x40x40xNuclei Seg.Negative point promptSegmentationn/aNot availablePositive point promptBox promptFigure 1: Qualitative segmentation results . The SOTA methods are compared withSAM method with different prompt strategies.Table 1: Compare SAM with state-of-the-art (SOTA) methods. (Unit: Dice score)Method PromptsTumor Tissue Cell0.5× 5× 10× 40× 40×Tumor CAP TUFT DT PT VES PTC NucleiSOTA no prompt 71.98 96.50 96.59 81.01 89.80 85.05 77.23 81.77SAM 1 point 58.71 78.08 80.11 58.93 49.72 65.26 67.03 1.95SAM 20 points 74.98 80.12 79.92 60.35 66.57 68.51 64.63 41.65SAM total points n/a 88.10 89.65 70.21 73.19 67.04 67.61 69.50SAM total boxes n/a 95.23 96.49 89.97 86.77 87.44 87.18 88.30total points/boxes: we place points/boxes on every single instance object (based on theknown ground truth) as a theoretical upper bound of SAM. 
Note that it is impractical inreal applications.fine-tuning strategy is necessary to propagate the knowledge obtained from manual promptsto larger-scale automatic segmentation on Giga-pixel WSI data.Acknowledgements .This research was supported by NIH R01DK135597, The Leona M.and Harry B. Helmsley Charitable Trust grant G-1903-03793 and G-2103-05128, NSF CA-3Deng et al.REER 1452485, NSF 2040462, NCRR Grant UL1 RR024975-01 (NCATS Grant 2 UL1TR000445-06), NIH NIDDK DK56942, DoD HT94252310003, the VA grants I01BX004366and I01CX002171, VUMC Digestive Disease Research Center supported by NIH grantP30DK058404, NVIDIA hardware grant, resources of ACCRE at Vanderbilt University.ReferencesTom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, PrafullaDhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Lan-guage models are few-shot learners. Advances in neural information processing systems ,33:1877–1901, 2020.Ruining Deng, Quan Liu, Can Cui, Tianyuan Yao, Jun Long, Zuhayr Asad, R MichaelWomick, Zheyu Zhu, Agnes B Fogo, Shilin Zhao, et al. Omni-seg: A scale-aware dynamicnetwork for renal pathological image segmentation. IEEE Transactions on BiomedicalEngineering , 2023.Yuankai Huo, Ruining Deng, Quan Liu, Agnes B Fogo, and Haichun Yang. Ai applicationsin renal pathology. Kidney international , 99(6):1309–1320, 2021.Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson,Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything.arXiv preprint arXiv:2304.02643 , 2023.Neeraj Kumar, Ruchika Verma, Deepak Anand, Yanning Zhou, Omer Fahri Onder, Efstra-tios Tsougenis, Hao Chen, Pheng-Ann Heng, Jiahui Li, Zhiqiang Hu, et al. A multi-organnucleus segmentation challenge. IEEE transactions on medical imaging , 39(5):1380–1391,2019.Xing Li, Haichun Yang, Jiaxin He, Aadarsh Jha, Agnes B Fogo, Lee E Wheless, Shilin Zhao,and Yuankai Huo. Beds: Bagging ensemble deep segmentation for nucleus segmentationwith testing stage stain augmentation. In 2021 IEEE 18th International Symposium onBiomedical Imaging (ISBI) , pages 659–662. IEEE, 2021.Quan Liu, Peter C Louis, Yuzhe Lu, Aadarsh Jha, Mengyang Zhao, Ruining Deng, TianyuanYao, Joseph T Roland, Haichun Yang, Shilin Zhao, et al. Simtriplet: Simple tripletrepresentation learning with a single gpu. In Medical Image Computing and ComputerAssisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France,September 27–October 1, 2021, Proceedings, Part II 24 , pages 102–112. Springer, 2021.OpenAI. Gpt-4 technical report, 2023.4 |
dI6wYt1qr1o | Medical Imaging with Deep Learning – Accepted 2023 Short Paper – MIDL 2023 submissionA Deep-Learning Based Approach to AccelerateGroundtruth Generation for Biomarker Status Identificationin Chromogenic Duplex ImagesSatarupa Mukherjee satarupa.mukherjee@roche.comQinle Ba qinle.ba@roche.comJim Martin jim.martin@contractors.roche.comYao Nie yao.nie@roche.comRoche Sequencing Solutions, Santa Clara, CA, USAEditors: Accepted for MIDL 2023AbstractImmunohistochemistry based companion diagnosis relies on the examination of single biomark-ers for patient stratification. However, recent years have seen an increasing need to char-acterize the interactions among biomarkers in the tumor microenvironment. To this end,chromogenic multiplexing immunohistochemistry (mIHC) serves as a promising solution,which enables simultaneous detection of multiple biomarkers in the same tissue sections. Toautomate whole-slide scoring for mIHC, a crucial analysis step involves the identification ofcell locations along with their biomarker staining status (presence/absence of positive stain-ing signals), which we call biomarker status identification. However, developing algorithmsfor such analysis, especially deep-learning (DL) models, often requires manual labeling atthe cell-level, which is time-consuming and resource-intensive. Here, we present a DL basedmethod to accelerate groundtruth label generation for chromogenic duplex (tissue samplesstained with two biomarkers) images. We first generated approximate cell labels and thendeveloped a DL based interactive segmentation system to efficiently refine the cell labels.Our method avoided extensive manual labeling and reduced the time of label generationto 50%-25% of manual labeling, while achieving <5% error rate in pathologist review.Keywords: Deep Learning, Biomarker Status, Accelerated Groundtruth Generation1. IntroductionDue to the complex color blending effects (Figure 1(a)) in chromogenic mIHC images, itis challenging for pathologists to visually decouple the staining intensities from multiplebiomarkers and thus they cannot reliably interpret biomarker status from these images.One promising solution to this challenge is to develop quantitative analysis algorithmsbased on deep-learning, to identify the biomarker staining of each cell type of interest andthen assemble cell-level results into whole-slide scoring for biomarker status.Here, we present a novel approach to generate cell-level labels for chromogenic duplexassays. Our method ensures the validity of the obtained labels, while avoiding extensivemanual labeling by expert pathologists, significantly reducing labeling time. This methodcombined approximate cell-level labeling and a generalized DL-based interactive tissue seg-mentation. We validated the proposed method with a duplex assay for PDL1 (Programmeddeath-ligand) and CK7 (Cytokeratin). Notably, the proposed method can be readily appliedto any other duplex assays.©2023 S. Mukherjee, Q. Ba, J. Martin & Y. Nie.Mukherjee Ba Martin Nie2. MethodologyApproximate cell labels - We aimed to label the pixel location of the nucleus centerand the biomarker presence/absence for each cell, targeting five classes: (i) PDL1+CK7+tumor cells, (ii) PDL1+CK7- tumor cells, (iii) PDL1-CK7+ tumor cells, (iv) PDL1-CK7-tumor cells and (v) Other cells. 40 field of views (FOVs) of size 600x600 pixels at 20x mag-nification were selected by a pathologist from 4 Tamra-PDL1/Dabsyl-CK7 duplex slides(lung cancer) to cover a diverse range of biomarker staining intensities. 
We used HALO(Indica Labs HALO image analysis platform) for initial stain unmixing, followed by tissuesegmentation and biomarker status identification. The tissue segmentation was performedfor three classes: (i) Tumor, (ii) Stroma and (iii) Other. The biomarker status identificationwas performed to locate and classify cells into the following four types: (i) PDL1+CK7+(ii) PDL1+CK7- (iii) PDL1-CK7+ (iv) PDL1-CK7-. Instead of HALO, any other machinelearning based interface could also be used for generating these approximate cell labels fromstain unmixed singleplex images.Interactive tissue segmentation - We observed inadequate performance of HALO tissuesegmentation and thus developed an interactive segmentation system, inspired by (Sofiiuket al., 2022). We first trained a DL model that learnt to respond to user input and thendeveloped a GUI to enable users to provide input (mouse clicks) to the model at test time.Unlike existing DL-based interactive segmentation models (Sofiiuk et al., 2022, 2020), whichsegment one target class at a time (binary segmentation), we designed a three-class modelby adding an additional class of user input clicks. We trained our model with HighRes-olutionNet (Wang et al., 2020) as backbone on the Semantic Boundaries Dataset (SBD)(Hariharan et al., 2011) containing 11355 images (8498 for training; 2857 for validation).Three-class model was preferred because (1) pathologists requested that we show the gener-ated cell labels in the aforementioned three types of regions separately to assist their reviewand (2) binary models required two rounds of independent segmentation for three types ofregions, leading to ambiguity in the mask merging phase.Refining approximate cell labels with tissue masks - We first identified the non-tumorcells located in the non-tumor regions (“Stroma” and “Other” in segmentation masks) thatwere stained positive for the biomarkers and were thus erroneously labeled as tumor cellsin approximate cell labels. These cells included CK7+ and PDL1+ non-tumor cells. Were-labeled these identified cells as the fifth cell class, “Other”. In general, such a filteringapproach can be leveraged to refine approximate cell labels as needed.3. ResultsWe observed erroneous segmentation of tumor regions with HALO as well as errors in theapproximate cell labeling: (1) macrophages with moderate/strong membrane staining weredetected as PDL1+ tumor cells; (2) benign epithelial cells with positive Dabsyl stainingwere detected as CK7+ tumor cells; (3) many cells in stroma regions with positive Tamrastaining were detected as PDL1+ tumor cells; (4) some necrotic cells with positive Tamrastaining in the necrotic regions were incorrectly detected as PDL1+ tumor cells.With the designed interactive segmentation system, we generated accurate tissue seg-mentation masks (Figure 1(b)) with only a few clicks per tissue class. We found thatthis system, while trained with natural scene images, could be generalized to chromogenic2Deep Learning based Accelerated Groundtruth GenerationmIHC, because it was trained to respond to user input guided by simulated user clicks tar-geting arbitrarily selected classes of regions. 
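A minimal sketch of the relabeling step described under "Refining approximate cell labels with tissue masks" follows: cells whose nucleus location falls outside the tumor region of the segmentation mask are reassigned to the "Other" class. The array names and class codes are hypothetical.

```python
# Sketch (hypothetical names/codes): refine approximate cell labels with a tissue mask.
import numpy as np

TUMOR, STROMA, OTHER_TISSUE = 1, 2, 3     # tissue-mask codes (assumed)
OTHER_CELL = 4                            # fifth cell class "Other" (assumed)

def refine_cell_labels(cell_xy, cell_class, tissue_mask):
    """cell_xy: (n, 2) integer pixel coordinates of nucleus centers (x, y).
    cell_class: (n,) approximate labels from stain unmixing (tumor classes 0..3).
    tissue_mask: (H, W) segmentation produced by the interactive model."""
    refined = cell_class.copy()
    tissue_at_cell = tissue_mask[cell_xy[:, 1], cell_xy[:, 0]]   # row = y, col = x
    # Any biomarker-positive "tumor" call sitting in stroma or other tissue is
    # re-labeled as the non-tumor "Other" cell class.
    refined[tissue_at_cell != TUMOR] = OTHER_CELL
    return refined
```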
With such tissue masks, incorrect cell labelsfor macrophages, CK7+ cells in non-tumor regions, PDL1+ cells in the stroma and necroticregions were re-labeled as “Other” cell types (Figure 1(c)).To ensure the validity of the cell labels, we first performed stain unmixing (Ruifrokand Johnston, 2001) of the duplex images to generate synthetic CK7 and PDL1 singlepleximages respectively, followed by pathologist scoring within the tumor regions in these im-ages. Three pathologists provided scores and their consensus scores (median of their scores)were compared with the groundtruth scores from the corresponding cell labels, as shownin Figure 2, where vertical bars indicate range of pathologist scores. We observed that thequantification from the generated labels aligned well with pathologists’ scores, demonstrat-ing the effectiveness of the proposed cell-level label generation method. With this labelingapproach, it only took around 15-20 minutes to label an FOV of 600x600 pixels in size,whereas manual annotation took around 45 minutes to 1 hour. With the generated labels,we were able to develop a UNet-based (Ronneberger et al., 2001) model for biomarker statusidentification at cell-level with >90% accuracy, which was confirmed by 3 pathologists.Figure 1: (a) An Example FOV (b) Tissue Segmentation Mask (c) Refined Cell LabelsFigure 2: Comparison between Pathologist Scores and Groundtruth Scores4. ConclusionWe have developed a DL-based method for accelerating cell-level labeling with minimalmanual input. We first generate approximate cell labels and then develop a DL-basedinteractive segmentation system to efficiently refine the cell labels. Our labeling approachis highly effective, efficient and readily applicable to various multiplex assays.3Mukherjee Ba Martin NieReferencesB. Hariharan, P. Arbel ́ aez, L. Bourdev, S. Maji, and J. Malik. Semantic contours frominverse detectors. In ICCV , pages 991–998. IEEE, 2011.O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomed-ical image segmentation. Medical Image Computing and Computer-Assisted Interven-tion–MICCAI , Part III(18):234–241, 2001.A.C. Ruifrok and D.A. Johnston. Quantification of histochemical staining by color decon-volution. Analytical and Quantitative Cytology and Histology , 23(4):291–299, 2001.K. Sofiiuk, I. Petrov, O. Barinova, and A. Konushin. F-brs: Rethinking backpropagat-ing refinement for interactive segmentation. IEEE Conference on Computer Vision andPattern Recognition (CVPR) , page 8623–8632, 2020.K. Sofiiuk, I. Petrov, and A. Konushin. Reviving iterative training with mask guidance forinteractive segmentation. IEEE International Conference on Image Processing (ICIP) ,pages 3141–3145, 2022.J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, Y. Zhao, D. Liu, Y. Mu, M. Tan, X. Wang,W. Liu, and B. Xiao. Deep high-resolution representation learning for visual recognition.IEEE Transactions on PAMI , pages 1–1, 2020.4 |
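The stain-unmixing step used above for validation follows the Ruifrok-Johnston color-deconvolution idea, which can be prototyped with scikit-image as below; the stain optical-density vectors are placeholders and would need to be calibrated for the Tamra/Dabsyl/hematoxylin combination actually used.

```python
# Prototype of Ruifrok-Johnston color deconvolution (placeholder stain vectors).
import numpy as np
from skimage.color import separate_stains

# Rows: optical-density colors of the three stains (values are illustrative only).
stain_od = np.array([
    [0.65, 0.70, 0.29],   # hematoxylin-like counterstain
    [0.07, 0.99, 0.11],   # "Tamra-like" chromogen (placeholder)
    [0.27, 0.57, 0.78],   # "Dabsyl-like" chromogen (placeholder)
])
stain_od = stain_od / np.linalg.norm(stain_od, axis=1, keepdims=True)
stains_from_rgb = np.linalg.inv(stain_od)      # inverse of the rgb-from-stains matrix

rgb = np.random.rand(256, 256, 3)              # stand-in for a duplex image tile
concentrations = separate_stains(rgb, stains_from_rgb)   # (H, W, 3) per-stain maps
```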
Ob7xQXamjo_ | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionBias Field Correction in MRI with Hampel Noise DenoisingDiffusion Probabilistic ModelJunhyeok Lee1201802848@hufs.ac.krJunghwa Kang∗1kangjung9592@gmail.comYoonho Nam1yoonhonam@hufs.ac.kr1Department of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, KOREA,REPUBLIC of KoreaTeaYoung Lee2tylee@pusan.ac.kr2Department of Neuropsychiatry, Pusan National University Yangsan Hospital, Yangsan, Republicof KoreaEditors: Under Review for MIDL 2023AbstractNon-uniform bias field due to external factors hampers quantitative MR image analysis.For reliable quantitative MR image analysis, appropriate correction for the bias field isnecessary. In this study, we propose Hampel denoising diffusion model to effectively correctthe bias field from MR images. Compared with N4 and Gaussian denoising diffusion models,the proposed model provided higher PSNRs, SSIMs and lower MSEs. Higher efficiencycould be achieved compared to N4 when our model takes 9 times faster in inference time.Keywords: Diffusion model, Bias field, intensity inhomogeneity correction, Magnetic res-onance imaging1. IntroductionMagnetic Resonance Imaging (MRI) is a widely used medical imaging modality. But biasfield obscure subtle details and impede accurate identification (Meyer et al., 1995; Vovket al., 2007). N4 (Tustison et al., 2010) has been a commonly used method for correctingthe inhomogeneities, however, this method has limitations in terms of its accuracy, technicalfactor, and efficiency. We propose Hampel Denoising Diffusion Model (HDDnet) conceivedto model inhomogeneities by Cauchy-Lorentz distribution (Borgia et al., 1996). We modeledthe Hampel mixture distribution to represent the image intensity disrupted by the inhomo-geneities. To assess the fitness of Hampel function to the image intensity, the mean fittingerror between the histogram and the probability function was calcuated, shown in Figure 1.The intensity difference between the input image and t-step image of diffusion process wasused in both histogram and the probability function. The mean fitting error is 0 .012 lessin Hampel function showing that it is a much better fit to MRI with the bias field, Figure1A-D (Nachmani et al., 2021). Proposed method effectively corrects the bias field and gener-ates reduced inhomogeneities MRI with higher accuracy and faster inference time than N4.∗Contributed equally©2023 CC-BY 4.0, J. Lee, J. Kang, Y. Nam & T. Lee.Lee Kang Nam LeeFigure 1: (A) Hampel distribution (B) Gaussian distribution (C) Comparing the fit(D) The mean fitting MSE between the histogram and the density function2. Method and ExperimentsMethod : We modeled the Hampel Mixture distribution (Hampel and Zurich, 1998) torepresent the image intensity disrupted by the inhomogeneities. Denote H(α, x 0, γ) as theHampel mixture distribution, where α, x 0, γare weight, location, scale parameter, respec-tively; We use the term Fh(x, α), Fn(x; 0,1), Fc(x;x0, γ) as probability distribution functionof Hampel, Gaussian, and Cauchy-Lorentz, respectively. Hampel function1could be writtenFh(x, α) = (1 −α)Fn(x; 0,1) +αFc(x;x0, γ)with 0≤α≤1Fn(x; 0,1) =1√2πexp(−x22), Fc(x;x0, γ) =1πγ2(x−x0)2+γ2Hampel function was optimized with MLE2(Haynes, 2013). Through maximizing the Ham-pel function, we were able to allocate ( α, x 0, γ) as (1 e−05,0.6332,0.0274). 
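Because the mixture is fully specified by (alpha, x0, gamma), its density and a sampler are easy to prototype; the sketch below uses the fitted values quoted above but is an illustration rather than the repository code.

```python
# Sketch: density and sampler for the Hampel (Gaussian + Cauchy-Lorentz) mixture.
import numpy as np

ALPHA, X0, GAMMA = 1e-05, 0.6332, 0.0274   # MLE values reported above

def hampel_pdf(x, alpha=ALPHA, x0=X0, gamma=GAMMA):
    gauss = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
    cauchy = (1.0 / np.pi) * gamma / ((x - x0) ** 2 + gamma**2)
    return (1 - alpha) * gauss + alpha * cauchy

def hampel_sample(size, alpha=ALPHA, x0=X0, gamma=GAMMA, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    pick_cauchy = rng.random(size) < alpha
    gauss = rng.standard_normal(size)
    # Cauchy samples via the inverse CDF of the Cauchy-Lorentz distribution.
    cauchy = x0 + gamma * np.tan(np.pi * (rng.random(size) - 0.5))
    return np.where(pick_cauchy, cauchy, gauss)

# Such draws stand in for the Gaussian noise added in the forward diffusion
# process when training and denoising with Hampel rather than Gaussian noise.
noise = hampel_sample((512, 512))
```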
Detail explana-tion and the source code can be found in our Github repository3.H(α, x 0, γ) =H(1e−05,0.6332,0.0274)Dataset : This study was approved by the Institutional Review Board. We used 202subjects (126 male, 76 female, age 26 .27±7.84 years) scanned on a 3T MRI following 3Dgradient echo protocol with MT pulse (Nam et al., 2017; Nam et al.). Each of the brainslices is resized to a size of 512 ×512 and normalized the values to range between [0 ,1]. Ourdataset is composed of 6 ,000 images ( n= 176) to train and 780 images ( n= 26) to test.Model and Training : HDDnet is trained on Nvidia RTX 3090 GPU 24GB with the batchsize of 8 for 512 iterations. HDDnet is trained with L2 loss, the sigmoid noise schedule for1,000 steps, a learning rate of 10−6for the Adam optimizer, the first layer is chosen as 64.Evaluation : Evaluation was took in both quantitative and qualitative. For the quantitativeevaluation, MSE, PSNR, and SSIM4(Wang et al., 2004) were used. Each was calculatedbetween model output and the N4 label image. Inference time was measured in same1. Hampel mixture probability distribution function2. Maximum Likelihood Estimation3. github.com/junhyk-lee/Bias-Field-Correction4. Mean Squared Error, Peak Signal-to-Noise Ratio, Structural Similarity Index Map, respectively2Bias Field Correctionenvironment with training setup, with 26 patients. The qualitative assessment was done bycomparing N4 and HDDnet prediction of synthetic bias field. The synthetic bias field wasgenerated from train image bias field merged to test set, shown in Figure 2(A).3. ResultsModel MSE PSNR SSIM TimeAGaussian 0.0004 32.486 0.950 4.471Hampel 0.0003 35.945 0.983 4.473BN4 0.0003 34.766 0.979 39.601HDD 0.0001 36.865 0.978 4.478Table 1: Evaluation MetricsAs shown in Table 1(A), Hampel ran-dom noise outperformed Gaussian ran-dom noise in MSE, PSNR, SSIM.Quantitatively, Hampel mixture dis-tribution can provide clear evidence ofconvergence. Figure 2(B) shows N4and HDDnet follow similar pattern ofthe bias field in synthetic image. Butas in Table 1(B), our model outper-formed on its MSE and PSNR, while SSIM is small in difference. While N4 takes averageof 39.6014 secs to correct the bias field of from its corrupted MRI, HDDnet takes aboutaverage of 4 secs, which is 9.75 times faster. While maintaining or improving the bias fieldcorrection our model shows high efficiency in time.Figure 2: Comparison of bias field estimation results with synthetic bias field4. ConclusionIn this paper, we propose a new bias field correction method by altering the Gaussian noiseto Hampel noise, a mixture of Gaussian distribution and Cauchy-Lorentz distribution. Ourproposed method is more robust with automatic parameter settings on correcting the biasfield than N4. Such automation can give less complexity to the user. We also point outthat such deep learning approach is faster in time while still maintaining the accuracy.3Lee Kang Nam LeeReferencesG.C. Borgia, R.J.S. Brown, and P. Fantazzini. The effect of diffusion and suscepti-bility differences on t2 measurements for fluids in porous media and biological tis-sues. Magnetic Resonance Imaging , 14(7):731–736, 1996. ISSN 0730-725X. doi:https://doi.org/10.1016/S0730-725X(96)00157-9. URL https://www.sciencedirect.com/science/article/pii/S0730725X96001579 . Proceedings of the Third InternationalMeeting on Recent Advances in MR Applications to Porous Media.Frank Hampel and Eth Zurich. Is statistics too difficult? Canadian Journal ofStatistics , 26(3):497–513, 1998. 
doi: https://doi.org/10.2307/3315772. URL https://onlinelibrary.wiley.com/doi/abs/10.2307/3315772 .Winston Haynes. Maximum Likelihood Estimation , pages 1190–1191. Springer New York,New York, NY, 2013. ISBN 978-1-4419-9863-7. doi: 10.1007/978-1-4419-9863-7 1235.URL https://doi.org/10.1007/978-1-4419-9863-7_1235 .C.R. Meyer, P.H. Bland, and J. Pipe. Retrospective correction of intensity inhomogeneitiesin mri. IEEE Transactions on Medical Imaging , 14(1):36–41, 1995. doi: 10.1109/42.370400.Eliya Nachmani, Robin San Roman, and Lior Wolf. Non gaussian denoising diffusion models.arXiv preprint arXiv:2106.07582 , 2021.Yoonho Nam, Na-Young Shin, and Eung Yeop Kim. Simultaneous imaging of neuromelaninand nigrosome 1 in substantia nigra using 3d multi-echo gradient echo acquisition withmagnetization transfer preparation.Yoonho Nam, Sung-Min Gho, Dong-Hyun Kim, Eung Yeop Kim, and Jongho Lee. Imagingof nigrosome 1 in substantia nigra at 3t using multiecho susceptibility map-weightedimaging (smwi). Journal of Magnetic Resonance Imaging , 46(2):528–536, 2017.Nicholas J. Tustison, Brian B. Avants, Philip A. Cook, Yuanjie Zheng, Alexander Egan,Paul A. Yushkevich, and James C. Gee. N4itk: Improved n3 bias correction. IEEE Trans-actions on Medical Imaging , 29(6):1310–1320, 2010. doi: 10.1109/TMI.2010.2046908.Uro Vovk, Franjo Pernus, and Botjan Likar. A review of methods for correction of intensityinhomogeneity in mri. IEEE Transactions on Medical Imaging , 26(3):405–421, 2007. doi:10.1109/TMI.2006.891486.Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli. Image quality assessment: fromerror visibility to structural similarity. IEEE Transactions on Image Processing , 13(4):600–612, 2004. doi: 10.1109/TIP.2003.819861.4 |
BC4UYzbLRZ | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission3D Supervised Contrastive-Learning Network forClassification of Ovarian NeoplasmsTarun Roy tarunkanti-roy@uiowa.eduJesus Gonzalez Bosquet jsus-gonzalezbosquet@uiowa.eduSuely Oliveira suely-oliveira@uiowa.eduXiaodong Wu xiaodong-wu@uiowa.eduUniversity of IowaIowa City, IA 52242, USAEditors: Under Review for MIDL 2023AbstractOvarian cancer is the deadliest of all female reproductive system cancers and ranks the 5thincancer deaths among women. We propose a 3D contrastive learning based predictive modelto discriminate benign from malignant masses in abdominal CT scans for ovarian cancerpatients. We used fully supervised contrastive learning(SCL) approach which allowed us toeffectively leverage the label information of our small dataset of 331 patients. All patients’data was collected at the University of Iowa. Three different architectures (VGG, ResNetand DenseNet) were implemented for feature extraction by contrastive learning. We showedthat SCL consistently out-performed over the traditional cross-entropy based networks withVGG and two ResNet variants. With five fold cross validation, our best contrastive learningmodel achieves an accuracy of 92.8%, mean AUC of 92.4%, mean recall of 94.45% and meanspecificity of 90.37%. This work shows that contrastive learning is a promising deep learningmethod to improve early detection of women at risk of harboring ovarian cancer.Keywords: Supervised contrastive learning, ovarian cancer, classification, deep learning,feature encoder, efficentnet, resnet, cross validation1. IntroductionAmerican cancer society indicated, the pobability of a woman getting ovarian cancer is178.Moreover, the chance of dying from it is1108. Diagnostic models for cancer patients mayimprove decision making to personalize management of cancer patients. In this study, wepropose a deep learning-based predictive model for ovarian cancer patients to discriminatebenign from malignant masses in abdominal CT scans. Our developed model uses 3D CTscan data obtained at the University of Iowa. A major challenge in the analysis of ovarianCT scans is that there are a large number of ovarian cysts existing in both malignant andbenign patient data. Manually tracing all of them is cumbersome. Previous works also showthat CNN based models out perform experienced field radiologists in terms of accuracy ofprognosis (Saida et al., 2022). Most of the prior works done with ovarian cancer dataonly use 2D convolutional networks. In this study we trained 3D CNN models and gotbetter performance compared to 2D models. We also implemented a new state-of-the-artcontrastive-learning technique in 3D.©2023 CC-BY 4.0, T. Roy, J.G. Bosquet, S. Oliveira & X. Wu.Roy Bosquet Oliveira Wu2. MethodologyIn our proposed approach, we trained a 3D convolutional feature encoder using a supervisedcontrastive loss. The trained encoder was used on top of a multi-layer perceptron(MLP)netwrok to train the classifier. All the weights of the encoders were frozen during classifiertraining. The feature encoders we used had different convolutional architectures. Thedataset contains CT scans of lower abdomens from 331 patients. Out of these samples,196 scans contained malignant tumors and the rest of the 135 samples had benign tumors.Because of the small sample size, we trained models using five-fold stratified cross validationwith a split of 264 for training and 67 for testing. 
For each volume image, the region of interest (ROI) with a dimension of 128×128×64 was set around the patients' lower abdomens where the ovaries were located, and the images were cropped to the ROI volume.

2.1. Representation Learning Framework

Our proposed predictive model consists of the following components, as in (Tian et al., 2019; Khosla et al., 2020):

• Data Augmentation module: 3D medical images are not suitable for arbitrary random augmentations. We experimented only with three different augmentations: translation, rotation and flipping (Solovyev et al., 2022). From each input sample n, two randomly augmented images ñ = Augment(n) were generated to train the encoder network with the objective of minimizing the contrastive loss for the same class and maximizing it for the other classes.

• Encoder Network: In this work we used different 3D convolutional architectures as encoder networks that output the vector representation of the input CT volume, x = Enc(ñ) ∈ R^(D_E). In our experiments we empirically chose the representation vector size D_E = 2048.

• Projection Network: Maps the representation vector x to a projection vector z = Proj(x) ∈ R^(D_p). In this paper we used an MLP network as the projection head with an output vector size of D_p = 512. The normalized output vector is used to measure the sample distances in the projection space. Even though we had different encoder networks, we used the same projection head in each case.

• Supervised Contrastive Losses used in this work can leverage the label information more effectively than the cross-entropy loss. The idea is that points belonging to the same class are pulled together in embedding space, while clusters of samples from different classes are simultaneously pushed apart (Khosla et al., 2020):

L_sup = Σ_{i=1}^{2N} L_i^sup,    L_i^sup = −1/(2N_{ỹ_i} − 1) Σ_{j=1}^{2N} 1_{i≠j} · 1_{ỹ_i = ỹ_j} · log [ exp(z_i · z_j / τ) / Σ_{k=1}^{2N} 1_{i≠k} · exp(z_i · z_k / τ) ]

For a minibatch of samples X_{1..b}, N_{ỹ_i} is the total number of images in the minibatch that have the same label, y, as the anchor image i; augmented images are indicated by ỹ. This loss has important properties well suited for supervised learning: (a) generalization to an arbitrary number of positives, and (b) contrastive power that increases with more negatives.

Figure 1: Performance overview of the five-fold cross validation. (a) Networks trained with cross-entropy loss. (b) Networks trained with contrastive loss.

3. Result and Discussion

All the models shown in Table 1 are cross-validated in a leave-one-fold-out fashion. This demonstrates the robustness of the models to new data. Fig. 1 depicts the performance boxplots of the 5-fold cross validation in terms of accuracy, AUC, recall and specificity scores. Supervised contrastive learning models outperformed the baseline models trained with binary cross-entropy loss.

Table 1: Performance comparison of models on CT volume size of 64×128×128.
              Acc. (%)   AUC (%)   Recall (%)   Spec. (%)
Panel A: Baseline 3D models
VGG19         84.3       84.1      85.2         82.96
ResNet18      80.1       77.9      88.33        67.5
ResNet50      81.6       80.1      88.99        71.18
DenseNet121   82.15      80.58     80.42        80.73
Panel B: SCL 3D models
VGG19         89.48      88.58     93.45        83.7
ResNet18      89.17      88.2      93.42        82.96
ResNet50      92.8       92.4      94.45        90.37
DenseNet121   91.16      90.61     94.89        86.35

This work leverages the state-of-the-art contrastive learning method to develop an automated diagnosis model for the classification of ovarian tumors. We studied fully supervised contrastive learning for tackling this problem and investigated the predictive powers with respect to four common CNN baselines.
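For readers re-implementing the objective, the snippet below is a compact PyTorch version of the supervised contrastive loss of Section 2.1, written under the assumption that the projection vectors are already L2-normalized; it is a sketch, not the authors' training code.

```python
# Sketch: supervised contrastive loss for a batch of normalized projections.
import torch

def supervised_contrastive_loss(z, labels, tau=0.1):
    """z: (2N, D) L2-normalized projections of the two augmented views.
    labels: (2N,) integer class labels, repeated for the two views."""
    n = z.shape[0]
    sim = (z @ z.t()) / tau                        # pairwise z_i . z_j / tau
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, -1e9)               # exclude i = j from the denominator
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # -1/(2N_y - 1) times the sum of log-probabilities over the positives of i.
    loss = -(log_prob * pos.float()).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()

# Usage with a projection head's output:
z = torch.nn.functional.normalize(torch.randn(16, 512), dim=1)
y = torch.randint(0, 2, (8,)).repeat(2)            # labels duplicated for both views
print(supervised_contrastive_loss(z, y))
```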
We expect that with a large training dataset (evenwithout annotations), higher accuracy will be achievable using semi-supervised contrastivelearning as well.ReferencesPrannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola,Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. CoRR ,abs/2004.11362, 2020. URL https://arxiv.org/abs/2004.11362 .Tsukasa Saida, Kensaku Mori, Sodai Hoshiai, Masafumi Sakai, Aiko Urushibara, ToshitakaIshiguro, Manabu Minami, Toyomi Satoh, and Takahito Nakajima. Diagnosing ovariancancer on mri: A preliminary study comparing deep learning and radiologist assessments.Cancers , 14(4), 2022. ISSN 2072-6694. doi: 10.3390/cancers14040987. URL https://www.mdpi.com/2072-6694/14/4/987 .Roman Solovyev, Alexandr A Kalinin, and Tatiana Gabruseva. 3d convolutional neuralnetworks for stalled brain capillary detection. Computers in Biology and Medicine , 141:105089, 2022. doi: 10.1016/j.compbiomed.2021.105089.Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. CoRR ,abs/1906.05849, 2019. URL http://arxiv.org/abs/1906.05849 .4 |
VcgBBAQfMP | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionComp2Comp: Open-Source Body Composition Assessmenton Computed TomographyLouis Blankemeier∗1louis.blankemeier@stanford.edu1Stanford University, CA, USAMalte Jensen∗1mekj@stanford.eduEduardo Pontes Reis∗1,2edreis@stanford.edu2Hospital Israelita Albert Einstein, Sao Paulo, BrazilJuan Manuel Zambrano Chaves∗1jmz@stanford.eduAdrit Rao1adritrao@stanford.eduSally Yao1yaohanqi@stanford.eduPauline Margaret Berens1pberens@stanford.eduAndrew Wentland3alwentland@wisc.edu3University of Wisconsin-Madison, WI, USABhanushree Bahl4bhanushree.bahl@carpl.ai4CARPL.ai, New Delhi, IndiaKushboo Arora4khushboo.arora@carpl.aiOliver Oppers Aalami1aalami@stanford.eduBhavik Patel5patel.bhavik@mayo.edu5Mayo Clinic, MN, USALeon Lenchik6llenchik@wakehealth.edu6Wake Forest University, NC, USAMarc H. Willis1marc.willis@stanford.eduRobert D. Boutin1boutin@stanford.eduArjun D. Desai∗1arjundd@stanford.eduAkshay S. Chaudhari∗1akshaysc@stanford.eduEditors: Under Review for MIDL 2023AbstractComputed tomography (CT) can provide quantitative body composition metrics of tissuevolume, morphology, and quality which are valuable for disease prediction and prognostica-tion. However, manually extracting these measures is a cumbersome and time-consumingtask. Proprietary software to automate this process exist, but these software are closed-source, impeding large-scale access to and usage of these tools. To address this, we havebuilt Comp2Comp , an open-source Python package for rapid and automated body compo-sition analysis of CT scans. The primary advantages of Comp2Comp are its open-sourcenature, the inclusion of multiple tissue analysis capabilities within a single package, and itsextensible design. We discuss the architecture of Comp2Comp and report initial validationresults. Comp2Comp can be found at https://github.com/StanfordMIMI/Comp2Comp .Keywords: computed tomography, segmentation, body composition, abdominal CT.∗Contributed equally or co-senior authorship©2023 CC-BY 4.0, L. Blankemeier et al.Blankemeier et al.1. IntroductionQuantitative metrics from computed tomography (CT) can provide diagnostic and prog-nostic biomarkers for acute and chronic health conditions (Lee et al., 2022a; Thibault et al.,2012; Kuriyan, 2018). Such measures can provide a more objective evaluation of body com-position (BC) than traditional clinical measurements (e.g. weight, body mass index (BMI),waist circumference, skinfolds) (Zeng et al., 2021). However, manually extracting quanti-tative BC measures from CT scans is time-consuming and prone to inter-reader variability,which considerably limits their utility in clinics and research studies.We introduce Comp2Comp , an open-source Python package to expedite CT-based BCanalysis. Comp2Comp contains methods to automatically segment CT images, extract quanti-tative BC measures, and generate polychromatic visual reports. Comp2Comp is designed to beextensible, enabling the development of complex clinically-relevant applications. The pack-age is hosted on the GitHub platform at https://github.com/StanfordMIMI/Comp2Compwith a permissive license.2. Inference PipelinesA key component of Comp2Comp is its inference pipeline system. Inference pipelines stringtogether sequences of building-block inference class modules which perform specific taskslike machine learning, saving or loading data, visualizing outputs, or other computationaltasks. 
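The building-block idea can be pictured with a small composition pattern like the one below; the class and method names are illustrative stand-ins, not Comp2Comp's actual API, which should be checked in the repository linked above.

```python
# Illustrative composition pattern only -- not Comp2Comp's actual class names.
from typing import Any, Dict, List

class InferenceStep:
    """One building block: takes a shared context dict and adds its outputs."""
    def __call__(self, ctx: Dict[str, Any]) -> Dict[str, Any]:
        raise NotImplementedError

class Pipeline(InferenceStep):
    """A pipeline is itself a step, so pipelines can be nested and reused."""
    def __init__(self, steps: List[InferenceStep]):
        self.steps = steps
    def __call__(self, ctx: Dict[str, Any]) -> Dict[str, Any]:
        for step in self.steps:
            ctx = step(ctx)
        return ctx

class LoadCT(InferenceStep):
    def __call__(self, ctx):
        ctx["volume"] = f"loaded:{ctx['path']}"    # placeholder for NIfTI/DICOM I/O
        return ctx

class SegmentSpine(InferenceStep):
    def __call__(self, ctx):
        ctx["spine_mask"] = "spine-mask"           # placeholder for the segmentation call
        return ctx

spine_pipeline = Pipeline([LoadCT(), SegmentSpine()])
# Reuse: a larger body-composition pipeline embeds the spine pipeline as one step.
full_pipeline = Pipeline([spine_pipeline])         # plus muscle/fat steps, reports, ...
result = full_pipeline({"path": "scan.nii.gz"})
```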
Furthermore, inference pipelines can be reused within other inference pipelines. Thismodular structure shortens iteration cycles for developing complex clinical applications.We list the pipelines currently implemented in Comp2Comp . Each of these pipelines savesnumerical or categorical results, image reports, and segmentation files. All trained modelsare available on HuggingFace and downloaded automatically within Comp2Comp .2.1. Spine Bone Mineral Density from 3D Trabecular Bone Regions at T12-L5Retrospective studies have established that L1 trabecular bone density of <90 Hounsfieldunits (HU) is associated with a high risk of vertebral fracture (odds ratio, 32) (Graffy et al.,2017; Lee et al., 2018). Large scale screening for osteoporosis using CT has been validatedin retrospective cohorts (Roux et al., 2022; Pickhardt et al., 2020), but is not yet widelyimplemented because automated techniques have not been freely disseminated.We provide options for running the TotalSegmentator (TS) spine model (Wasserthalet al., 2022) as well as an nnUNet trained on VerSe (Sekuboyina et al., 2021; L ̈ offler et al.,2020; Liebl et al., 2021) and TS data (Wasserthal et al., 2022). We develop heuristics forextracting 3D regions of interest (ROIs) from vertebral body trabecular bone as describedpreviously (Blankemeier et al., 2023). Comp2Comp reports average HUs within these ROIs.We validate our method with TS on 40 contrast-enhanced CT scans from the Stan-ford emergency department. Comparing to central regions extracted from labeled T12-L5vertebral bodies, we achieve an average HU percent error of 1.82 ±0.86 across these 6 levels.2.2. Slice-by-Slice 2D Analysis of Muscle and Adipose TissueSarcopenia, defined by the loss of muscle tissue and muscle function, is associated withadverse outcomes, such as post-operative complications (Papadopoulou et al., 2020; Surov2Comp2Comp(a) (b) (c)Figure 1: (a) Curved planar projection from our spine inference pipeline. Within the image,we report average ROI HU at T12-L5. (b) Output image from our spine, muscle,and adipose tissue pipeline. Here, the level is automatically determined using thespine model. Within the image, we report mean HU and area of each segmentedtissue. (c) Output image from our liver, spleen, and pancreas inference pipeline.Within the image, we report the organ volume, as well as mean and median HU.and Wienke, 2022). Adipose tissue, particularly visceral adipose tissue (VAT), is a mod-ifiable risk factor for numerous medical conditions (Vilalta et al., 2022; Rao et al., 2021;Katzmarzyk et al., 2022).We provide two models for 2D segmentation of muscle and adipose tissue. Both are 2DUNet models trained on axial CT slices at the L3 vertebral level.On an internal test set of 40 abdominal contrast-enhanced cases at the L3 vertebral level,we achieve the following mean (standard deviation) Dice scores: 0.97 (0.03), 0.96 (0.05),and 0.97 (0.02) for muscle, VAT, and subcutaneous adipose tissue (SAT) respectively. Theerror in HU and area averaged below 1% and 2%, respectively, for all tissues. For the samethree tissues, we achieve 94.7 (5.9), 94.6 (6.7), and 93.2 (14.8) on 20 external cases.2.3. Contrast Phase DetectionContrast agents are used to enhance the radiodensity of the blood vessels and vascularizedtissues. 
Determining contrast phase is an important step for the successful application ofalgorithms with outputs that are sensitive to pixel intensity.The Comp2Comp contrast phase detection pipeline consists of segmenting key anatomicalstructures, extracting metrics from these structures, and classifying these into one of 4classes (non-contrast, arterial, venous and delayed) using a gradient boosting classifier.On 362 internal test cases, our method achieves F1 scores of 0.96, 0.78, 0.92, and 0.95for non-contrast, arterial, venous, and delayed phases respectively.3Blankemeier et al.2.4. 3D Analysis of Liver, Spleen, and PancreasLiver disease and cirrhosis, the ninth leading cause of death, can be predicted by the volume,morphology, and attenuation of liver and spleen structures (Lee et al., 2022b). Volumemeasures can identify enlarged organs and aid in transplant planning (Linguraru et al.,2010).Comp2Comp provides 3D analyses of the liver, spleen and pancreas. The volume, aswell as the mean and median HU intensities are recorded. The organ segmentations aredisplayed in the axial and coronal planes. The slice with the largest cross-sectional area isdisplayed in the axial plane and the slice with the longest continuous length is displayed inthe coronal plane. To segment these organs, we leverage TS (Wasserthal et al., 2022).2.5. Combining Inference Pipelines: End-to-End Spine, Muscle, and AdiposeTissue Analysis at T12-L5The modular design of Comp2Comp makes it easy to combine various Comp2Comp inferencepipelines. Our spine, muscle, and adipose tissue pipeline combines the spine pipeline withthe muscle and adipose tissue pipeline to analyze spine bone mineral density, muscle andadipose tissue at T12-L5.To select the axial slices for muscle and adipose tissue segmentation, we compute per-level superior/inferior (SI) centers. Comparing to SI centers from our labeled T12-L5 verte-bral bodies, we achieve an average error of 4.2 ±2.0mm across T12-L5. On 20 external cases,our segmentation model achieves mean (standard deviation) Dice scores, averaged acrossT12-L5, of: 0.88 (0.08), 0.93 (0.07), 0.91 (0.16) for muscle, VAT, and SAT respectively.3. ConclusionWe present C2C, a tool for automated analysis of multiple tissues that is extensible andopen source. We hope that Comp2Comp will increase the usage of BC analysis in large-scaleresearch studies and clinical settings. We welcome any contributions from the community.ReferencesLouis Blankemeier, Arjun Desai, Juan Manuel Zambrano Chaves, Andrew Wentland, SallyYao, Eduardo Reis, Malte Jensen, Bhanushree Bahl, Khushboo Arora, Bhavik N Patel,et al. Comp2comp: Open-source body composition assessment on computed tomography.arXiv preprint arXiv:2302.06568 , 2023.Peter M Graffy, Scott J Lee, Timothy J Ziemlewicz, and Perry J Pickhardt. Prevalence ofvertebral compression fractures on routine ct scans according to l1 trabecular attenua-tion: determining relevant thresholds for opportunistic osteoporosis screening. AmericanJournal of Roentgenology , 209(3):491–496, 2017.Peter T Katzmarzyk, Justin C Brown, Shengping Yang, Emily F Mire, Xiao-Cheng Wu,Lucio Miele, Augusto C Ochoa, and Jovanny Zabaleta. Association of abdominal visceraladiposity and total fat mass with cancer incidence and mortality in white and blackadults. Cancer Epidemiology, Biomarkers & Prevention , 31(8):1532–1538, 2022.4Comp2CompRebecca Kuriyan. Body composition techniques. 
The Indian journal of medical research ,148(5):648, 2018.Matthew H Lee, Ryan Zea, John W Garrett, Peter M Graffy, Ronald M Summers, andPerry J Pickhardt. Abdominal ct body composition thresholds using automated ai toolsfor predicting 10-year adverse outcomes. Radiology , page 220574, 2022a.Scott J Lee, Peter M Graffy, Ryan D Zea, Timothy J Ziemlewicz, and Perry J Pickhardt.Future osteoporotic fracture risk related to lumbar vertebral trabecular attenuation mea-sured at routine body ct. Journal of Bone and Mineral Research , 33(5):860–867, 2018.Sungwon Lee, Daniel C Elton, Alexander H Yang, Christopher Koh, David E Kleiner,Meghan G Lubner, Perry J Pickhardt, and Ronald M Summers. Fully automated andexplainable liver segmental volume ratio and spleen segmentation at ct for diagnosingcirrhosis. Radiology: Artificial Intelligence , 4(5):e210268, 2022b.Hans Liebl, David Schinz, Anjany Sekuboyina, Luca Malagutti, Maximilian T L ̈ offler,Amirhossein Bayat, Malek El Husseini, Giles Tetteh, Katharina Grau, Eva Niederre-iter, et al. A computed tomography vertebral segmentation dataset with anatomicalvariations and multi-vendor scanner data. Scientific Data , 8(1):284, 2021.Marius George Linguraru, Jesse K Sandberg, Zhixi Li, Furhawn Shah, and Ronald M Sum-mers. Automated segmentation and quantification of liver and spleen from ct imagesusing normalized probabilistic atlases and enhancement estimation. Medical physics , 37(2):771–783, 2010.Maximilian T L ̈ offler, Anjany Sekuboyina, Alina Jacob, Anna-Lena Grau, Andreas Scharr,Malek El Husseini, Mareike Kallweit, Claus Zimmer, Thomas Baum, and Jan S Kirschke.A vertebral segmentation dataset with fracture grading. Radiology: Artificial Intelligence ,2(4):e190138, 2020.SK Papadopoulou, P Tsintavis, G Potsaki, and Dimitrios Papandreou. Differences in theprevalence of sarcopenia in community-dwelling, nursing home and hospitalized individ-uals. a systematic review and meta-analysis. The journal of nutrition, health & aging ,24:83–90, 2020.Perry J Pickhardt, Peter M Graffy, Ryan Zea, Scott J Lee, Jiamin Liu, Veit Sandfort,and Ronald M Summers. Automated abdominal ct imaging biomarkers for opportunisticprediction of future major osteoporotic fractures in asymptomatic adults. Radiology , 297(1):64–72, 2020.Vishal N Rao, Christopher G Bush, Morgana Mongraw-Chaffin, Michael E Hall, DonaldClark III, Marat Fudim, Adolfo Correa, Bradley G Hammill, Emily O’Brien, Yuan-I Min,et al. Regional adiposity and risk of heart failure and mortality: the jackson heart study.Journal of the American Heart Association , 10(14):e020920, 2021.Christian Roux, Antoine Rozes, Daniel Reizine, David Hajage, Christel Daniel, Aur ́ elienMaire, St ́ ephane Br ́ eant, Namik Taright, Ronen Gordon, Jacques Fechtenbaum, et al.5Blankemeier et al.Fully automated opportunistic screening of vertebral fractures and osteoporosis on morethan 150 000 routine computed tomography scans. Rheumatology , 61(8):3269–3278, 2022.Anjany Sekuboyina, Malek E Husseini, Amirhossein Bayat, Maximilian L ̈ offler, Hans Liebl,Hongwei Li, Giles Tetteh, Jan Kukaˇ cka, Christian Payer, Darko ˇStern, et al. Verse: Avertebrae labelling and segmentation benchmark for multi-detector ct images. Medicalimage analysis , 73:102166, 2021.Alexey Surov and Andreas Wienke. Prevalence of sarcopenia in patients with solid tumors:A meta-analysis based on 81,814 patients. Journal of Parenteral and Enteral Nutrition ,46(8):1761–1768, 2022.Ronan Thibault, Laurence Genton, and Claude Pichard. Body composition: why, whenand for who? 
Clinical nutrition , 31(4):435–447, 2012.Adrian Vilalta, Julio A Guti ́ errez, SuZanne Chaves, Mois ́ es Hern ́ andez, Silvia Urbina, andMarcus Hompesch. Adipose tissue measurement in clinical research for obesity, type 2diabetes and nafld/nash. Endocrinology, Diabetes & Metabolism , 5(3):e00335, 2022.Jakob Wasserthal, Manfred Meyer, Hanns-Christian Breit, Joshy Cyriac, Shan Yang, andMartin Segeroth. Totalsegmentator: robust segmentation of 104 anatomical structures inct images. arXiv preprint arXiv:2208.05868 , 2022.Qiang Zeng, Ling Wang, Shengyong Dong, Xiaojuan Zha, Limei Ran, Yongli Li, ShuangChen, Jianbo Gao, Shaolin Li, Yong Lu, et al. Ct-derived abdominal adiposity: Distri-butions and better predictive ability than bmi in a nationwide study of 59,429 adults inchina. Metabolism , 115:154456, 2021.6 |
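Relating back to the contrast-phase pipeline of Section 2.3, the sketch below illustrates its final stage: per-structure intensity metrics extracted from the segmentations are assembled into a feature vector and classified into one of the four phases with a gradient boosting classifier. The feature names and the scikit-learn estimator used here are illustrative assumptions rather than the exact Comp2Comp configuration.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

PHASES = ["non-contrast", "arterial", "venous", "delayed"]

def phase_features(mean_hu_by_structure):
    # Hypothetical per-structure metrics, e.g., mean HU of the aorta,
    # portal vein, IVC, kidneys, and other key anatomical structures.
    keys = sorted(mean_hu_by_structure)
    return np.array([mean_hu_by_structure[k] for k in keys], dtype=np.float32)

clf = GradientBoostingClassifier()
# X: (n_scans, n_features) metric matrix, y: integer phase labels 0..3
# clf.fit(X_train, y_train)
# predicted_phase = PHASES[int(clf.predict(X_test[:1])[0])]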
BSf6JALJoc | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023Overcoming Interpretability and Accuracy Trade-off inMedical ImagingIvaxi Sheth1,2ivaxi-miteshkumar.sheth.1@ens.etsmtl.caSamira Ebrahimi Kahou1,2,3samira.ebrahimi-kahou@etsmtl.ca1Mila Quebec AI2 ́ETS Montr ́ eal3CIFAR AI ChairAbstractNeural networks are considered black boxes. Deploying them into the healthcare domainposes a challenge in understanding model behavior beyond the final prediction. There havebeen recent attempts to establish the trustworthiness of a model. Concept-based modelsprovide insight into the model by introducing a bottleneck layer before the final prediction.They encourage interpretable insights into deep learning models by conditioning the finalpredictions on intermediate predictions of explainable high-level concepts. However, usingconcept-based models causes a drop in performance which poses an accuracy vs explain-ability trade-off. To overcome this challenge we propose coop-CBM, a novel concept-basedmodel. We validate the performance of coop-CBM on diverse dermatology and histopathol-ogy images.Keywords: Interpretability, concept-based explanations1. IntroductionWith the growing use of Deep Learning (DL) based decision-making, it is of paramountimportance that these models are transparent in their decision-making. The field of med-ical imaging has observed tremendous advancements in recent times with the aid of com-puter vision algorithms, assisting radiologists and pathologists in accurately diagnosing dis-eases (Ardila et al., 2019). However, despite its success, the lack of transparency in decision-making by deep learning models remains a concern due to the potential consequences oferrors made by such models (Chen et al., 2022). eXplainable AI (XAI) aims to address theseconcerns by developing techniques that allow us to better understand the reasoning behindthe decisions made by AI systems. Post-hoc explanation methods allow visualization toprovide insights into the model’s behavior and identify which input features are importantfor prediction (Selvaraju et al., 2017; Kim et al., 2018). However, these methods involveextra probing and are not inherently interpretable models. Intrinsically interpretable mod-els in medical imaging can therefore provide a deeper understanding and confidence tothe healthcare professionals in using DL based Computer Aided Diagnostics (CAD) (Boryset al., 2023). Concept learning models have gained popularity in explaining their own pre-dictions while conditioning on human-understandable concepts. Koh et al. (2020) proposedConcept Bottleneck Model that first predicts concepts, and using those concepts, the finallabel is predicted. CBM although portrays an explainable model, it is at the expense of thelower accuracy of the model. Inaccurate diagnostics are equally as undesirable as opaqueblack-box models. In this work, we, therefore, propose a novel concept-based architecture,coop-CBM that overcomes the trade-off between interpretability and accuracy.©2023 I. Sheth & S.E. Kahou.Sheth Kahou2. Proposed MethodStandard classification models digest an image to output a label. Such models might beexplained by using activation visualization or post-hoc vectors after training (Zhou et al.,2018). They aren’t although innately explainable and require probing. 
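As a concrete illustration of the bottleneck structure just described, where concepts are predicted first and the label is predicted only from those concepts, a minimal PyTorch-style sketch is given below. The module names and sizes are hypothetical; this is not the authors' implementation, which additionally attaches a supplementary label head, as described next.

import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    # x -> c (concept logits) -> y (label logits); the label head sees only concepts.
    def __init__(self, backbone, feat_dim, n_concepts, n_classes):
        super().__init__()
        self.backbone = backbone                      # e.g., an Inception V3 trunk
        self.concept_head = nn.Linear(feat_dim, n_concepts)
        self.label_head = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        c_logits = self.concept_head(self.backbone(x))
        y_logits = self.label_head(torch.sigmoid(c_logits))
        return c_logits, y_logits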
Our model, coop-CBM , is a hybrid multi-task model that predicts both labels and explanations.Setup Consider a standard supervised learning setting for a classification task, where mod-elsMare trained on a dataset D={xi, yi}Ki=1withKdata samples. Standard models aimto predict the true distribution pM(y|x) from an input x. In the supervised concept-basedmodel setting, the dataset has additional labeled concepts that can allow supervised con-cept learning in addition to target learning. The dataset D={xi, ci, yi}Ki=1is the input toconcept-based model. The model has prediction at two levels, the first model GX→Cmapsthe input image xto concepts cdenoted by pG(c|x), while the second model FC→Ymapsthe concepts cto the label ydenoted by pF(y|c). During inference, such models are partic-ularly advantageous as they allow model editing based on human feedback. If a supervisorobserves incorrect concepts related to a label, they can correct the output of pG(c|x) whicheffectively changes, often improves, the downstream label prediction pF(y|c).Despite being an explainable model, Mahinpei et al. (2021) have shown that conceptrepresentations of CBMs result in information leakage which deteriorates predictive per-formance. Apart from this, another challenge of CBMs for medical images is the lack offine-grained concept annotations. These concept annotations in originally define the visualaspects of the images. Due to the diversity in images and lack of expert knowledge, it isdifficult to acquire them. To overcome the lack of concepts, one can use the meta-data of apatient that often includes both descriptive features of the image such as tumor size, andnon-descriptive attributes such as sex. On such datasets, CBMs suffer from poor perfor-mance since the concept bank is not sufficiently expressive (Havasi et al., 2022). To overcomethis tradeoff between interpretability and predictive performance, we propose coop-CBM .Coop-CBM To preserve the standard model’s performance, our model, coop-CBM usesa supplementary predictor. Non-bottleneck models that use concepts as auxiliary featureshave most commonly been used in multi-task setup (Zhou et al., 2018). But such models losethe causal property and thus lose the cause →effect explanations. Therefore inspired by theliterature on multi-task learning, we introduce an additional predictor, HX→Ythat predictssupplemental label. This additional stream is separate from the concept prediction pipeline.We hypothesize that this supplementary label prediction helps the concept prediction streamto recover model performance in the absence of fine-grained concept labels.3. ResultsIn this work, we proposed coop-CBM which overcomes the tradeoff of interpretability andaccuracy, we evaluate our model against the current variants of CBM. All of our experi-ments use the Inception V3 (Szegedy et al., 2016) backbone. We consider two classificationdatasets, TIL (Saltz et al., 2018) and DDI (Daneshjou et al., 2021, 2022) to classify cancertumors and skin diseases respectively. These two datasets are different in their conceptrepresentation. The metadata for TIL includes non-image features such as age and genderalong with clinical descriptor terms. There 185 such concepts in TIL. In the case of DDI,only 48 clinical descriptor terms are present.2Coop-CBMFigure 1: L: Coop-CBM model for DDI data. In addition to predicting concepts in thebottleneck, our model also predicts the supplementary label. The final label ispredicted from concepts. 
R: Performance of different models with interventionson TIL.Performance To evaluate the performance, here, we are concerned with the final predic-tion accuracy, i.e. performance of pF(y|c). From Table 1, we notice our method has the mostsuperior performance in comparison to the baselines on both TIL and DDI datasets. We ob-serve that Concept Bottleneck Models (Koh et al., 2020) observe a big drop in performancein comparison to the Standard model that does not use concepts. Concept EmbeddingModels (Zarlenga et al.) that build upon the (Koh et al., 2020) by introducing a mixtureof concept embedding in the bottleneck layer is just marginally better than the standardmodel. Finally, Autoregressive CBM (Havasi et al., 2022) also performs comparably toCBM. Coop-CBM improves generalization accuracy even beyond ”no concept” models en-abling a higher level of explainability without loss of performance.Model type TIL DDIStandard [No concepts] 51.183.4CBM (Koh et al., 2020) 49.079.9CEM (Zarlenga et al.) 51.383.9CBM-AR (Havasi et al., 2022) 49.580.6Coop-CBM (ours) 53.484.0Table 1: Accuracy of different on TIL and DDI dataset.Interventions An advantage of introducing a bottleneck layer before the final predictionis the ability to perform concept correction during inference (Koh et al., 2020; Sheth et al.).From the perspective of a medical professional, identification of key medical concepts such asskin lesion color may be easier than making a final diagnosis. Therefore if the doctor observesan incorrect concept explanation during test time, they can intervene and alter the conceptsoften resulting in superior downstream performance (Wang et al., 2022). To quantify theeffectiveness of interventions, we compare the accuracy with increasing intervention bychoosing concepts randomly and correcting them to ground truth. Figure 1 shows thatcoop-CBM is highly receptive to concept correction on TIL.3Sheth KahouIn summary this paper tackles the trade-off between concept related interpretability andpredictive performance by proposing multi-task based learning paradigm coop-CBM .4Coop-CBMAcknowledgmentsWe would like to thank the Digital Research Alliance of Canada for computing resourcesand CIFAR for research funding. IS acknowledges funding from Mitacs and Imagia CanexiaHealth.ReferencesDiego Ardila, Atilla P Kiraly, Sujeeth Bharadwaj, Bokyung Choi, Joshua J Reicher, LilyPeng, Daniel Tse, Mozziyar Etemadi, Wenxing Ye, Greg Corrado, et al. End-to-endlung cancer screening with three-dimensional deep learning on low-dose chest computedtomography. Nature medicine , 25(6):954–961, 2019.Katarzyna Borys, Yasmin Alyssa Schmitt, Meike Nauta, Christin Seifert, Nicole Kr ̈ amer,Christoph M Friedrich, and Felix Nensa. Explainable ai in medical imaging: An overviewfor clinical practitioners–beyond saliency-based xai approaches. European journal of ra-diology , page 110786, 2023.Haomin Chen, Catalina Gomez, Chien-Ming Huang, and Mathias Unberath. Explainablemedical imaging ai needs human-centered design: guidelines and evidence from a system-atic review. npj Digital Medicine , 5(1):156, 2022.Roxana Daneshjou, Kailas Vodrahalli, Weixin Liang, Roberto A Novoa, Melissa Jenkins,Veronica Rotemberg, Justin Ko, Susan M Swetter, Elizabeth E Bailey, Olivier Gevaert,et al. Disparities in dermatology ai: Assessments using diverse clinical images. 
arXivpreprint arXiv:2111.08006 , 2021.Roxana Daneshjou, Mert Yuksekgonul, Zhuo Ran Cai, Roberto Novoa, and James Y Zou.Skincon: A skin disease dataset densely annotated by domain experts for fine-graineddebugging and analysis. Advances in Neural Information Processing Systems , 35:18157–18167, 2022.Marton Havasi, Sonali Parbhoo, and Finale Doshi-Velez. Addressing leakage in conceptbottleneck models. In Advances in Neural Information Processing Systems , 2022.Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas,et al. Interpretability beyond feature attribution: Quantitative testing with conceptactivation vectors (tcav). In International conference on machine learning , pages 2668–2677. PMLR, 2018.Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, BeenKim, and Percy Liang. Concept bottleneck models. ArXiv , abs/2007.04612, 2020.Anita Mahinpei, Justin Clark, Isaac Lage, Finale Doshi-Velez, and Weiwei Pan. Promisesand pitfalls of black-box concept learning models. ArXiv , abs/2106.13314, 2021.Joel Saltz, Rajarsi Gupta, Le Hou, Tahsin Kurc, Pankaj Singh, Vu Nguyen, Dimitris Sama-ras, Kenneth R Shroyer, Tianhao Zhao, Rebecca Batiste, et al. Spatial organization andmolecular correlation of tumor-infiltrating lymphocytes using deep learning on pathologyimages. Cell reports , 23(1):181–193, 2018.5Sheth KahouRamprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, DeviParikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks viagradient-based localization. In Proceedings of the IEEE international conference on com-puter vision , pages 618–626, 2017.Ivaxi Sheth, Aamer Abdul Rahman, Laya Rafiee Sevyeri, Mohammad Havaei, andSamira Ebrahimi Kahou. Learning from uncertain concepts via test time interventions.InWorkshop on Trustworthy and Socially Responsible Machine Learning, NeurIPS 2022 .Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna.Rethinking the inception architecture for computer vision. In Proceedings of the IEEEconference on computer vision and pattern recognition , pages 2818–2826, 2016.Jiaxuan Wang, Sarah Jabbour, Maggie Makar, Michael Sjoding, and Jenna Wiens. Learn-ing concept credible models for mitigating shortcuts. Advances in neural informationprocessing systems , 2022.Mateo Espinosa Zarlenga, Pietro Barbiero, Gabriele Ciravegna, Giuseppe Marra, FrancescoGiannini, Michelangelo Diligenti, Zohreh Shams, Frederic Precioso, Stefano Melacci,Adrian Weller, et al. Concept embedding models: Beyond the accuracy-explainabilitytrade-off. In Advances in Neural Information Processing Systems .Bolei Zhou, Yiyou Sun, David Bau, and Antonio Torralba. Interpretable basis decompo-sition for visual explanation. In Proceedings of the European Conference on ComputerVision (ECCV) , pages 119–134, 2018.6 |
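Relating back to the test-time interventions reported in Figure 1, the sketch below shows the random intervention protocol in code: a random subset of predicted concepts is replaced by ground-truth values before the label head is re-applied. It assumes the bottleneck interface sketched earlier (a model that returns concept logits and exposes a label_head); variable names are illustrative only.

import torch

def intervene(model, x, c_true, k):
    # Replace k randomly chosen predicted concepts with their ground-truth
    # values, then recompute the downstream label prediction.
    with torch.no_grad():
        c_logits, _ = model(x)
        c_hat = torch.sigmoid(c_logits)
        idx = torch.randperm(c_hat.shape[1])[:k]
        c_hat[:, idx] = c_true[:, idx]
        return model.label_head(c_hat)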
iXjsAarmqn | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023ζ-mixup : Richer, More Realistic Mixing of Multiple ImagesKumar Abhishek1kabhishe@sfu.caColin J. Brown2colin.brown@hingehealth.comGhassan Hamarneh1hamarneh@sfu.ca1School of Computing Science, Simon Fraser University, Canada2Hinge Health, CanadaEditors: Accepted for publication at MIDL 2023AbstractData augmentation (DA), an effective regularization technique, generates training samplesto enhance the diversity of data and the richness of label information for training moderndeep learning models. mixup , a popular recent DA method, augments training datasetswith convex combinations of original samples pairs, but can generate undesirable sam-ples, with data being sampled off the manifold and with incorrect labels. In this work,we propose ζ-mixup , a generalization of mixup with provably and demonstrably desirableproperties that allows for convex combinations of N≥2 samples, thus leading to morerealistic and diverse outputs that incorporate information from Noriginal samples using ap-series interpolant. We show that, compared to mixup ,ζ-mixup better preserves the intrin-sic dimensionality of the original datasets, a desirable property for training generalizablemodels, and is at least as fast as mixup . Evaluation on several natural and medical imagedatasets shows that ζ-mixup outperforms mixup , CutMix, and traditional DA methods.Keywords: data augmentation, mixup, intrinsic dimensionality, data manifold1. IntroductionGiven the large parameter space of deep learning models, training on small datasets tendsto cause the models to overfit to the training samples, which is especially a problem whentraining with data from high dimensional input spaces such as images, and consequently,benefits from data augmentation (DA) techniques for improved generalization performance.mixup (Zhang et al., 2018), a popular DA method, generates convex combinations of pairsof original training samples and linear interpolations of corresponding labels with a hyper-parameter λ∼[0,1]. The primary hypothesis of mixup and many derivatives is that amodel should behave linearly between any two training samples, even if the distance be-tween samples is large. This implies that we may train the model with synthetic samplesthat have very low confidence of realism; in effect, over-regularizing. We instead arguethat we should only synthesize examples with high confidence of realism, and that a modelshould only behave linearly nearby training samples, supported by research in cognitivesciences showing that human perception between object category boundaries is warped andnot as linear as mixup seems to suggest (Beale and Keil, 1995; Newell and B ̈ ulthoff, 2002).Consider the K-class classification task, where we are provided with a dataset of mpoints{xi}mi=1in aD-dimensional ambient space RDwith the corresponding labels {yi}mi=1in alabel space L={l1,···, lK} ∈RK. Keeping in line with the manifold hypothesis (Cayton,2005; Fefferman et al., 2016), which states that complex data manifolds in high dimensional©2023 CC-BY 4.0, K. Abhishek, C.J. Brown & G. Hamarneh.Abhishek Brown Hamarneh( d ) ( e ) ( f ) Figure 1: (a) An overview of ζ-mixup with original ( ◦) and synthetic ( △) samples. Note howmixup ((b), (d)) does not respect individual class boundaries and can generateincorrect samples, that lie off the data manifold, with incorrect labels. 
ζ-mixup ((a), (c), (e)) can mix any number of samples (e.g., 3 in (a), 4 or 8 in (c), and 25 in (e)) and the generated samples remain close to the original distribution while incorporating rich information from several samples. (f) The hyperparameter γ in the ζ-mixup formulation can control the diversity of the synthetic samples.

ambient spaces are actually made up of samples from manifolds with low intrinsic dimensionalities (D_int), we assume that the m points are samples from K manifolds {M_i}_{i=1}^K with D_int as {d_i}_{i=1}^K, where d_i << D ∀ i ∈ [1, K] (Fig. 1 (a)). We seek an augmentation method that facilitates a denser sampling of each intrinsic manifold M_i, thus generating more real and more diverse samples with richer labels. Following Wood et al. (2021); Wood (2021), we consider three criteria for evaluating the quality of synthetic data: (i) realism: allowing the generation of correctly labeled synthetic samples close to the original samples, ensuring the realism of the synthetic samples, (ii) diversity: facilitating the synthesis of more diverse samples by allowing exploration of the input space, and (iii) label richness when generating synthetic samples while still staying on the manifold of realistic samples. Additionally, we aim for: (iv) valid probabilistic labels along with (v) computationally efficient augmentation of training batches (e.g., avoiding inter-sample distance calculations).

To this end, we propose to synthesize a new sample (x̂_k, ŷ_k) as x̂_k = Σ_{i=1}^N w_i x_i and ŷ_k = Σ_{i=1}^N w_i y_i, where the w_i are the weights assigned to the N samples being mixed. One such suitable weighting scheme is to sample weights from the terms of a p-series, i.e., w_i = i^{-p}, which is a convergent series for p ≥ 1. Extending the idea of local synthetic instances for connectome augmentation (Brown et al., 2015), we adopt the following formulation: given N samples (where 2 ≤ N ≤ m and thus, theoretically, the entire dataset), an N × N random permutation matrix π, and the resulting randomized ordering of samples s = π [1, 2, . . . , N]^T, the weights are defined as w_i = s_i^{-γ} / C, i ∈ [1, N], where the hyperparameter γ allows us to control how far the synthetic samples can stray away from the original samples. C is the normalization constant ensuring that the w_i satisfy w_i ≥ 0 ∀ i and Σ_{i=1}^N w_i = 1, such that ŷ_k is a valid probabilistic label, where C = Σ_{j=1}^N j^{-γ} is the N-truncated Riemann zeta function (Riemann, 1859) ζ(z) evaluated at z = γ, and thus we call our method ζ-mixup. Since there exist N! possible N × N random permutation matrices, given N original samples, ζ-mixup can synthesize N! new samples for a single γ, unlike mixup, which can only synthesize 1 new sample per sample pair for a single λ.

Table 1: Classification error on CIFAR datasets averaged over 3 runs (γ ∈ U[γ_min, 4.0]).

Method    CIFAR-10 (ResNet-18)   CIFAR-100 (ResNet-18)
ERM       5.48                   23.33
mixup     4.68                   21.85
ζ-mixup   4.42                   21.35

Method      CIFAR-10 (ResNet-18)   CIFAR-10 (ResNet-50)   CIFAR-100 (ResNet-18)   CIFAR-100 (ResNet-50)
CutMix      4.13                   4.08                   19.97                   18.99
+ ζ-mixup   3.84                   3.61                   19.54                   18.86

Table 2: Micro-averaged F1 score on skin lesion image datasets (γ = 2.8).

Method    ISIC 2016 (ResNet-18 / ResNet-50)   ISIC 2017 (ResNet-18 / ResNet-50)   ISIC 2018 (ResNet-18 / ResNet-50)   DermoFit (ResNet-18 / ResNet-50)
ERM       0.7836 / 0.8127                     0.7383 / 0.6867                     0.8756 / 0.8653                     0.8269 / 0.8500
mixup     0.7968 / 0.8179                     0.7333 / 0.7433                     0.8394 / 0.8601                     0.8577 / 0.8500
ζ-mixup   0.8654 / 0.8602                     0.7633 / 0.7733                     0.8756 / 0.9016                     0.8731 / 0.8962
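Before turning to the results, a minimal NumPy sketch of the ζ-mixup weighting defined above (w_i = s_i^{-γ}/C with C = Σ_{j=1}^N j^{-γ}) is given below. It mixes all N samples of a batch using one random ordering per synthetic sample; it is an illustrative re-implementation under these assumptions, not the authors' optimized code.

import numpy as np

def zeta_mixup(x, y_onehot, gamma=2.8, seed=None):
    # x: (N, ...) batch of samples; y_onehot: (N, K) one-hot labels.
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    # One randomized ordering s = pi [1, ..., N]^T per synthetic sample.
    s = np.stack([rng.permutation(n) + 1 for _ in range(n)])  # (N, N) ranks in 1..N
    w = s.astype(np.float64) ** (-gamma)
    w /= w.sum(axis=1, keepdims=True)                         # divide by C
    x_flat = x.reshape(n, -1)
    x_hat = (w @ x_flat).reshape(x.shape)                     # x_hat_k = sum_i w_ki x_i
    y_hat = w @ y_onehot                                      # valid probabilistic labels
    return x_hat.astype(x.dtype), y_hat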
Moreover, as a result of itsformulation, ζ-mixup presents two desirable properties: (1)for all values of γ≥γmin=1.72865, the weight assigned to one sample is greater than the sum of weights assigned toall other samples, implicitly introducing the desired notion of linearity in only the localityof original samples; and (2)forN= 2 and γ= log2λ1−λ,ζ-mixup simplifies to mixup .2. Results and DiscussionUsing a PCA-based local Dintestimator calculated using a k-nearest neighborhood aroundeach sample, with k= 128 (Fukunaga and Olsen, 1971), we find that Dintfor CIFAR-10 andCIFAR-100 using ζ-mixup are lower than using mixup : 26.83±6.53 (versus 35 .43±9.47) and24.76±6.22 (versus 32 .41±8.65), respectively, thus showing that ζ-mixup indeed preservesthe low Dintthat natural image datasets lie in (Ruderman, 1994; Pope et al., 2021), whilemixup ’s off-manifold sampling leads to an inflated estimate of local Dint. Tables 1 and 2show the classification performance using traditional DA techniques, e.g., rotation, flipping,and cropping (“ERM”), against those trained with mixup andζ-mixup outputs as well ascompare the benefit of applying ζ-mixup to an orthogonal DA method, CutMix (Yun et al.,2019), as evaluated on natural: CIFAR-10 and CIFAR-100 and medical (skin lesion): ISIC2016 (Gutman et al., 2016), 2017 (Codella et al., 2018), and 2018 (Codella et al., 2019), andDermoFit (Ballerini et al., 2013) image datasets. We report the error rate and the micro-averaged F1-score for natural and medical image datasets, respectively, since the latter areclass-imbalanced. We observe that ζ-mixup improves performance across the board. Ouroptimized ζ-mixup implementation is 2 .1×faster than the original mixup implementation,while similar training time is recorded for both of them for CIFAR-10/100 ( ∼1h 20m).Conclusion: We proposed ζ-mixup , a parameter-free multi-sample generalization of thepopular mixup technique for data augmentation that combines N≥2 samples withoutsignificant computational overhead. The ζ-mixup formulation allows for the weight assignedto one sample to dominate all the others, thus ensuring the synthesized samples are on orclose to the original data manifold. This leads to generating samples that are more realisticand, along with allowing N > 2, generates more diverse samples with richer labels comparedtomixup . Future work will include exploring ζ-mixup in the learned feature space.3Abhishek Brown HamarnehAcknowledgmentsThe authors are grateful to StackOverflow user obchardon and Ashish Sinha for code op-timization suggestions and to Saeid Asgari Taghanaki for initial discussions. The authorsare also grateful for the computational resources provided by NVIDIA Corporation andDigital Research Alliance of Canada (formerly Compute Canada). Partial funding for thisproject was provided by the Natural Sciences and Engineering Research Council of Canada(NSERC).ReferencesLucia Ballerini, Robert B Fisher, Ben Aldridge, and Jonathan Rees. A color and texturebased hierarchical K-NN approach to the classification of non-melanoma skin lesions. InColor Medical Image Analysis , pages 63–86. Springer, 2013.James M Beale and Frank C Keil. Categorical effects in the perception of faces. Cognition ,57(3):217–239, 1995.Colin J Brown, Steven P Miller, Brian G Booth, Kenneth J Poskitt, Vann Chau, Anne RSynnes, Jill G Zwicker, Ruth E Grunau, and Ghassan Hamarneh. 
Prediction of motorfunction in very preterm infants using connectome features and local synthetic instances.InInternational Conference on Medical Image Computing and Computer-Assisted Inter-vention (MICCAI) , pages 69–76. Springer, 2015.Lawrence Cayton. Algorithms for manifold learning. University of California at San DiegoTechnical Report , 12(1-17):1, 2005.Noel Codella, Veronica Rotemberg, Philipp Tschandl, M Emre Celebi, Stephen Dusza,David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti,et al. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by theInternational Skin Imaging Collaboration (ISIC). arXiv preprint arXiv:1902.03368 , 2019.Noel CF Codella, David Gutman, M Emre Celebi, Brian Helba, Michael A Marchetti,Stephen W Dusza, Aadi Kalloo, Konstantinos Liopyris, Nabin Mishra, Harald Kittler,et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 interna-tional symposium on biomedical imaging (ISBI), hosted by the international skin imagingcollaboration (ISIC). In 2018 IEEE 15th International Symposium on Biomedical Imaging(ISBI 2018) , pages 168–172. IEEE, 2018.Charles Fefferman, Sanjoy Mitter, and Hariharan Narayanan. Testing the manifold hypoth-esis. Journal of the American Mathematical Society , 29(4):983–1049, 2016.Keinosuke Fukunaga and David R Olsen. An algorithm for finding intrinsic dimensionalityof data. IEEE Transactions on Computers , 100(2):176–183, 1971.David Gutman, Noel CF Codella, Emre Celebi, Brian Helba, Michael Marchetti, NabinMishra, and Allan Halpern. Skin lesion analysis toward melanoma detection: A chal-lenge at the International Symposium on Biomedical Imaging (ISBI) 2016, hosted by theInternational Skin Imaging Collaboration (ISIC). arXiv preprint arXiv:1605.01397 , 2016.4Multi-Sample ζ-mixupFiona N Newell and Heinrich H B ̈ ulthoff. Categorical perception of familiar objects. Cog-nition , 85(2):113–143, 2002.Phil Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, and Tom Goldstein. Theintrinsic dimension of images and its impact on learning. In International Conferenceon Learning Representations (ICLR) , 2021. URL https://openreview.net/forum?id=XJk19XzGq2J .Bernhard Riemann. Ueber die anzahl der primzahlen unter einer gegebenen grosse. Ges.Math. Werke und Wissenschaftlicher Nachlaß , 2(145-155):2, 1859.Daniel L Ruderman. The statistics of natural images. Network: computation in neuralsystems , 5(4):517, 1994.Erroll Wood. Synthetic data with digital humans. Microsoft Sponsor Session, CVPR2021, 2021. URL https://www.microsoft.com/en-us/research/uploads/prod/2019/09/2019-10-01-Synthetic-Data-with-Digital-Humans.pdf .Erroll Wood, Tadas Baltruˇ saitis, Charlie Hewitt, Sebastian Dziadzio, Thomas J Cashman,and Jamie Shotton. Fake it till you make it: Face analysis in the wild using synthetic dataalone. In Proceedings of the IEEE/CVF International Conference on Computer Vision(ICCV) , pages 3681–3691, 2021.Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, andYoungjoon Yoo. CutMix: Regularization strategy to train strong classifiers with localiz-able features. In Proceedings of the IEEE/CVF International Conference on ComputerVision (ICCV) , pages 6023–6032, 2019.Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyondempirical risk minimization. In International Conference on Learning Representations(ICLR) , 2018.5 |
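As a companion to the intrinsic dimensionality comparison in Section 2, the sketch below outlines one way to compute a PCA-based local D_int estimate around each sample from its k-nearest neighborhood (k = 128 in the paper). The explained-variance cutoff used to count components is an assumption made here for illustration; Fukunaga and Olsen (1971) describe the original criterion.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def local_intrinsic_dim(points, k=128, var_threshold=0.95):
    # points: (m, D) flattened samples; returns one local estimate per point.
    nn = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nn.kneighbors(points)
    dims = []
    for neighborhood in idx:
        cum_var = np.cumsum(PCA().fit(points[neighborhood]).explained_variance_ratio_)
        dims.append(int(np.searchsorted(cum_var, var_threshold) + 1))
    return np.array(dims)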
2M-2-75emE | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionHigh-Fidelity Image Synthesis from Pulmonary NoduleLesion Maps using Semantic Diffusion ModelXuan Zhao xz1919@imperial.ac.ukBenjamin Hou bh1511@imperial.ac.ukDepartment of Computing, Imperial College London, London, UKEditors: Under Review for MIDL 2023AbstractLung cancer has been one of the leading causes of cancer-related deaths worldwide for years.With the emergence of deep learning, computer-assisted diagnosis (CAD) models based onlearning algorithms can accelerate the nodule screening process, providing valuable assis-tance to radiologists in their daily clinical workflows. However, developing such robust andaccurate models often requires large-scale and diverse medical datasets with high-qualityannotations. Generating synthetic data provides a pathway for augmenting datasets ata larger scale. Therefore, in this paper, we explore the use of Semantic Diffusion Mod-els (SDM) to generate high-fidelity pulmonary CT images from segmentation maps. Weutilize annotation information from the LUNA16 dataset to create paired CT images andmasks, and assess the quality of the generated images using the Fr ́ echet Inception Distance(FID), as well as on two common clinical downstream tasks: nodule detection and nodulelocalization. Achieving improvements of 3.96% for detection accuracy and 8.50% for AP 50in nodule localization task, respectively, demonstrates the feasibility of the approach.Keywords: controlled image synthesis, lung nodules, semantic diffusion model1. IntroductionAccurate detection and localization of pulmonary nodules using Computed Tomography(CT) is one of the main ways to perform early diagnosis of lung cancer. Deep learninghas aided the diagnosis of lung cancer since its emergence. However, current methods fordetecting lung nodules typically only predict their centers, while the size of the nodules,a critical diagnostic criterion, is often overlooked. The volume of a nodule can be used todifferentiate between benign and malignant nodules, with larger volumes often indicatingmalignancy. Additionally, changes in nodule volume over time can be used to assess treat-ment response or disease progression (Gavrielides et al., 2009). Lung nodules are often quitesmall, they can exhibit a wide range of shapes that vary drastically with different seman-tic features. Gradient-based inpainting methods, such as Poisson blending, are unreliablewhen the nodule volume is too small. On the other hand, simple cut-and-paste techniquescan introduce spatial discontinuity. In recent literature, diffusion models have been provento be capable of generating very realistic images (Yang et al., 2022), which is particularlyuseful as a data augmentation method for medical imaging tasks (Chen et al., 2022). Inthis paper, we leverage Semantic Diffusion Model (SDM) (Wang et al., 2022) to synthesizehigh-fidelity pulmonary CT images from segmentation masks containing lung nodules. Ourproposed method permits controlled synthesis of nodule shape and size whilst maintainingimage quality and diversity. The results obtained demonstrate the beneficial performanceof our pipeline in subsequent downstream tasks.©2023 CC-BY 4.0, X. Zhao & B. Hou.Zhao Hou2. 
Data and MethodOur method utilizes the LUNA16 dataset (Setio et al., 2017), a subset of the publiclyavailable LIDC-IDRI dataset, which contains 888 chest CT scans and 1186 marked nodules.For all experiments, the CT window is set between [-1000,400], and operations are performedin 2D. Slices are considered only if they contain lung structure, as nodules do not existoutside these regions. To create nodule masks, the nodules are first cropped sphericallybased on the centroid and diameter information from the provided annotations. A manualOTSU threshold is then applied to each cropped region-of-interest to get the final masks.The intensities of the cropped pixels are clustered using the K-Means algorithm with twocenters, and the threshold is selected as the average of these centers. Additionally, a ‘bodymask’ is also generated, which comprises the entire patient’s body. Each slice is intensitythresholded at 127, followed by a morphological hole fill process. The largest connectedregion is then selected. The final mask is then composed of the structures in this order;background, left lung, right lung, trachea, body mask, and nodule if one is present.Our data preprocessing method above results in 1139 slices with nodules and 128059slices without nodules. For all experiments, the training and testing sets are divided bypatient ID to ensure no data cross-contamination. Specifically, 744 patients were usedfor training, while 144 patients were used for testing. As nodule-free slices would greatlyoutnumber slices with nodules (almost 1 in 100), only 1 in 4 (empirically selected) nodule-free slices are selected to train the generative models, as well as subsequent downstreamtasks. SDM and SPADE (Park et al., 2019), a previous state-of-the-art method, are trainedand used to generate synthetic 2D pulmonary CT slices; 1000 slices with nodules, andanother 1000 that are nodule-free. Two downstream tasks, namely nodule detection andlocalization, are then trained with a mixture of synthetic and real samples. SDM was trainedwith an image size of 256x256 (reduced due to resource availability) and a batch size of 2.Training took approximately 2 days for 100,000 steps, using the AdamW optimizer withan initial learning rate of 1e-4. SPADE was trained with an image resolution of 512x512and a batch size of 16. Training took approximately 7 hours for 20 epochs, using the Adamoptimizer with an initial learning rate of 1e-4. All experiments were conducted on a machinewith an NVIDIA A6000 GPU.3. Experiments and ResultAll experiments are run for 10-folds with the synthetic images in the test fold being excludedif there are any, and significance of accuracy/AP 50is confirmed by Wilcoxon rank-sum testas shown in the p-value column. Table 1 shows the relevant metrics of two models beforeand after adding diffusion-generated images in the training set. For nodule detection task(determining whether a cropped 32 ×32 patch is nodule or non-nodule), a SE-ResNet (Huet al., 2019) was trained. Table 1 shows that adding SDM-generated images increasesboth the accuracy and F1 score, as well as lower the standard deviation, when comparedto the baseline and SPADE. For nodule localization task, a Faster R-CNN model (Renet al., 2016) was trained for detecting the location/bounding boxes of nodules on a 2Dslice. The model trained with the additional SDM-generated images outperformed boththe baseline and the SPADE-image-trained model in AP and AR in all selected IoU testpoints. 
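For reference, the snippet below sketches two of the preprocessing steps described above: choosing the nodule threshold as the average of two K-Means intensity centers, and building the body mask by thresholding at 127, filling holes, and keeping the largest connected component. Function names and the 0-255 rescaling of the windowed slice are assumptions for illustration, not the exact pipeline code.

import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def nodule_threshold(roi_pixels):
    # Cluster the cropped ROI intensities into two groups; the threshold is
    # the average of the two cluster centers.
    centers = KMeans(n_clusters=2, n_init=10).fit(roi_pixels.reshape(-1, 1)).cluster_centers_
    return float(centers.mean())

def body_mask(slice_u8):
    # slice_u8: CT slice windowed to [-1000, 400] and rescaled to [0, 255].
    mask = ndimage.binary_fill_holes(slice_u8 > 127)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)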
Figure 1A shows samples generated by SDM and SPADE, and Figure 1B shows2Short Titleexample downstream nodule localizations. Finally, the FID scores for SDM among noduleand non-nodule diffusion-generated images are 80.820 and 84.494, respectively, and the FIDscores of those for SPADE-generated images are 186.609 and 147.451.Accuracy (%) Precision (%) Rec./Sen. (%) Specificity (%) F1 p-valueI,A 85 .76±1.69 85 .63±3.66 86.42±5.05 85 .03±6.10 85 .83±1.61 -I,B 88 .99±1.32 83 .3±2.74 90 .64±2.86 87.86±2.98 86 .76±1.31 82 .0×10−6I,C 89 .72±1.26 85.09±2.21 90 .37±2.14 89 .29±1.93 87 .61±1.23 9 .54×10−6AP 50(%) AP 60(%) AR 50(%) AR 60(%) AR 70(%) p-valueII,A 80 .26±5.62 73 .73±6.03 89 .23±3.85 83 .62±4.12 64 .96±4.39 -II,B 80 .04±4.60 72 .37±5.14 90 .18±3.52 83 .52±4.10 66 .24±3.50 0.985II,C 88 .75±3.21 84 .80±3.72 95 .02±2.15 91 .55±2.49 78 .08±2.96 4 .825×10−4Table 1: Relevant metrics on 4 experiments: I: Nodule detection task. II: Nodule localization task.A: Without synthetic images in train set. B: With SPADE images in train set. C: With diffusionimages in train set. p-value is generated between A (control experiment) and other experiments (Bor C). AP and AR are Average Precision and Recall, and the subscript denotes the IoU% used.(A) (B)Figure 1: (A) Example images generated by SDM and SPADE. L-to-R: CT image, CT mask,SDM image, SPADE image. Top: Nodule slice. Bottom: Nodule-free slice. (B) Localizationdownstream task. Top: SDM, Bottom: SPADE. Left: Correctly identified nodules, Right: FalseNegative/Positive detections. Green box is ground truth and red box is prediction.4. Discussion and ConclusionThe FID score of SDM-generated images is much lower than SPADE-generated images, indi-cating the quality of synthetic images via SDM is significantly better than synthetic imagesvia SPADE. However, this comes at a trade-off where generating synthetic images usingSDM is much more time-consuming and computationally expensive compared to SPADE(i.e.∼10 min/image for SDM whilst 320 images/min for SPADE running on Nvidia A6000GPU). Surprisingly, in the SDM images, fine details in the trachea region of original imagesare preserved, while in the SPADE images the area is filled with random noises/strokes.Overall, our experiments have shown that SDM has the potential of generating high-fidelitypulmonary CT images, even with nodules of small diameters, as evident by the improve-ment of downstream tasks compared to SPADE and baseline. Future work include trainingSDM in 2.5D in order to perform 3D volume generation, and also to extend the mask classto include nodule malignancy.3Zhao HouReferencesYizhou Chen, Xu-Hua Yang, Zihan Wei, Ali Asghar Heidari, Nenggan Zheng, Zhicheng Li,Huiling Chen, Haigen Hu, Qianwei Zhou, and Qiu Guan. Generative adversarial networksin medical image augmentation: A review. Comput. Biol. Medicine , 144:105382, 2022.Marios A Gavrielides, Lisa M Kinnard, Kyle J Myers, and Nicholas Petrick. Noncalcifiedlung nodules: volumetric assessment with thoracic ct. Radiology , 251(1):26–37, 2009.Jie Hu, Li Shen, Samuel Albanie, Gang Sun, and Enhua Wu. Squeeze-and-excitation net-works, 2019.Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. Semantic image synthesiswith spatially-adaptive normalization. In CVPR , pages 2337–2346. Computer VisionFoundation / IEEE, 2019.Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-timeobject detection with region proposal networks, 2016.Arnaud Arindra Adiyoso Setio, Alberto Traverso, Thomas de Bel, Moira S. N. 
Berens,Cas van den Bogaard, Piergiorgio Cerello, Hao Chen, Qi Dou, Maria Evelina Fan-tacci, Bram Geurts, Robbert van der Gugten, Pheng-Ann Heng, Bart Jansen, MichaelM. J. de Kaste, Valentin Kotov, Jack Yu-Hung Lin, Jeroen T. M. C. Manders, Alexan-der S ́ onora-Mengana, Juan Carlos Garc ́ ıa-Naranjo, Evgenia Papavasileiou, and MathiasProkop. Validation, comparison, and combination of algorithms for automatic detectionof pulmonary nodules in computed tomography images: The LUNA16 challenge. MedicalImage Anal. , 42:1–13, 2017.Weilun Wang, Jianmin Bao, Wengang Zhou, Dongdong Chen, Dong Chen, Lu Yuan, andHouqiang Li. Semantic image synthesis via diffusion models. CoRR , abs/2207.00050,2022.Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, YingxiaShao, Wentao Zhang, Ming-Hsuan Yang, and Bin Cui. Diffusion models: A comprehensivesurvey of methods and applications. CoRR , abs/2209.00796, 2022.4 |
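Relating back to the p-values in Table 1, the per-fold metrics of the control run and an augmented run can be compared with a Wilcoxon rank-sum test as sketched below; the fold scores shown are placeholders, not the reported results.

from scipy.stats import ranksums

# Accuracy (or AP50) from the 10 folds of experiment A (control) and C (diffusion).
acc_control = [0.85, 0.86, 0.84, 0.87, 0.85, 0.88, 0.86, 0.84, 0.87, 0.86]
acc_sdm = [0.90, 0.89, 0.91, 0.88, 0.90, 0.89, 0.91, 0.90, 0.88, 0.90]

stat, p_value = ranksums(acc_control, acc_sdm)
print(f"Wilcoxon rank-sum p-value: {p_value:.3g}")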
B97_xzj69FK | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionA Novel Approach for Assessment of Clonal Hematopoiesisof Indeterminate Potential Using Deep Neural NetworksSangeon Ryu1allen.ryu@yale.eduShawn Ahn1shawn.ahn@yale.eduJeacy Espinoza2jeacy.espinoza@yale.eduAlokkumar Jha2alokkumar.jha@yale.eduStephanie Halene3stephanie.halene@yale.eduJames S. Duncan1james.duncan@yale.eduJennifer M Kwan*2jennifer.kwan@yale.eduNicha C. Dvornek*1nicha.dvornek@yale.edu1Department of Radiology & Biomedical Imaging, Yale School of Medicine, New Haven, USA2Section of Cardiovascular Medicine, Yale School of Medicine, New Haven, USA3Section of Hematology, Yale School of Medicine, New Haven, USA*co-corresponding authorsEditors: Under Review for MIDL 2023AbstractWe propose a novel diagnostic method for clonal hematopoiesis of indeterminate potential(CHIP), a condition characterized by the presence of somatic mutations in hematopoieticstem cells without detectable hematologic malignancy, using deep-learning techniques. Wedeveloped a convolutional neural network (CNN) to predict CHIP status using 4 differentviews from standard delayed gadolinium-enhanced cardiac MRI. We used 5-fold cross val-idation on 82 patients to assess the performance of our model. Different algorithms werecompared to find the optimal patient-level prediction method using the image-level CNNpredictions. We found that the best model had an AUC of 0.85 and an accuracy of 82%.We conclude that a deep learning-based diagnostic approach for CHIP is promising.Keywords: Deep learning, clonal hematopoiesis of indeterminate potential, cardiovasculardisease, cardiac MRI1. IntroductionClonal hematopoiesis of indeterminate potential (CHIP) is an age-related premalignantcondition, characterized by the presence of clonally expanded hematopoietic stem cellscaused by a leukemogenic mutation in individuals without evidence of hematologic ma-lignancy (Marnell et al., 2021). CHIP is an independent risk factor for cardiovasculardiseases (CVDs), such as atherosclerosis, myocardial infarction, and congestive heart fail-ure (Mooney et al., 2021). CVDs such as these are one of the leading causes of morbidityand mortality worldwide; thus, being able to augment the identification of CHIP beyondDNA sequencing is imperative. Further, although CHIP independently increases the risk ofheart disease and heart failure, not all CHIP patients develop these adverse cardiovascularevents. Thus, use of machine learning approaches can potentially identify imaging featuresthat can risk stratify who may develop CVD amongst CHIP patients.©2023 CC-BY 4.0, S. Ryu, S. Ahn, J. Espinoza, A. Jha, S. Halene, J.S. Duncan, J.M. Kwan* & N.C. Dvornek*.Ryu Ahn Espinoza Jha Halene Duncan Kwan* Dvornek*Traditionally, CHIP is diagnosed through next-generation sequencing (NGS), a tech-nique that can determine a person’s DNA sequence. For this, however, the patient’s bloodor bone marrow sample must be acquired, almost always through invasive means. As NGScan take hours to days to return a result as well, a quicker, non-invasive method for evalu-ation of CHIP becomes more desirable.Preliminary data shows that CHIP is associated with increased fibrosis in human en-gineered heart tissue. Delayed gadolinium enhancement (DGE) is the method of choicefor detecting myocardial fibrosis in magnetic resonance imaging (MRI). 
Thus, we soughtto explore whether fibrosis burden and fibrosis features on cardiac MRI (cMRI) via DGEsignatures could indicate if the patient had CHIP.2. MethodsWe enrolled an anonymized collection of DGE-cMRI images from 82 patients (42% withCHIP), whose genomic DNA was extracted from peripheral blood samples and sequencedto determine CHIP. Cardiac MRI was performed on 1.5 and 3T scanners, with DGE eval-uation performed 8-10 minutes after administration of contrast. Each patient had up to 4different views in their collection of cMRIs: short-axis view (SAS); 4-chamber view (4CH);vertical long axis (VLA); and left ventricular outflow view (LVOT). Multiple views wereincorporated into the prediction model so that we could capture a more complete overviewof the heart. Each patient had up to 5-7 SAS views for their cMRIs, but only one cMRIimage for the other 3 views; some patients had fewer SAS images and/or were missing oneor more of the other 3 views. Missing views were replaced by images with all 0s.The model was a CNN designed for binary classification (Fig. 1), with 4 convolutionallayers and 3 max pooling layers. Uniquely, we incorporated all 4 views as inputs to themodel. Each view underwent processing by the convolutional layers, and features from thefinal convolutional layer were concatenated and processed by fully connected layers. Theoutput of the model gave the probability of CHIP based on the 4-view cMRI sample.The model was trained using a 5-fold cross validation framework in order to assess theperformance of the model. Each fold contained between 14 and 15 patients; we ensured thatall cMRI images belonging to the same patient were in the same fold. Each of the 5 foldswere used once as a test set, while the other 4 folds were combined to be the training set. Inaddition to standard image data augmentation techniques, as SAS view included multipleimage slices, random combinations of the 4 views were used to augment the number ofsamples. The model for each fold was trained using binary cross-entropy loss for 300 epochsand then evaluated on the test set. The evaluation profiles were then combined to givean overview of the model architecture’s performance in the binary classification task usingreceiver operating characteristic (ROC) curve analysis.The model was classifying on an ”image-level” - that is, it was classifying each of theimage sets (one cMRI from each of the 4 views) into one of the two categories, ”CHIP” or”NO CHIP”. To extend this to the ”patient-level” - that is, combining the predictions for allthe images belonging to a patient to make a single classification for the patient themselves- we tested different thresholding approaches for combining the image-level predictions tomake a prediction for the patient. Specifically, two approaches were explored: the ratiothresholding method, which took the portion of the image sets belonging to a patient that2Short Titlewere classified in the CHIP category, and if the ratio was greater than the threshold (=0.4),the patient was classified as CHIP; and the max thresholding method, which classified apatient as CHIP if the patient’s 4-view image set with maximum probability of CHIP wasgreater than the threshold.3. Results and ConclusionsWe found that between the two thresholding methods, the ratio-thresholding approachperformed much better than the max-thresholding method (AUC=0.85 vs. AUC=0.63, Fig.2). 
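A minimal sketch of the two patient-level aggregation rules compared here is given below, operating on the image-level CHIP probabilities that the CNN produces for a patient's 4-view image sets. The 0.5 image-level cutoff is an assumption for illustration; the 0.4 ratio threshold is the value stated above.

import numpy as np

def ratio_threshold(image_probs, prob_cutoff=0.5, ratio_cutoff=0.4):
    # Fraction of the patient's image sets classified as CHIP.
    votes = np.asarray(image_probs) >= prob_cutoff
    return bool(votes.mean() > ratio_cutoff)

def max_threshold(image_probs, prob_cutoff=0.5):
    # Patient is CHIP if the most confident image set exceeds the cutoff.
    return bool(np.max(image_probs) > prob_cutoff)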
In addition, using the ratio-thresholding method, our approach was able to predict thepatient’s CHIP status with an accuracy of 82%.Figure 1: Network architecture for CHIP classifica-tion from multi-view DGE-cMRI. CNN,convolutional neural network; MLP, mul-tilayer perceptron.Figure 2: ROC curves of the twothresholding methods.Top=max-thresholding,bottom=ratio-thresholding.In conclusion, we proposed a novel approach for determining CHIP from multi-viewDGE-cMRI. Our promising early results suggest non-invasive, routine imaging may supple-ment the diagnosis of CHIP. Future work will extend validation of our approach on largepublic datasets (e.g., TOPMed) and apply model interpretation techniques (Adebayo et al.,2018) to identify cMRI biomarkers for CHIP as well as imaging features that can predictadverse cardiovascular outcomes in CHIP patients.ReferencesJulius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and BeenKim. Sanity checks for saliency maps. Advances in neural information processing systems ,31, 2018.Christopher S Marnell, Alexander Bick, and Pradeep Natarajan. Clonal hematopoiesisof indeterminate potential (chip): Linking somatic mutations, hematopoiesis, chronic3Ryu Ahn Espinoza Jha Halene Duncan Kwan* Dvornek*inflammation and cardiovascular disease. Journal of molecular and cellular cardiology ,161:98–105, 2021.Leanne Mooney, Carl S Goodyear, Tamir Chandra, Kristina Kirschner, Mhairi Copland,Mark C Petrie, and Ninian N Lang. Clonal haematopoiesis of indeterminate potential:intersections between inflammation, vascular disease and heart failure. Clinical Science ,135(7):991–1007, 2021.4 |
BLWmZy6kSL7 | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionF acial AU-aid hypomimia diagnosis based on GNNYingjing Xu1poppyxu@zju.edu.cnBo Lin2,3 ∗rainbowlin@zju.edu.cnW ei Luo4luoweirock@zju.edu.cnShuiguang Deng2dengsg@zju.edu.cnJianwei Yin2zjuyjw@cs.zju.edu.cn1School of Software T echnology, Zhejiang University2Col lege of Computer Science and T echnology, Zhejiang University3Innovation Centre for Information, Binjiang Institute of Zhejiang University4Second Affiliated Hospital, Zhejiang University School of MedicineAbstractHypomimia is a prevalent symptom of Parkinson’s Disease(PD). It is characterized byreduced facial expression and delayed facial movement. The work proposes a framework touse Graph Neural Network(GNN) to extract related action unit(AU) features on the facialsmiling videos to help to improve the recognition of hypomimia with PD. AU is an effectiverepresentation of the facial state and movement, while GNN has great capability to presentrelationship information between facial areas. A related AU representation can pay moreattention to the relationships between the facial areas in order to increase the accuracyof the diagnosis. Experiments were conducted using an in-house dataset of 105 facialsmiling videos, which contains 55 healthy control(HC) participants and 50 PD patients.Our method’s performance was compared to that of random forest (RF) and support vectormachine (SVM) classifiers. Our method achieved an Accuracy , PPV, TPR, and F1 score of{91.7%, 92.8%, 90.6%, 91.7% }, while the RF and SVM achieved {84.5%,84.8%, 82.7%,83.7%} and {88.7%, 88.0%, 88,7%, 88.3%} respectively on the dataset.Keywords: Hypomimia, Parkinson’s Disease, Action Unit, GNN1. IntroductionParkinson’s disease is a common neurological disease, the prevalence of Parkinson’s diseasein the population over 65 years old is about 1.7%. F acial hypomimia is one of the manifes-tations of motor symptoms, the patient’s facial expression ability is impaired, and the delayof facial movement leads to the reduction of facial movement. The MDS Unified-ParkinsonDisease Rating Scale(MDS-UPDRS) is an authoritative scale used to assess PD in the clinic.With the development of facial recognition, Action Unit(AU), a technique to represent andquantify facial status, has been widely used and can effectively reflect facial movement. Inthis paper we propose a video-based hypomimia recognition framework that utilizes AU thatcombing facial area information and uses GNN to measure the relation between facial AUareas. Our method can extract comprehensive AU information and outperform traditionalmachine learning methods in the experiment.© 2023 CC-BY 4.0 , Y. Xu, B. Lin, W. Luo, S. Deng & J. Yin.Xu Lin Luo Deng YinFigure 1: The pipline of proposed method.2. MethodsW e applied a AU intensity prediction method( Luo et al. ,2022 ) in our method. As detailedbelow, Figure 1 depicts the pipeline of the proposed method. First, Smiling videos are con-verted into aligned frames after dataprocessing. Next, 8 AU representations are extractedby Swin T ransformer( Liu et al. ,2021 ) . After graph construction and convolution, we canget related AU representations. At last, classifier determines hypomimia by related AUrepresentations.Data processing. The original dataset is smiling videos from HC participants and PD pa-tients. The videos were recorded at 1920×1080 pixels with 60 frames per second. 
For each frame, to avoid the interference factors of head position and lighting intensity, we performed face detection, face alignment, and face normalization sequentially using MTCNN (Zhang et al., 2016).

AU feature extraction. To extract the full-face representations, we used the Swin Transformer backbone (Liu et al., 2021). The encoder contains a fully connected layer (FC) and a global average pooling layer (GAP). 8 non-related AU representations can be extracted from a full-face representation by the encoder.

Graph convolution. We defined the non-related AU features extracted by the Swin Transformer as nodes of the graph, and the similarity between pairs of AUs, calculated by a K (K=2) nearest neighbors algorithm, as edges of the graph. We then performed graph convolution on the graph to obtain representations containing related AU information.

Classification. The classifier was a fully connected neural network, and it used cross-entropy loss to determine hypomimia.

3. Experimental results

Experimental setup. We collected 105 smiling videos from The Second Affiliated Hospital of Zhejiang University, which included 50 smiling videos from PD patients and 55 smiling videos from HC participants. After a series of processing by MTCNN, the smiling videos were split into training, validation, and test sets according to each person using the hold-out method, divided into 60, 20, and 20 people respectively. The corresponding video frames were 20,479, 6,898, and 8,030 frames respectively. Support Vector Machine (SVM) and Random Forest (RF) are used as the baselines of the experiment, in which the input of the baselines is AU intensity values. The evaluation metrics used were accuracy, positive predictive value (PPV), true positive rate (TPR), and F1 score. For hyperparameters, the learning rate was set to 0.001, the batch size to 24, and the number of epochs to 20.

Results. The proposed method has shown promising results in extracting facial expressions and identifying hypomimia. As shown in Table 1, our method achieved the best performance, with an accuracy of 0.9167 and an F1 score of 0.9170 on the test set. In comparison, SVM achieved an accuracy of 0.8869 and an F1 score of 0.8834, while RF achieved an accuracy of 0.8448 and an F1 score of 0.8373. Our method outperformed the traditional classifiers in terms of accuracy, PPV, TPR, and F1 score, indicating that the graph representation of facial expressions can better capture the relationships between facial areas and improve the diagnosis of hypomimia with PD.

Table 1: Results on validation and test sets.

Model        Accuracy   PPV     TPR     F1 score
RF           0.845      0.848   0.827   0.837
SVM          0.887      0.88    0.887   0.883
Our method   0.917      0.928   0.906   0.917
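To illustrate the graph construction described in the method above, the sketch below builds a K = 2 nearest-neighbour graph over the 8 AU representations of a frame and applies one simple graph-convolution step. Cosine similarity is used here as the pairwise similarity and the normalization follows a standard GCN form; both are assumptions for illustration rather than the authors' implementation.

import torch
import torch.nn.functional as F

def knn_au_graph(au_feats, k=2):
    # au_feats: (8, d) AU representations of one frame.
    sim = F.cosine_similarity(au_feats.unsqueeze(1), au_feats.unsqueeze(0), dim=-1)
    sim.fill_diagonal_(-float("inf"))            # exclude self-matches from the kNN step
    nbrs = sim.topk(k, dim=-1).indices           # K most similar AUs per node
    adj = torch.zeros(au_feats.size(0), au_feats.size(0))
    adj.scatter_(1, nbrs, 1.0)
    adj = ((adj + adj.t()) > 0).float() + torch.eye(au_feats.size(0))  # symmetrize, add self-loops
    return adj

def graph_conv(au_feats, adj, weight):
    # One propagation step: symmetrically normalized adjacency x features x weights.
    d_inv_sqrt = adj.sum(-1).clamp(min=1).pow(-0.5)
    norm_adj = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
    return torch.relu(norm_adj @ au_feats @ weight)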
For future work, integrating disease-specific characteristics into the graph construction could improve the medical interpretability of the model and the reliability of Parkinson's disease recognition.
Acknowledgments
This research was supported in part by the National Key Research and Development Program of China under Grant 2022YFF0902004, and in part by the "Pioneer" and "Leading Goose" R&D Program of Zhejiang under Grant 2023C03101.
References
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021.
Cheng Luo, Siyang Song, Weicheng Xie, Linlin Shen, and Hatice Gunes. Learning multi-dimensional edge feature-based AU relation graph for facial action unit recognition. arXiv preprint arXiv:2205.01782, 2022.
Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10):1499–1503, 2016.
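To make the graph-construction step described in the paper above concrete (the 8 AU features as nodes, K=2 nearest-neighbour similarity as edges, one round of graph convolution, then a fully connected classifier), the following is a minimal PyTorch sketch. It is not the authors' implementation: the feature dimension, the use of cosine similarity as the AU-pair similarity, and the single dense GCN layer are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn_graph(x, k=2):
    """Row-normalised kNN adjacency (with self-loops) over per-AU node features."""
    with torch.no_grad():
        sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)  # (N, N) AU similarities
        sim.fill_diagonal_(float("-inf"))              # do not pick a node as its own neighbour
        idx = sim.topk(k, dim=-1).indices              # K=2 most similar AUs per node
        adj = torch.zeros_like(sim).scatter_(1, idx, 1.0)
        adj = ((adj + adj.t()) > 0).float()            # make the graph undirected
        adj = adj + torch.eye(x.size(0), device=x.device)
        return adj / adj.sum(dim=1, keepdim=True)


class AUGraphClassifier(nn.Module):
    def __init__(self, in_dim=512, hid_dim=128, n_classes=2, n_aus=8, k=2):
        super().__init__()  # in_dim / hid_dim are assumed, not given in the paper
        self.k = k
        self.gcn = nn.Linear(in_dim, hid_dim)              # shared node transform for the graph convolution
        self.head = nn.Linear(n_aus * hid_dim, n_classes)  # fully connected hypomimia classifier

    def forward(self, au_feats):                           # au_feats: (B, 8, in_dim) Swin AU features
        related = []
        for x in au_feats:                                 # one small 8-node graph per frame
            adj = knn_graph(x, self.k)
            related.append(F.relu(adj @ self.gcn(x)).flatten())
        return self.head(torch.stack(related))             # (B, n_classes) logits


# Usage: cross-entropy on hypomimia (PD) vs. healthy-control labels.
model = AUGraphClassifier()
feats = torch.randn(4, 8, 512)                             # stand-in for extracted AU representations
loss = F.cross_entropy(model(feats), torch.tensor([0, 1, 1, 0]))
```

Because each sample yields its own small 8-node graph, a dense adjacency matrix is cheap here and avoids the need for a dedicated graph library.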
2oCb0q5TA4Y | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023 submissionContrast Invariant Feature Representations for Segmentationand Registration of Medical ImagesYue Zhi, Russ Chua russchua@mit.eduAdrian Vasile Dalca adalca@mit.eduAbstractImaging tasks like segmentation and registration are fundamental in a broad range ofmedical research studies. These tasks are increasingly solved by machine learning basedmethods. However, given the heterogeneity of medical imaging modalities, many existingmethods are not able to generalize well to new modalities or even slight variations of ex-isting modalities, and only perform well on the type of data they were trained on. Mostpractitioners have limited training data for a given task, limiting their ability to train gen-eralized networks. To enable neural networks trained on one image type or modality toperform well on other imaging contrasts, we propose CIFL : contrast invariant feature learn-ing. CIFL uses synthesized images of varying contrasts and artifacts, and an unsupervisedloss function, to learn rich contrast-invariant image features. The resulting representationcan be used as input to downstream tasks like segmentation or registration given somemodality available at training, and subsequently enables performing that task on contrastsnot available during training. In this paper, we perform experiments that demonstrategeneralizability in brain segmentation and registration.Keywords: Segmentation, Registration.1. Introduction and Related WorkImaging technologies including photographs, Magnetic Resonance Imaging (MRI) and Com-putational Tomography (CT)-Scans have provided effective means of medical diagnosis andtreatment. As a result, there is significant variability in medical images given a variety ofacquisition technologies, vendors, protocol choices, and patient populations even within asingle institution. This poses a problem for neuroimaging tools that are usually trainedon very specific MRI pulse sequences in available data sets which poorly generalize to un-seen MRI modalities at inference. To address this, data augmentation techniques are oftenemployed to produce Convolutional Neural Networks (CNNs) that generalize across MRIpulse sequences. Some methods augment data with MRI-based forward models that leveragephysics-domain approximations to generate plausible, synthetic training examples similarto MRI pulse sequences (Jog et al., 2019). Dense Cycle Generative Adversarial Neural Net-works (GANs) achieve adaptation between image modalities by synthesizing one modalityfrom another (Lei et al., 2019). Recent strategies learn to use anatomically-consistent spa-tial deformation fields and intensity augmentations in segmentation tasks (Chaitanya et al.,2019; Zhao et al., 2019). Synthetic generation techniques of unseen MRI contrasts werealso explored (Billot et al., 2020; Hoffmann et al., 2021b; Hoopes et al., 2022). However, allthese methods train networks specific to a certain anatomy or deep-learning task (e.g. ei-ther segmentation or registration). In this paper, we build on synthetic generation methodsbut focus on producing general feature representations that are invariant to image contrast(modality) and are useful for a variety of analysis tasks.©2023 Y.Z.R. Chua & A.V. 
Dalca.Chua Dalca15Random Shapes MaskAssign IntensitiesSynthesized ImagesSynthesisCIFL NetworkCIFL NetworkShared Weights Contrast Invariant Features(output)CIFL LossContrast Invariant Feature Learner (CIFL)Add NoiseFigure 1: From a synthesized label mask of random shapes, we assign random intensitiesto each anatomy class to generate two different contrasts. We subsequently addrandom noise to produce a synthesized image. We apply a Contrast InvariantFeature Learner to each of the two images (with shared weights), giving image-sized feature representations. We use an unsupervised loss which encourages thefeatures to be rich (diverse across channels of the representations) yet similar forthe two contrasts.2. MethodWe define function fθ:Rl×w×h→ Sl×w×h×Cwith parameters θthat encodes a featurerepresentation rfor input image x. The feature representation r=fθ(x) is a C-channelimage of the same spatial dimensions as x. For two images xm1andxm2of the sameanatomy but different modalities m1andm2, we encourage two properties for r:•Similar representation of two modalities from the same anatomy: fθ(xm1)≈fθ(xm2)•Rich representation to be usable in downstream applications: fcθ(xm1) and fc′θ(xm1)should be different, where the superscript crepresents the c-th channel of the featurerepresentation.To achieve these properties, we build on contrastive learning to optimize the lossLcontrastive (θ, τα, τβ, τγ;X)≜E(xm1,xm2)"−logPcefcθ(xm1)·fcθ(xm2)/ταPcefcθ(xm1)·fcθ(xm2)/τβ+PcPc′̸=cefcθ(xm1)·fc′θ(xm1)/τγ#,where τα,τβandτγare individual temperature terms that scale their effects and Xisa dataset of multi-modality images. We employ synthetic images of different shapes andcontrasts to train such a network following SynthMorph (Hoffmann et al., 2021a). This isillustrated pictorially in Figure 1 which provides a broad overview of our CIFL training pro-cess. We employ a CNN to approximate fθwith eight convolutional layers each with kernelsize 3 and normalize the output final CIFL features using a l2normalization layer to dis-tribute features onto a unit hypersphere, producing uniform intensities and closer positive2CIFLTable 1: Performance (Dice Score) on downstream tasks, on modalities unseen during train-ing.Method Dice ScoreCIFLTask Dimension Dataset Baseline τγ= 1 τγ= 0.1 τγ= 0.01Segmentation 2D Inverted OASIS T1 0 .18±0.010.84±0.04 0 .83±0.04 0 .79±0.04Registration 2D OASIS T1-Inverted T1 0 .45±0.020.69±0.05 0 .69±0.05 0 .66±0.06IXI T1-T2 0 .36±0.03 0 .59±0.13 0 .59±0.120.61±0.1313CIFLT1 ImageT2 ImageChannel 1 Channel 2 Channel 3 Channel 4 Channel 5 Channel 6CIFLFigure 2: Example of CIFL features yielded by a trained CIFL network for two differentMRI pulse sequences. The feature representations are similar for any two modal-ities in the same channel shown in each blue rectangle. The representations foreach image are also different across channels (between blue rectangles), whichprovide downstream deep learning models with rich information.alignments (Wang and Isola, 2020). To train standard downstream networks for segmen-tation and registration tasks (Balakrishnan et al., 2019; Ronneberger et al., 2015), we usethe feature representations yielded from the trained CIFL network to generate downstreaminput features during inference on unseen modalities for the same task.3. Experiments and ResultsWe perform preliminary experiments where we first train a CIFL network on image dataof synthetic shapes. 
Then we train a downstream network using CIFL features from a T1MRI brain image, and test the performance of those networks using CIFL features fromunseen modalities. For downstream tasks, we employ the OASIS dataset (Hoopes et al.,2021; Marcus et al., 2007) of T1 images, processed to be normalized, affinely aligned andinclude 5-label segmentation maps consisting of the background, white matter, grey matter,cortical spinal fluid, and thalamus. In this preliminary work, we extract the mid-coronalslice and work in 2D. We partitioned 232 images for training, and 58 for validation. We thentest each model on 100 samples of unseen images from OASIS, and Information eXtractionfrom Images (IXI) dataset1. Our preliminary experimental results in Table 1 show promisethat the CIFL features can enable generalizability to unseen modalities shown in Figure 2.1. IXI Dataset: https://brain-development.org/ixi-dataset/3Chua DalcaReferencesGuha Balakrishnan, Amy Zhao, Mert R Sabuncu, John Guttag, and Adrian V Dalca.Voxelmorph: a learning framework for deformable medical image registration. IEEEtransactions on medical imaging , 38(8):1788–1800, 2019.Benjamin Billot, Douglas Greve, Koen Van Leemput, Bruce Fischl, Juan Eugenio Iglesias,and Adrian V Dalca. A learning strategy for contrast-agnostic mri segmentation. arXivpreprint arXiv:2003.01995 , 2020.Krishna Chaitanya, Neerav Karani, Christian F Baumgartner, Anton Becker, Olivio Donati,and Ender Konukoglu. Semi-supervised and task-driven data augmentation. In Interna-tional conference on information processing in medical imaging , pages 29–41. Springer,2019.Malte Hoffmann, Benjamin Billot, Douglas N Greve, Juan Eugenio Iglesias, Bruce Fis-chl, and Adrian V Dalca. Synthmorph: learning contrast-invariant registration withoutacquired images. IEEE transactions on medical imaging , 41(3):543–558, 2021a.Malte Hoffmann, Benjamin Billot, Juan E Iglesias, Bruce Fischl, and Adrian V Dalca.Learning mri contrast-agnostic registration. In 2021 IEEE 18th International Symposiumon Biomedical Imaging (ISBI) , pages 899–903. IEEE, 2021b.Andrew Hoopes, Malte Hoffmann, Bruce Fischl, John Guttag, and Adrian V Dalca. Hy-permorph: Amortized hyperparameter learning for image registration. In InternationalConference on Information Processing in Medical Imaging , pages 3–17. Springer, 2021.Andrew Hoopes, Jocelyn S Mora, Adrian V Dalca, Bruce Fischl, and Malte Hoffmann.Synthstrip: Skull-stripping for any brain image. arXiv preprint arXiv:2203.09974 , 2022.Amod Jog, Andrew Hoopes, Douglas N Greve, Koen Van Leemput, and Bruce Fischl.Psacnn: Pulse sequence adaptive fast whole brain segmentation. NeuroImage , 199:553–569, 2019.Yang Lei, Joseph Harms, Tonghe Wang, Yingzi Liu, Hui-Kuo Shu, Ashesh B Jani, Walter JCurran, Hui Mao, Tian Liu, and Xiaofeng Yang. Mri-only based synthetic ct generationusing dense cycle consistent generative adversarial networks. Medical physics , 46(8):3565–3581, 2019.Daniel S Marcus, Tracy H Wang, Jamie Parker, John G Csernansky, John C Morris, andRandy L Buckner. Open access series of imaging studies (oasis): cross-sectional mri datain young, middle aged, nondemented, and demented older adults. Journal of cognitiveneuroscience , 19(9):1498–1507, 2007.Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks forbiomedical image segmentation. In International Conference on Medical image computingand computer-assisted intervention , pages 234–241. Springer, 2015.4CIFLTongzhou Wang and Phillip Isola. 
Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929–9939. PMLR, 2020.
Amy Zhao, Guha Balakrishnan, Fredo Durand, John V Guttag, and Adrian V Dalca. Data augmentation using learned transformations for one-shot medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8543–8553, 2019.
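The contrastive objective in Section 2 of the CIFL paper above can be written down compactly. The sketch below is a rough, unofficial rendering of that loss: it assumes 2D feature maps of shape (B, C, H, W), l2-normalisation of each channel over its spatial locations, and arbitrary default temperatures, none of which are fixed by the text.

```python
import torch
import torch.nn.functional as F


def cifl_loss(f1, f2, t_alpha=0.1, t_beta=0.1, t_gamma=1.0):
    """f1, f2: (B, C, H, W) CIFL feature maps of the same anatomy in two contrasts."""
    B, C = f1.shape[:2]
    z1 = F.normalize(f1.flatten(2), dim=-1)               # unit-norm channel vectors (hypersphere)
    z2 = F.normalize(f2.flatten(2), dim=-1)               # (B, C, H*W)

    pos = (z1 * z2).sum(-1)                                # same channel, two contrasts: (B, C)

    cross = torch.einsum("bcd,bkd->bck", z1, z1)           # channel-vs-channel within one contrast
    off_diag = cross[:, ~torch.eye(C, dtype=torch.bool)]   # c != c' richness terms, (B, C*(C-1))

    num = torch.exp(pos / t_alpha).sum(-1)
    den = torch.exp(pos / t_beta).sum(-1) + torch.exp(off_diag / t_gamma).sum(-1)
    return (-torch.log(num / den)).mean()


# Usage: the same network (shared weights) is applied to both synthesized contrasts.
feats1, feats2 = torch.randn(2, 6, 64, 64), torch.randn(2, 6, 64, 64)
print(cifl_loss(feats1, feats2))
```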
NZu3TrXvCk | Medical Imaging with Deep Learning 2023Deep Learning-Based Segmentation of Locally AdvancedBreast Cancer on MRI in Relation to Residual CancerBurden: A Multi-Institutional Cohort StudyMark Janse1m.h.a.janse-2@umcutrecht.nlLiselore Janssen1l.m.janssen-11@umcutrecht.nlBas van der Velden1B.H.M.vanderVelden-2@umcutrecht.nlMaaike Moman1,2maaikemoman@alexandermonro.nlElian Wolters-van der Ben3e.wolters@antoniusziekenhuis.nlMarc Kock4kockm@asz.nlMax Viergever1M.A.Viergever-2@umcutrecht.nlPaul van Diest1p.j.vandiest@umcutrecht.nlKenneth Gilhuijs1k.g.a.gilhuijs@umcutrecht.nl1University Medical Center Utrecht, Utrecht, The Netherlands2Alexander Monro hospital, Bilthoven, The Netherlands3St. Antonius hospital, Nieuwegein, The Netherlands4Albert Schweitzer hospital, Dordrecht, The NetherlandsThis paper was previously published as (Janse et al., 2023)AbstractWhile several methods have been proposed for automated assessment of breast-cancer re-sponse to neoadjuvant chemotherapy on breast MRI, limited information is available abouttheir performance across multiple institutions. In this paper, we assess the value and ro-bustness of nnU-Net-derived volumes of locally advanced breast cancer (LABC) on MRIto infer the presence of residual disease after neoadjuvant chemotherapy. An nnU-Net wastrained to segment LABC on a single-institution training set and validated on a multi-centerindependent testing cohort. Based on resulting tumor volumes, an extremely randomizedtree model was trained to assess residual cancer burden (RCB)-0/I vs. RCB-II/III. Anindependent model was developed using functional tumor volume (FTV). Models weretested on an independent testing cohort, response assessment performance and robustnessacross multiple institutions were assessed. Results show that nnU-Net accurately estimatechanges in tumor load on DCE-MRI, that these changes associated with RCB after NAC,and that they are robust against variations between institutions.Keywords: Breast MRI, segmentation, deep learning, response monitoring, locally ad-vanced breast cancer1. IntroductionNeoadjuvant chemotherapy (NAC) is increasingly used to treat patients with breast canceras it allows monitoring of treatment response with the tumor in situ thus offers opportunityfor more personalized treatment. The most sensitive modality to visualize tumor extent inthree dimensions is dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).Methods for response monitoring on MRI range from manual assessment by radiologiststo methods being investigated for fully automated analysis. Manual assessment has beenshown to be predictive of pathological complete response (pCR), depending on tumor sub-type (Janssen et al., 2022). Combinations of manually selecting regions of interest (ROI) and©2023 CC-BY 4.0, M.J. , L.J. , B.v.d.V. , M.M. , E.W.-v.d.B. , M.K. , M.V. , P.v.D. & K.G. .semi-automatic thresholding have also been proposed, including a semi-automated methodto establish functional tumor volume (FTV) (Hylton et al., 2016). Fully automated data-driven methods to assess response to NAC on MRI have also been proposed, using radiomicsor deep learning (Choi et al., 2019; Comes et al., 2021; Joo et al., 2021). Little is knownabout the robustness of these methods across multiple institutions.This study aimed to establish whether nnU-Net accurately assesses changes in tumorload on DCE-MRI that are associated with residual cancer burden (RCB) after NAC robustto variations between institutions and MRI scanners. 
Secondly, whether such model is inagreement with the relationship between FTV and RCB.2. Methods2.1. DatasetsThe training cohort consisted of 105 consecutively included breast cancer cases treated withneoadjuvant chemotherapy (NAC) in a single institution. Patients underwent two MRIexaminations: The first at baseline prior to NAC, and the second either midway throughthe chemotherapy schedule, or immediately before the second-to-last cycle of chemotherapy.The second MRI examination was defined as the follow-up examination. Either 1 .5 T or3 T Philips scanners were used in this set. The independent testing cohort consisted of54 consecutively included breast cancer cases treated with NAC in four institutions. Thebaseline scan before any treatment were used as well as the follow-up scans after all cycles ofNAC but prior to surgery. Imaging in the testing cohort was performed exclusively on 3 Tscanners, both Philips (Hospital 1 and 4) and Siemens (Hospital 2 and 3). All examinationshad a pre-contrast scan and up to five post-contrast scans acquired at intervals of 55 to89 seconds, fat-supressed was used. Ground-truth annotations of all training scans werederived from a previously reported histopathology-validated semi-automated region grower(Alderliesten et al., 2007).To evaluate the response to NAC on histopathology, the residual cancer burden RCB wasderived from the final post-surgery resection specimens following the methodology describedby Symmans et al (Symmans et al., 2007). The RCB score was dichotomized into twocategories: RCB categories RCB-0 (i.e. pCR, pathological complete response) and RCB-Iwere defined as good responders to NAC. Conversely, categories RCB-II and RCB-III wereconsidered bad responders.2.2. Response assessmentThe ground truth segmentations were used to train a 3D nnU-Net CNN (Isensee et al., 2021).Input to the nnU-Net were the precontrast DCE series and five postcontrast DCE MRI (i.e.six channels in total). To evaluate segmentation performance, two-fold cross-validation wasperformed on the training set, while for final evaluation of response assessment performance,the network was trained on the entire training set.To compare nnU-Net to a previously validated method for response assessment, Func-tional Tumor Volume (FTV) was also calculated per breast from each MRI examinationfollowing the description by Newitt et al. (Newitt et al., 2014).2LABC segmentationTable 1: Performance of tumor response assessment in terms of residual cancer burden(RCB) on a per-hospital basis for the nnU-Net and FTV segmentation methods.Numbers presented are areas under the receiver operator curve (AUC).MR vendor nnU-Net FTVHospital 1 ( n= 12) Philips 0 .63 0 .71Hospital 2 ( n= 21) Siemens 0 .74 0 .75Hospital 3 ( n= 19) Siemens 0 .79 0 .81Hospital 4 ( n= 2) Philips 1 .00 1 .00An extremely randomized tree model was fit to assess tumor response to NAC (Geurtset al., 2006). Three input candidate predictors were used: the lesion volume determinedon the follow-up scan, tumor subtype (HER2+, HER2-/ER+ or triple negative) and thedifference in tumor volume between baseline and follow-up. The end point was tumorresponse expressed as the dichotomized RCB. The area under the receiver operator curve(AUC) was used as measure of model performance. Five-fold nested cross-validation wasperformed for hyperparameter tuning and internal model validation. Two separate modelswere trained, one where the volumes were determined using the previously trained nnU-Net,the second one using FTV.3. 
Results and conclusionThe median (interquartile range (IQR)) cross-validated Dice score from the nnU-Net, was0.87 (0.62-0.93). Pearson’s correlation between volumes derived from the nnU-Net and theground truth was R=0.95 (fold 1: R= 0.93, fold 2: R= 0.97). The correlation betweenthe nnU-Net-derived volume and FTV in the training cohort was R= 0.74 for the baselinescan, R= 0.72 for follow-up, and R= 0.80 for all scans combined. All correlations werestatistically significant ( P < 0.05). In the testing cohort, the median (IQR) AUC of theresponse assessment model was 0.76 (0.71-0.84) for nnU-Net-derived tumor volumes and0.77 (0.74-0.86) for FTV. There was no significant difference in AUC between the twomodels ( p= 0.66). Per hospital performance varied, with the worst performance associatedwith the hospital from the training set (Hospital 1) (Table 1).We conclude that nnU-Net can accurately estimate changes in tumor load on DCE-MRIand that these changes are associated with RCB after NAC. The response assessment is onpar with that derived using FTV, a previously validated method, but it is fully automatedand therefore observer independent. The performance of the model appears to be robustto variations in scan parameters across multiple institutions, proving the versatility of themethod.AcknowledgmentsThe authors would like to thank R. Offenberg for her help in preparing and analyzingthe data. This research was funded by the European Union Horizon 2020 research andinnovation program under grant agreement no. 755333 (LIMA).3ReferencesTanja Alderliesten, Angelique Schlief, Johannes Peterse, Claudette Loo, Hendrik Teertstra,Sara Muller, and Kenneth Gilhuijs. Validation of semiautomatic measurement of the ex-tent of breast tumors using contrast-enhanced magnetic resonance imaging. InvestigativeRadiology , 42:42–49, 2007. ISSN 00209996.Woo Jung Choi, Hak Hee Kim, Joo Hee Cha, Hee Jung Shin, and Eun Young Chae. Compar-ison of pathologic response evaluation systems after neoadjuvant chemotherapy in breastcancers: Correlation with computer-aided diagnosis of mri features. American Journalof Roentgenology , 213:944–952, 2019. ISSN 15463141.Maria Colomba Comes, Annarita Fanizzi, Samantha Bove, Vittorio Didonna, Sergio Dio-taiuti, Daniele La Forgia, Agnese Latorre, Eugenio Martinelli, Arianna Mencattini, An-nalisa Nardone, Angelo Virgilio Paradiso, Cosmo Maurizio Ressa, Pasquale Tamborra,Vito Lorusso, and Raffaella Massafra. Early prediction of neoadjuvant chemotherapy re-sponse by exploiting a transfer learning approach on breast dce-mris. Scientific Reports ,11:1–12, 2021. ISSN 20452322.Pierre Geurts, Damien Ernst, and Louis Wehenkel. Extremely randomized trees. MachineLearning , 63:3–42, 4 2006. ISSN 0885-6125.Nola M. Hylton, Constantine A. Gatsonis, Mark A. Rosen, Constance D. Lehman, David C.Newitt, Savannah C. Partridge, Wanda K. Bernreuter, Etta D. Pisano, Elizabeth A.Morris, Paul T. Weatherall, Sandra M. Polin, Gillian M. Newstead, Helga S. Marques,Laura J. Esserman, and Mitchell D. Schnall. Neoadjuvant chemotherapy for breast cancer:Functional tumor volume by mr imaging predicts recurrencefree survival-results from theacrin 6657/calgb 150007 i-spy 1 trial. Radiology , 279:44–55, 12 2016. ISSN 15271315.Fabian Isensee, Paul F. Jaeger, Simon A.A. Kohl, Jens Petersen, and Klaus H. Maier-Hein.nnu-net: a self-configuring method for deep learning-based biomedical image segmenta-tion. Nature Methods , 18:203–211, 2021. ISSN 15487105.Markus H. A. Janse, Liselore M. Janssen, Bas H. M. 
van der Velden, Maaike R. Moman, Elian J. M. Wolters-van der Ben, Marc C. J. M. Kock, Max A. Viergever, Paul J. van Diest, and Kenneth G. A. Gilhuijs. Deep learning-based segmentation of locally advanced breast cancer on MRI in relation to residual cancer burden: A multi-institutional cohort study. Journal of Magnetic Resonance Imaging, 2023. Online ahead of print.
L. M. Janssen, B. M. den Dekker, K. G. A. Gilhuijs, P. J. van Diest, E. van der Wall, and S. G. Elias. MRI to assess response after neoadjuvant chemotherapy in breast cancer subtypes: a systematic review and meta-analysis. npj Breast Cancer, 8:107, 9 2022. ISSN 2374-4677.
Sunghoon Joo, Eun Sook Ko, Soonhwan Kwon, Eunjoo Jeon, Hyungsik Jung, Ji-Yeon Kim, Myung Jin Chung, and Young-Hyuck Im. Multimodal deep learning models for the prediction of pathologic response to neoadjuvant chemotherapy in breast cancer. Scientific Reports, 11:18800, 2021. ISSN 2045-2322.
David C. Newitt, Sheye O. Aliu, Neil Witcomb, Gal Sela, John Kornak, Laura Esserman, and Nola M. Hylton. Translational Oncology, 7:94–100, 2014. ISSN 19365233.
W. Fraser Symmans, Florentia Peintinger, Christos Hatzis, Radhika Rajan, Henry Kuerer, Vicente Valero, Lina Assad, Anna Poniecka, Bryan Hennessy, Marjorie Green, Aman U. Buzdar, S. Eva Singletary, Gabriel N. Hortobagyi, and Lajos Pusztai. Measurement of residual breast cancer burden to predict survival after neoadjuvant chemotherapy. Journal of Clinical Oncology, 25:4414–4422, 2007. ISSN 0732183X.
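As an illustration of the response-assessment model in the paper above (extremely randomized trees on follow-up volume, tumour subtype, and volume change, evaluated with nested cross-validated AUC), here is a small sketch using scikit-learn's ExtraTreesClassifier. It is not the authors' exact setup: the hyperparameter grid is illustrative and the feature values are synthetic stand-ins.

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 105
X = np.column_stack([
    rng.lognormal(2.0, 1.0, n),            # follow-up tumour volume (nnU-Net- or FTV-derived)
    rng.integers(0, 3, n),                 # subtype: 0=HER2+, 1=HER2-/ER+, 2=triple negative
    rng.normal(-5.0, 4.0, n),              # volume change, follow-up minus baseline
])
y = rng.integers(0, 2, n)                  # 0 = RCB-0/I (good response), 1 = RCB-II/III

pre = ColumnTransformer(
    [("subtype", OneHotEncoder(handle_unknown="ignore"), [1])], remainder="passthrough"
)
model = Pipeline([("pre", pre), ("trees", ExtraTreesClassifier(random_state=0))])

# Inner loop tunes hyperparameters, outer loop estimates AUC (nested cross-validation).
inner = GridSearchCV(
    model,
    {"trees__n_estimators": [100, 300], "trees__max_depth": [3, None]},
    cv=StratifiedKFold(5), scoring="roc_auc",
)
auc = cross_val_score(inner, X, y, cv=StratifiedKFold(5), scoring="roc_auc")
print(f"nested-CV AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```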
FCYGwhzF7E | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023 submissionTSNet: Integrating Dental Position Prior and Symptoms forTooth Segmentation from CBCT ImagesLinjie Tong∗1linjie.19@intl.zju.edu.cnJiaxiang Liu∗1jiaxiang.21@intl.zju.edu.cnYang Feng2yang0478@e.ntu.edu.sgTianxiang Hu1tianxianghu@intl.zju.edu.cnZuozhu Liu†1zuozhuliu@intl.zju.edu.cn1Zhejiang University2Angelalign Inc.Editors: Accepted for publication at MIDL 2023AbstractAutomated dental diagnosis requires accurate segmentation of tooth from cone-beam com-puted tomography (CBCT) images. However, existing segmentation methods often over-look incorporating prior information and symptoms of teeth, which can cause unsatis-factory segmentation performance on teeth with symptoms. To this respect, we proposeTooth Symptom Network (TSNet), consisting of Dental Prior Guiding Data Augmentation(DPGDA) and Dental Symptom Shape Loss (DSSL), to improve segmentation performancefor teeth with different clinical symptoms. Experiments show that TSNet outperforms allstate-of-the-art methods across datasets with all kinds of symptoms with an average in-crease of 1.13% in Dice and 2.00% in IoU.Keywords: Tooth symptoms, CBCT, Dental prior, Symptom shape loss1. IntroductionDigital cone-beam computed tomography (CBCT) reconstruction has been shown to im-prove the effectiveness of dental treatment planning and management (Hao et al., 2022). Ahigh-quality automated CBCT model reconstruction requires an accurate tooth segmenta-tion from CBCT images (Weiss and Read-Fuller, 2019). Prior works are general approachesthat were not specifically developed for tooth segmentation, and as a result may not attainperfect accuracy as well as robustness on tooth CBCT images (Ronneberger et al., 2015;Zhou et al., 2018; Valanarasu et al., 2021; Wang et al., 2022; Jain et al., 2021).Figure 1(a) shows one normal tooth CBCT image and CBCT images with five symptoms(Decurcio et al., 2012; Fontenele et al., 2022; Liu et al., 2007; Hofmann et al., 2013; Kuoet al., 2016). Root canal therapy (RCT) leads to high-density filling imagery in the area ofthe Root canal, while filings lead to high density filling imagery and metal artifacts. Theseverity of metal artifacts differs between composite-metal and composite-resin. Multiplyteeth usually appear in the middle of upper jaws, and their dental crowns are in the shapeof small cones, while their roots are smaller than those of normal teeth. Permanent tooth∗Contributed equally†Corresponding author©2023 CC-BY 4.0, L. Tong, J. Liu, Y. Feng, T. Hu & Z. Liu.Tong Liu Feng Hu Liugerm lacks normal dental parts, such as dental crowns. Prosthesis leads to high-densityimagery and artifacts, which can be more severe if the material is metal. Since the numberof images with these symptoms is significantly smaller than that of normal images, modelsdo not have the chance to learn symptoms thoroughly, these degradations are likely toresult in shape distortion of the tooth segmentation result as well as the misclassificationof high-density filling imagery or artifacts as tooth when encountering certain symptoms.Recognizing that the region surrounding the dental arch curve is not only the mostinformative area for tooth segmentation but also the region where various degradationsoccur, it is logical to emphasize the model’s attention on this region of interest. In this paper,we propose a novel method named Tooth Symptom Network (TSNet), which consists of twodesigns. 
Firstly, Dental Prior Guiding Data Augmentation (DPGDA) incorporates toothlocation information to prioritize the neighborhood of the dental arch curve, guiding themodel’s attention towards this crucial area. Secondly, Dental Symptom Shape Loss (DSSL)aims to minimize the disparity between the predicted tooth boundary and the ground truth,enabling the model to make more informed decisions when dealing with degraded images. Byintegrating two designs, semantic priors are embedded into the transformer layer, assistingin the extraction of semantic map (Jain et al., 2021), which can improve the performanceof segmentation. Experimental results demonstrate the superior performance of TSNet intooth segmentation on both normal images and images presenting diverse symptoms.2. MethodWe propose DPGDA which leverages the distribution of tooth positions to guide the sam-pling of CBCT images for data augmentation, as is illustrated in Figure 1(c). Figure 1(b)exhibits the framework of TSNet. To integrate prior information about the distribution ofteeth, we initially extract position information from the dataset to generate a Dental Posi-tion Map, by dividing all CBCT images into 128 ×128 patches and recording the number ofimages that have teeth in each patch. Then, instead of using individual pixels, we employpatches (eg., 16 ×16 patches) to describe the sampling size. For each position, we calculatethe sum of the corresponding patches in the Dental Position Map, resulting in the DentalPosition Guiding Map. The map is then normalized to obtain a probability distributionthat guides the sampling of CBCT images during data augmentation.Besides, we propose DSSL to constrain the shape of tooth segmentation results, partic-ularly in images presenting symptoms. Firstly, Ipis obtained by softargmax (Chapelle andWu, 2010) on the probability map that is outputted by the decoder. Then, the shape of thetooth is extracted in the ground truth Igand the prediction Ip. DSSL is defined as follows:lossDSSL =(shape (Ig)−shape (Ip))2N, shape (I) = 255 ·(Ix)2+ (Iy)2max{(Ix)2+ (Iy)2},(1)where Ndenotes the number of pixels in the image, IxandIydenote the Sobel operatorresults in the x-direction and y-direction, respectively (Pratt, 2007).3. ExperimentWe construct a CBCT tooth image dataset comprising a training set and six test sets. Thedataset consists of 160 patient samples with diverse symptoms, collected from hospitals in2Short TitleFigure 1: (a) includes typical six symptoms of tooth CBCT. (b) is the pipeline of TSNet.(c) is the process of DPGDA.Table 1: Comparison of TSNet with state-of-the-art methods.Normal RCT Fillings Multiply Teeth Permanent Tooth Germ ProsthesisIoU Dice IoU Dice IoU Dice IoU Dice IoU Dice IoU DiceUNet 84.04 91.33 80.19 89.01 84.28 91.47 83.87 91.23 85.95 92.45 76.02 86.38UNet++ 84.25 91.45 77.5 87.32 83.75 91.16 83.24 90.85 85.42 92.14 69.79 82.21UCTransNet 85.89 92.41 84.02 91.32 86.17 92.57 86.84 92.96 87.83 93.52 83.25 90.86MedT 79.31 88.46 60.92 75.72 77.05 87.04 73.68 84.85 79.67 88.69 42.88 60.02SemaskT 84.31 91.49 86.10 92.53 86.40 92.70 87.44 93.30 87.63 93.41 85.90 92.42TSNet 86.92 +1 .0393.00 +0 .5987.81 +1 .7193.51 +0 .9889.22 +2 .8294.30 +1 .6089.05 +1 .6194.21 +0 .9190.22 +2 .3994.86 +1 .3488.31 +2 .4193.79 +1 .37China during 2018-2021. The training set consists of 1906 images from 100 normal indi-viduals. 
The six test sets encompasses 196 images from 10 normal patients, 198 imagesfrom 10 RCT patients, 189 images from 10 fillings patients, 184 images from 10 multiplyteeth patients, 97 images from 10 permanent tooth germ patients, and 203 images from10prosthesis patients. To evaluate the effectiveness of TSNet, we compare it with fiveother methods: U-Net (Ronneberger et al., 2015), U-Net++ (Zhou et al., 2018), MedT(Valanarasu et al., 2021), UCTransnet (Wang et al., 2022), and Semask T (Jain et al.,2021). To ensure fairness, all baseline methods were implemented using the original sourcecode, with default hyperparameters and evaluation metrics were computed using MMseg-mentation (Contributors, 2020). Experimental results, as shown in Table 1, demonstratethat TSNet outperforms the other methods across all datasets.4. ConclusionIn this work, we design two modules, DPGDA and DSSL, for tooth segmentation tasks, andpropose an integrated tooth segmentation method called TSNet. We evaluate TSNet’s per-formance by comparing it with five advanced image segmentation methods on six datasetscontaining normal tooth CBCT images and images with different symptoms, TSNet demon-strates superior segmentation performance. Moving forward, we anticipate the applicationof TSNet in clinical settings to aid in the diagnosis and treatment for dental diseases.AcknowledgmentsThis work is supported by the National Natural Science Foundation of China (GrantNo. 62106222), the Natural Science Foundation of Zhejiang Province, China (Grant No.LZ23F020008), the Scientific Research Fund of Zhejiang University (XY2022025), and theZhejiang University-Angelalign Inc. R&D Center for Intelligent Healthcare.3Tong Liu Feng Hu LiuReferencesOlivier Chapelle and Mingrui Wu. Gradient descent optimization of smoothed informationretrieval metrics. Information retrieval , 13:216–235, 2010.MMSegmentation Contributors. MMSegmentation: Openmmlab semantic segmentationtoolbox and benchmark. 2020.Daniel Almeida Decurcio, Mike Reis Bueno, Ana Helena Gon ̧ calves de Alencar, OlavoC ́ esar Lyra Porto, Bruno Correa Azevedo, and Carlos Estrela. Effect of root canal fillingmaterials on dimensions of cone-beam computed tomography images. Journal of AppliedOral Science , 20:260–267, 2012.Rocharles Cavalcante Fontenele, Maur ́ ıcio do Nascimento Gerhardt, J ́ ader Camilo Pinto,Adriaan Van Gerven, Holger Willems, Reinhilde Jacobs, and Deborah Queiroz Freitas. In-fluence of dental fillings and tooth type on the performance of a novel artificial intelligence-driven tool for automatic tooth segmentation on cbct images–a validation study. Journalof dentistry , 119:104069, 2022.Jin Hao, Jiaxiang Liu, Jin Li, Wei Pan, Ruizhe Chen, Huimin Xiong, Kaiwei Sun,Hangzheng Lin, Wanlu Liu, Wanghui Ding, et al. Ai-enabled automatic multimodalfusion of cone-beam ct and intraoral scans for intelligent 3d tooth-bone reconstructionand clinical applications. arXiv preprint arXiv:2203.05784 , 2022.Elisabeth Hofmann, J ̈ urgen Medelnik, Martin Fink, Michael Lell, and Ursula Hirschfelder.Three-dimensional volume tomographic study of the imaging accuracy of impacted teeth:Msct and cbct comparison—an in vitro study. The European Journal of Orthodontics , 35(3):286–294, 2013.Jitesh Jain, Anukriti Singh, Nikita Orlov, Zilong Huang, Jiachen Li, Steven Walton, andHumphrey Shi. Semask: Semantically masked transformers for semantic segmentation.arXiv preprint arXiv:2112.12782 , 2021.Rong-Fu Kuo, Kwang-Ming Fang, Wong Ty, and Chia Yu Hu. 
Quantification of dental prostheses on cone-beam CT images by the Taguchi method. Journal of Applied Clinical Medical Physics, 17(1):207–220, 2016.
Deng-gao Liu, Wan-lin Zhang, Zu-yan Zhang, Yun-tang Wu, and Xu-chen Ma. Three-dimensional evaluations of supernumerary teeth using cone-beam computed tomography for 487 cases. Oral Surgery, Oral Medicine, Oral Pathology, Oral Radiology, and Endodontology, 103(3):403–411, 2007.
William K Pratt. Digital Image Processing: PIKS Scientific Inside, volume 4. Wiley Online Library, 2007.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention 2015, Proceedings, Part III, pages 234–241. Springer, 2015.
Jeya Maria Jose Valanarasu, Poojan Oza, Ilker Hacihaliloglu, and Vishal M Patel. Medical Transformer: Gated axial-attention for medical image segmentation. In Medical Image Computing and Computer Assisted Intervention 2021, Proceedings, Part I, pages 36–46. Springer, 2021.
Haonan Wang, Peng Cao, Jiaqi Wang, and Osmar R Zaiane. UCTransNet: Rethinking the skip connections in U-Net from a channel-wise perspective with transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2441–2449, 2022.
Robert Weiss and Andrew Read-Fuller. Cone beam computed tomography in oral and maxillofacial surgery: an evidence-based review. Dentistry Journal, 7(2):52, 2019.
Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. UNet++: A nested U-Net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 3–11. Springer, 2018.
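A minimal rendering of the Dental Symptom Shape Loss (DSSL) in Eq. (1) of the paper above could look as follows. The two-class probability map, the soft-argmax formulation, and the per-image normalisation details are assumptions; the Sobel-based shape term follows the equation as printed, not the authors' code.

```python
import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)


def shape_map(img):
    """shape(I) = 255 * (Ix^2 + Iy^2) / max(Ix^2 + Iy^2), with Sobel gradients Ix, Iy."""
    gx = F.conv2d(img, SOBEL_X, padding=1)
    gy = F.conv2d(img, SOBEL_Y, padding=1)
    mag = gx ** 2 + gy ** 2
    return 255.0 * mag / (mag.amax(dim=(2, 3), keepdim=True) + 1e-8)


def dssl_loss(prob_map, gt_mask):
    """prob_map: (B, 2, H, W) decoder softmax output; gt_mask: (B, 1, H, W) in {0, 1}."""
    # Soft-argmax (expected class index) keeps the predicted mask differentiable.
    classes = torch.arange(prob_map.size(1), dtype=prob_map.dtype).view(1, -1, 1, 1)
    pred = (prob_map * classes).sum(dim=1, keepdim=True)      # (B, 1, H, W)
    diff = shape_map(gt_mask.float()) - shape_map(pred)
    return (diff ** 2).mean()                                  # mean over the N pixels


# Usage: added to the usual segmentation loss during training.
probs = torch.softmax(torch.randn(2, 2, 128, 128), dim=1)
gt = (torch.rand(2, 1, 128, 128) > 0.5).float()
print(dssl_loss(probs, gt))
```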
JExQEfV5um | Medical Imaging with Deep Learning – nnn 2023 Short Paper – MIDL 2023 submissionInter-Scale Dependency Modeling for Skin LesionSegmentation with Transformer-based NetworksSania Eskandari∗1ses235@uky.eduJanet Lumpp1jklumpp@uky.edu1Department of Electrical and Computer Engineering, University of Kentucky, Lexington, USAEditors: Accepted for publication at MIDL 2023AbstractMelanoma is a dangerous form of skin cancer caused by the abnormal growth of skincells. Fully Convolutional Network (FCN) approaches, including the U-Net architecture,can automatically segment skin lesions to aid diagnosis. The symmetrical U-Net modelhas shown outstanding results, but its use of a convolutional operation limits its ability tocapture long-range dependencies, which are essential for accurate medical image segmen-tation. In addition, the U-shaped structure suffers from the semantic gaps between theencoder and decoder. In this study, we developed and evaluated a U-shaped hierarchicalTransformer-based structure for skin lesion segmentation while we proposed an Inter-scaleContext Fusion (ISCF) to utilize the attention correlations in each stage of the encoder toadaptively combine the contexts coming from each stage to hinder the semantic gaps. Thepreliminary results of the skin lesion segmentation benchmark endorse the applicabilityand efficacy of the ISCF module.Keywords: Deep learning, Transformer, Skin lesion segmentation, Inter-scale contextfusion1. IntroductionAutomatic segmentation of organs is an essential cue for developing the pre and post-diagnosis process with computer-aided diagnosis (CAD), while manual delineation is a te-dious and laborious task. Skin cancer is a dangerous and often deadly disease. The skincomprises three layers: the epidermis, dermis, and hypodermis. When exposed to ultra-violet radiation from the sun, the epidermis produces melanin, which can be produced atan abnormal rate if too many melanocytes are present. Malignant melanoma is a deadlyform of skin cancer caused by the abnormal growth of melanocytes in the epidermis, witha mortality rate of 1.62%. In 2022, it was estimated that there would be 99,780 new casesof melanoma with a mortality rate of 7.66% (Siegel et al., 2022). The survival rate dropsfrom 99% to 25% when melanoma is diagnosed at an advanced stage due to its aggressivenature. Therefore, early diagnosis is crucial in reducing the number of deaths from this dis-ease. Utilizing U-Net, with a hierarchical encoder-decoder design in semantic segmentationtasks, is a common choice. The U-shaped structure leverages some advantages, making itan ideal choice for skin lesion segmentation tasks. However, the conventional design suffersfrom limited receptive fields due to the convolution operations’ presence in the framework.Therefore, Vision Transformer (ViT), as a drift from the natural language processing’s emi-nent counterpart, Transformers, adapted to the wide range of vision architectures to capture∗Corresponding author©2023 S. Eskandari & J. Lumpp.Eskandari Lumpplong-range dependencies. While the computational complexity of ViTs is proportional tothe number of patches and quadratic, utilizing the standalone ViT as a main backbone fordense prediction tasks, e.g., segmentation, is problematic. On the other hand, due to theneed to design a hierarchical pipeline for segmentation tasks, ViTs, in their conventionalaspect, is not desirable. 
Thus, various studies explored minimizing this computational bur-den to make ViTs ready to participate in segmentation tasks by delving into the innerstructure of the Transformer’s multi-head self-attention (MHSA) calculation or changingthe tokenization process such as the Efficient Transformer (Xie et al., 2021), and the SwinTransformer (Liu et al., 2021). Moreover, an analytic demonstration of MHSA by (WangEfficientTransf ormer Patch MergingOverlapPatch EmbeddingEfficientTransf ormer Patch MergingEfficientTransf ormer EfficientTransf ormer Patch ExpandingEfficientTransf ormer Patch ExpandingSegmentation HeadLinear LinearLinear LinearInter -ScaleContext Fusion(ISCF)GPGPAttention Cor relationFFNConcatConv 3x1x1GPSqueez eExcitation( a)( c)( b)Input Ima ge Ground T ruth PredictionFigure 1: ( a) The overall end-to-end proposed pipeline for skin lesion segmentation withEfficient Transformer (Shen et al., 2021) in a U-shaped structure. ( b) Inter-Scale Context Fusion Module. ( c) Qualitative results on ISIC 2018 dataset.Green represents the ground truth contour and blue denotes the prediction maskcontour.et al., 2022) revealed that the Transformers perform as a low-pass filter due to the Softmaxnon-linear operation. This deficiency further degrades the Transformer’s applicability fordense semantic segmentation tasks, while the conventional U-shaped methods already needto improve from the semantic gaps between the encoder and decoder. Thus, we alleviate thelosing high-frequency input counterparts in stacked Transformer-based structures besideshindering the semantic gap between the encoder and decoder in an adaptive technique.Contrary to the Swin Transformer (Liu et al., 2021), we utilize the Efficient Transformerblock to hinder the loss of contextual information in the windowing strategy for MHSA.One significant drawback is MHSA can extract the limited context within windows. Ina window-based Transformer, the input sequence is divided into fixed-size windows, andself-attention is only applied within each window. This means long-range dependenciesbetween elements in different windows may not be effectively captured. This can limitthe ability of window-based Transformers to model complex patterns and relationships insequential data. In this paper, we used the Efficient Transformer (Shen et al., 2021) as amain block in our U-shaped Transformer-based pipeline as in Figure 1(a) to compensate forthe Swin Transformer’s mentioned deficiency. U-shaped structures suffer from the semanticgap between the encoder and decoder. To this end, we were inspired by the squeeze andexcitation paradigm and proposed a Inter-ScaleContext Fusion ( ISCF ) to alleviate thementioned semantic gap (see Figure 1(b)).2Inter-scale Context Fusion2. MethodOur proposed U-shaped structure is defined as three stages multi-scale manner coupledwith an ISCF module. Due to the hierarchical design of the structure, the attention maps’shape at each level differs from the next one. Therefore, we used a Linear layer in the firsttwo stages two make the attention map sizes as same as in the last stage. This operationis done at the output of the ISCF module to remap the attention maps to their originalsizes. In the ISCF module, we utilize the Global Pooling (GP) operation to produce asingle value for each stage’s attention correlation and concatenate them, followed by a FeedForward Network (FFN) to amalgamate the contribution of each global value with eachother as a scaling factor. 
Then each attention map applies the Hadamard production withthe corresponding scaling value and concatenates the resultant attention maps. Finally,to adaptively amalgamate these global contexts with each other to lessen the mentionedsemantic gaps, a 3 ×1×1 is used. We use the publicly available ISIC 2018 skin lesionbenchmark dataset that contains 2,594 images for the evaluation process. We resized eachsample to 224 ×224 pixels from 576 ×767 and used 1,815 samples for training, 259 samplesfor validation, and 520 samples for testing. Our proposed method is implemented end-to-endusing the PyTorch library and is trained on a single Nvidia RTX 3090 GPU. The trainingis done with a batch size of 24 and an Adam optimizer with a learning rate of 1e-4, whichwas carried out for 100 epochs, and for the loss function, we used binary cross-entropy.3. ResultsIn Table 1, the quantitative results for our proposed method are displayed. We reportedthe performance of the model on the Dice score (DSC), sensitivity (SE), specificity (SP),and accuracy (ACC). The preliminary results show that our design can outperform SOTAmethods without pre-training weights and having fewer parameters. In addition, Figure 1(c)represents the qualitative results that the network performs well with respect to the groundtruth results and preserves the high-frequency details such as boundary information.Table 1: Performance comparison on the ISIC 2018 skin lesion segmentation dataset.Methods # Params(M) DSC SE SP ACCU-Net (Ronneberger et al., 2015) 14.8 0.8545 0.8800 0.9697 0.9404Att U-Net (Oktay et al., 2018) 34.88 0.8566 0.8674 0.9863 0.9376TransUNet (Chen et al., 2021) 105.28 0.8499 0.8578 0.9653 0.9452FAT-Net (Wu et al., 2022) 28.75 0.8903 0.9100 0.9699 0.9578Swin U-Net (Cao et al., 2023) 82.3 0.8946 0.9056 0.9798 0.9645Efficient Transformer (without ISCF) 22.31 0.8817 0.8534 0.9698 0.9519Efficient Transformer (with ISCF) 23.43 0.9136 0.9284 0.9723 0.96304. ConclusionThe semantic gap between the encoder and decoder in a U-shaped Transformer-based net-work can be mitigated by carefully recalibrating the already calculated attention maps fromeach stage. In this study, not only do we address the hierarchical semantic gap drawback,but also we compensate for the deep Transformers’ high-frequency losses by utilizing theearlier Transformer’s attention map by the ISCF. ISCF module is a plug-and-play andcomputation-friendly module that can effectively be applied to any Transformer-based ar-chitecture.3Eskandari LumppReferencesHu Cao, Yueyue Wang, Joy Chen, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian, and Man-ning Wang. Swin-unet: Unet-like pure transformer for medical image segmentation. InComputer Vision–ECCV 2022 Workshops: Tel Aviv, Israel, October 23–27, 2022, Pro-ceedings, Part III , pages 205–218. Springer, 2023.Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan LYuille, and Yuyin Zhou. Transunet: Transformers make strong encoders for medical imagesegmentation. arXiv preprint arXiv:2102.04306 , 2021.Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, andBaining Guo. Swin transformer: Hierarchical vision transformer using shifted windows.InProceedings of the IEEE/CVF international conference on computer vision , pages10012–10022, 2021.Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, KazunariMisawa, Kensaku Mori, Steven McDonagh, Nils Y Hammerla, Bernhard Kainz, et al. At-tention u-net: Learning where to look for the pancreas. 
arXiv preprint arXiv:1804.03999, 2018.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
Zhuoran Shen, Mingyuan Zhang, Haiyu Zhao, Shuai Yi, and Hongsheng Li. Efficient attention: Attention with linear complexities. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3531–3539, 2021.
Rebecca L Siegel, Kimberly D Miller, and Ahmedin Jemal. Cancer statistics, 2022. CA: A Cancer Journal for Clinicians, 72(1):7–33, 2022.
Peihao Wang, Wenqing Zheng, Tianlong Chen, and Zhangyang Wang. Anti-oversmoothing in deep vision transformers via the Fourier domain analysis: From theory to practice. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=O476oWmiNNp.
Huisi Wu, Shihuai Chen, Guilian Chen, Wei Wang, Baiying Lei, and Zhenkun Wen. FAT-Net: Feature adaptive transformers for automated skin lesion segmentation. Medical Image Analysis, 76:102327, 2022.
Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. SegFormer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34:12077–12090, 2021.
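The Inter-Scale Context Fusion (ISCF) module of the paper above is described only in prose, so the following sketch is necessarily speculative: the flattened attention-map sizes, the FFN width, and the 1x1 convolution standing in for the 3x1x1 fusion convolution are all assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ISCF(nn.Module):
    def __init__(self, stage_sizes=(3136, 784, 196)):     # assumed flattened attention sizes per stage
        super().__init__()
        target, n_stages = stage_sizes[-1], len(stage_sizes)
        # Linear layers map earlier-stage attention maps to the last stage's size (and back).
        self.to_common = nn.ModuleList([nn.Linear(s, target) for s in stage_sizes])
        self.to_original = nn.ModuleList([nn.Linear(target, s) for s in stage_sizes])
        self.ffn = nn.Sequential(                          # mixes the globally pooled values into scales
            nn.Linear(n_stages, n_stages), nn.ReLU(),
            nn.Linear(n_stages, n_stages), nn.Sigmoid(),
        )
        self.fuse = nn.Conv1d(n_stages, n_stages, kernel_size=1)  # stands in for the 3x1x1 conv

    def forward(self, attn_maps):                          # list of (B, N_i) flattened attention correlations
        common = [proj(a) for proj, a in zip(self.to_common, attn_maps)]    # (B, target) each
        squeezed = torch.stack([c.mean(dim=-1) for c in common], dim=-1)    # global pooling, (B, S)
        scales = self.ffn(squeezed)                                         # per-stage scaling factors
        scaled = torch.stack(
            [c * scales[:, i:i + 1] for i, c in enumerate(common)], dim=1   # Hadamard product
        )                                                                   # (B, S, target)
        fused = self.fuse(scaled)                                           # amalgamate the stages
        return [proj(fused[:, i]) for i, proj in enumerate(self.to_original)]


# Usage with three dummy per-stage attention maps (56x56, 28x28, 14x14 tokens assumed).
iscf = ISCF()
maps = [torch.randn(2, 3136), torch.randn(2, 784), torch.randn(2, 196)]
out = iscf(maps)   # fused contexts remapped back to each stage's original size
```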
mad9Y_7khs | Medical Imaging with Deep Learning 2023 Short Paper TrackTowards Realistic Ultrasound Fetal Brain Imaging SynthesisMichelle Iskandar∗2Harvey Mannering∗1Zhanxiang Sun∗1Jacqueline Matthew2Hamideh Kerdegari2Laura Peralta2Miguel Xochicale∗1m.xochicale@kcl.ac.ul1University College London2King’s College LondonAbstractPrenatal ultrasound imaging is the first-choice modality to assess fetal health. Medicalimage datasets for AI and ML methods must be diverse (i.e. diagnoses, diseases, patholo-gies, scanners, demographics, etc), however there are few public ultrasound fetal imagingdatasets due to insufficient amounts of clinical data, patient privacy, rare occurrence ofabnormalities in general practice, and limited experts for data collection and validation.To address such data scarcity, we proposed generative adversarial networks (GAN)-basedmodels, diffusion-super-resolution-GAN and transformer-based-GAN, to synthesise imagesof fetal ultrasound brain planes from one public dataset. We reported that GAN-basedmethods can generate 256x256 pixel size of fetal ultrasound trans-cerebellum brain im-age plane with stable training losses, resulting in lower Fr ́ echet inception distance (FID)values for diffusion-super-resolution-GAN (average 7.04 and lower FID 5.09 at epoch 10)than the FID values of transformer-based-GAN (average 36.02 and lower 28.93 at epoch60). The results of this work illustrate the potential of GAN-based methods to synthe-sise realistic high-resolution ultrasound images, leading to future work with other fetalbrain planes, anatomies, devices and the need of a pool of experts to evaluate synthe-sised images. Code, data and other resources to reproduce this work are available athttps://github.com/budai4medtech/midl2023 .Keywords: Medical Image Synthesis, Ultrasound Fetal Imaging, GANs1. IntroductionPrenatal imaging is performed to assess various aspects of pregnancy, including confirmationof the pregnancy, screening for developmental defects, and investigation of pregnancy com-plications (Kline–Fath and Bitters, 2007). In the last decade, the fields of machine learning(ML) and artificial intelligence (AI) have been successful to model intelligent behaviorswith minimal human interference (Hamet and Tremblay, 2017). Particularly, automaticclassification of fetal ultrasound planes and fetal head biometric measurement (Burgos-Artizzu et al., 2020b; Sin, 2018; Fiorentino et al., 2022). Despite such advances, thereare few challenges faced in prenatal imaging: (a) the accuracy of recorded measurementswhich can be caused by differences in intra-view variability of imaging equipment and inter-observer variability of sonographer skills (England, 2015; Sarris et al., 2012; Villar et al.,1989; Kesmodel, 2018), (b) availability of expert clinicians or trained technicians to select,to classify and to validate regions of interest (Burgos-Artizzu et al., 2020a), (c) the insuffi-cient and limited amount of clinical data (Jang et al., 2018; Sin, 2018; He et al., 2021), (d)data accessibility due to patient privacy or protection of personal health information (Shinet al., 2018), and (e) the cost of acquisition of clinical data as it requires expensive imagingequipment and experts for data collection and validation (Wang et al., 2019; Kim et al.,2019). Given the advances with generative adversarial networks (GAN) methods to handle∗Contributed equally©2023 CC-BY 4.0, M. Iskandar, H. Mannering, Z. Sun, J.M. , H.K. , L. Peralta & M. 
Xochicale.Iskandar Mannering Sun Peralta Xochicaleproblems in medical reconstructions, image resolution, enhancement, segmentation, lesiondetection, data simulation or classification (AlAmir and AlGhamdi, 2022), we hypothesizethat realistic ultrasound imaging synthesis can address challenges in data scarcity, accessi-bility and expensiveness. For instance, Eli et al. (2017) proposed a method of generatingfreehand ultrasound image simulation using a spatially conditioned GAN. Kazeminia et al.(2020) presented a review of the state-of-the-art research in GAN in medical imaging forclassification, denoising, reconstruction, synthesis, registration, and detection. Monteroet al. (2021) proposed a method to generate fetal brain US images using an unconditionalGAN, StyleGAN2, specifically to improve the fine-grained plane classification, specificallythe trans-thalamic and trans-ventricular plane. Hence, the aim of this work is to showthe potential of GAN-based methods to generate realistic ultrasound fetal trans-cerebellumbrain plane imaging with small datasets.2. Methods and datasets2.1. Diffusion-Super-Resolution-GAN (DSR-GAN)We use a Denoising Diffusion Probabilistic Model (DDPM) (Ho et al., 2020) due to itsrecent success in unconditional image synthesis. Computational resources were limited.Therefore to reduce computation time, we finetune a pretrained DDPM to produce 128x128pixel images. Upscaling to 256x256 using bilinear interpolation yields an FID score of8.93. To enhance this score, a superresolution model is employed. Both diffusion andGAN-based approaches were explored, but computational limitations led to the selectionof Super-Resolution-GAN (SRGAN) (Ledig et al., 2017). The DDPM and SRGAN weretrained separately. Histogram matching (Castleman, 1996) is applied after DDPM andbefore SRGAN to align the synthetic image color distribution with real images. Randomzooming, rotating, and horizontal flipping augmentations diversify the dataset.2.2. Transformer-based-GAN (TB-GAN)The Transformer-based GAN was chosen in order to reproduce longer-distance spatial re-lationships found in the original images with attention mechanism. This approach aims atgenerating more coherent images that maintain similar semantic layouts as the original ones.Meanwhile, StyleSwin implements a window attention mechanism that effectively reducesthe memory usage in training, enabling synthesize images of higher resolutions (Zhanget al., 2022). Differentiable data augmentation (DiffAug) and adaptive pseudo augmen-tation (APA) are implemented for StyleSwin because GANs are prone to model collapseand discriminator over-fitting when there are limited data. The two augmentations helpedstabilize training for GAN (Zhao et al., 2020; Jiang et al., 2021).2.3. Image Quality AssessmentQuality of synthesised images are evaluated with Fr ́ echet inception distance (FID), mea-suring the distance between distributions of synthesised and original images (Heusel et al.,2017). The lower the FID number is, the more similar the synthesised images are to theoriginal ones. FID metric showed to work well for fetal head ultrasound images comparedto other metrics (Bautista et al., 2022).2Towards Realistic Ultrasound Fetal Brain Imaging Synthesis2.4. DatasetsTrans-cerebellum brain plane ultrasound images from Voluson E6 were used for this work,consisting of 408 training images (Burgos-Artizzu et al., 2020a,b). 
Scans were collected bymultiple operators of similar skill level at BCNatal hospital during standard clinical practicebetween October 2018 and April 2019. DICOM images were collected and anonymised usingpng format, resulting in images of various pixel size (e.g., 692x480, 745x559, and 961x663).Note that such datasets only contain healthy participants.3. Experiments: Design and resultsDiffusion model was finetuned for 10000 epochs with the Adam optimiser to then trainSRGAN for 200 epochs from scratch with the Adam optimiser. The images used to trainboth models are flipped horizontally, zoomed and rotated randomly to increase the varietyof the dataset(Fig 1c). Transfer learning is used when training StyleSwin. The model wasfirstly pre-trained for 500 epochs on Trans-thalamus plane, which contains more images(1072). Then, the model was fine-tuned on Trans-cerebellum plane images for an additional200 epochs. Adam optimizer was also used during both pre-training and fine-tuning stages,following the two time-scale update rule with learning rates of 2e-4 for the discriminatorand 5e-5 for the generator (van den Heuvel et al., 2018).Figure 1: Results from Diffusion-Super-Resolution-GAN (DSR-GAN) and transformer-based-GAN (TB-GAN) models: (a) Convergency of training losses for Generatorand Discriminator networks, (b) FID scores: DSR-GAN lower average 7.04 thanTB-GAN average 36.02, and (c) 256x256 pixel size trans-cerebellum images oftwo randomised batches (B1, B2) of real and models.4. Conclusions and future workSynthesising fetal brain images with the diffusion-Super-Resolution-GAN and transformer-based-GAN methods were successful, generating images of 256x256 pixel size resolutionwith stable loss values and resulting in lower FID values for Diffusion-Super-Resolution-GAN (average 7.04 and lower 5.09 at epoch 10) compared to FID values of Transformer-based-GAN (average 36.02 and lower 28.93 at epoch 60). The limitations of this work arein the generated 256x256 pixel size image resolutions due limited hardware access and thesynthesised images for only healthy participants. However, reported results suggest futurework with the potential to synthesise realistic higher-resolution fetal ultrasound images forother anatomies, ultrasound-devices and abnormalities, which can facilitate downstreamtasks such as classification or segmentation of fetal ultrasound images.3Iskandar Mannering Sun Peralta XochicaleReferencesHuman-level Performance On Automatic Head Biometrics In Fetal Ultrasound Using FullyConvolutional Neural Networks , 7 2018. IEEE. ISBN 978-1-5386-3646-6. doi: 10.1109/EMBC.2018.8512278.Manal AlAmir and Manal AlGhamdi. The role of generative adversarial network in medicalimage analysis: An in-depth survey. ACM Comput. Surv. , mar 2022. ISSN 0360-0300.doi: 10.1145/3527849. URL https://doi.org/10.1145/3527849 . Just Accepted.Thea Bautista, Jacqueline Matthew, Hamideh Kerdegari, Laura Peralta Pereira, and MiguelXochicale. Empirical study of quality image assessment for synthesis of fetal head ultra-sound imaging with dcgans, 2022. URL https://arxiv.org/abs/2206.01731 .Xavier P. Burgos-Artizzu, David Coronado-Guti ́ errez, Brenda Valenzuela-Alcaraz, ElisendaBonet-Carne, Elisenda Eixarch, Fatima Crispi, and Eduard Gratac ́ os. Evaluation ofdeep convolutional neural networks for automatic classification of common maternal fetalultrasound planes. Scientific Reports , 10(1):10200, Jun 2020a. ISSN 2045-2322. doi:10.1038/s41598-020-67076-5. URL https://doi.org/10.1038/s41598-020-67076-5 .Xavier P. 
Burgos-Artizzu, David Coronado-Gutierrez, Brenda Valenzuela-Alcaraz, ElisendaBonet-Carne, Elisenda Eixarch, Fatima Crispi, and Eduard Gratac ́ os. Fetal planes db:Common maternal-fetal ultrasound images. Scientific Reports , 6 2020b. doi: 10.5281/ZENODO.3904280. URL https://zenodo.org/record/3904280 .Kenneth R Castleman. Digital image processing . Prentice Hall Press, 1996.Eli, Lee Li-Lin, Xie Weidi, Barratt Dean C., Vercauteren Tom, Noble J Alison Hu Yipeng,and Gibson. Freehand ultrasound image simulation with spatially-conditioned generativeadversarial networks. Molecular Imaging, Reconstruction and Analysis of Moving BodyOrgans, and Stroke Imaging and Treatment , pages 105–115, 2017.NHS England. Fetal anomaly screening programme handbook . NHS Digital, 7 2015.Maria Chiara Fiorentino, Francesca Pia Villani, Mariachiara Di Cosmo, Emanuele Fron-toni, and Sara Moccia. A review on deep-learning algorithms for fetal ultrasound-imageanalysis, 2022. URL https://arxiv.org/abs/2201.12260 .Pavel Hamet and Johanne Tremblay. Artificial intelligence in medicine. Metabolism , 69:S36–S40, 4 2017. ISSN 00260495. doi: 10.1016/j.metabol.2017.01.011.Fujiao He, Yaqin Wang, Yun Xiu, Yixin Zhang, and Lizhu Chen. Artificial intelligence inprenatal ultrasound diagnosis. Frontiers in Medicine , 8, 2021. ISSN 2296-858X. doi:10.3389/fmed.2021.729978. URL https://www.frontiersin.org/article/10.3389/fmed.2021.729978 .Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, andSepp Hochreiter. Gans trained by a two time-scale update rule convergeto a local nash equilibrium. Advances in Neural Information Processing4Towards Realistic Ultrasound Fetal Brain Imaging SynthesisSystems , 30, 2017. URL https://proceedings.neurips.cc/paper/2017/file/8a1d694707eb0fefe65871369074926d-Paper.pdf .Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. InH. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances inNeural Information Processing Systems , volume 33, pages 6840–6851. Curran Associates,Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf .Jaeseong Jang, Yejin Park, Bukweon Kim, Sung Min Lee, Ja-Young Kwon, and Jin KeunSeo. Automatic estimation of fetal abdominal circumference from ultrasound images.IEEE Journal of Biomedical and Health Informatics , 22:1512–1520, 9 2018. ISSN 2168-2194. doi: 10.1109/JBHI.2017.2776116.Liming Jiang, Bo Dai, Wayne Wu, and Chen Change Loy. Deceive d: Adaptive pseudoaugmentation for gan training with limited data, 2021.Salome Kazeminia, Christoph Baur, Arjan Kuijper, Bram van Ginneken, Nassir Navab,Shadi Albarqouni, and Anirban Mukhopadhyay. Gans for medical image analysis. Ar-tificial Intelligence in Medicine , 109:101938, 2020. ISSN 0933-3657. doi: https://doi.org/10.1016/j.artmed.2020.101938. URL https://www.sciencedirect.com/science/article/pii/S0933365719311510 .Ulrik S. Kesmodel. Information bias in epidemiological studies with a special focus onobstetrics and gynecology. Acta Obstetricia et Gynecologica Scandinavica , 97:417–423, 42018. ISSN 00016349. doi: 10.1111/aogs.13330.Mingyu Kim, Jihye Yun, Yongwon Cho, Keewon Shin, Ryoungwoo Jang, Hyun jin Bae,and Namkug Kim. Deep learning in medical imaging. Neurospine , 16:657–668, 12 2019.ISSN 2586-6583. doi: 10.14245/ns.1938396.198.Beth Kline–Fath and Constance Bitters. Prenatal imaging. Newborn and Infant NursingReviews , 7:197–204, 12 2007. ISSN 15273369. 
doi: 10.1053/j.nainr.2007.09.002.Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejan-dro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and WenzheShi. Photo-realistic single image super-resolution using a generative adversarial network.InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition(CVPR) , July 2017.Alberto Montero, Elisenda Bonet-Carne, and Xavier Paolo Burgos-Artizzu. Generativeadversarial networks to improve fetal brain fine-grained plane classification. Sensors(Basel, Switzerland) , 21, 11 2021. ISSN 1424-8220. doi: 10.3390/s21237975.I. Sarris, C. Ioannou, P. Chamberlain, E. Ohuma, F. Roseman, L. Hoch, D. G. Altman, andA. T. Papageorghiou. Intra- and interobserver variability in fetal ultrasound measure-ments. Ultrasound in Obstetrics and Gynecology , 39:266–273, 3 2012. ISSN 09607692.doi: 10.1002/uog.10082.5Iskandar Mannering Sun Peralta XochicaleHoo-Chang Shin, Neil A Tenenholtz, Jameson K Rogers, Christopher G Schwarz, Matthew LSenjem, Jeffrey L Gunter, Katherine P Andriole, and Mark H Michalski. Medical imagesynthesis for data augmentation and anonymization using generative adversarial net-works. ArXiv , abs/1807.10225, 2018.Thomas L. A. van den Heuvel, Dagmar de Bruijn, Chris L. de Korte, and Bram van Gin-neken. Automated measurement of fetal head circumference using 2D ultrasound images.PLOS ONE , July 2018. doi: 10.5281/zenodo.1327317. URL https://doi.org/10.5281/zenodo.1327317 .J Villar, J Repke, L Markush, W Calvert, and G Rhoads. The measuring of blood pressureduring pregnancy. American journal of obstetrics and gynecology , 161:1019–24, 10 1989.ISSN 0002-9378. doi: 10.1016/0002-9378(89)90777-1.Ruoyao Wang, Zhenghan Fang, Jiaqi Gu, Yi Guo, Shicong Zhou, Yuanyuan Wang, CaiChang, and Jinhua Yu. High-resolution image reconstruction for portable ultrasoundimaging devices. EURASIP Journal on Advances in Signal Processing , 2019:56, 12 2019.ISSN 1687-6180. doi: 10.1186/s13634-019-0649-x.Bowen Zhang, Shuyang Gu, Bo Zhang, Jianmin Bao, Dong Chen, Fang Wen, Yong Wang,and Baining Guo. Styleswin: Transformer-based gan for high-resolution image gener-ation. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition (CVPR) , pages 11304–11314, June 2022.Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, and Song Han. Differentiable augmentationfor data-efficient gan training, 2020.6 |
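The FID comparison reported for DSR-GAN and TB-GAN above can be reproduced in spirit with an off-the-shelf metric implementation. Below is a minimal sketch using torchmetrics; the choice of library, the uint8 image format, and the placeholder batches are assumptions, and FID computed on small batches is noisy, so in practice the full sets of real and synthesised 256x256 images would be fed in.

```python
# Minimal sketch of an FID comparison between real and synthesised ultrasound images.
# Assumptions: torchmetrics (with the torch-fidelity extra) is installed and images are
# uint8 tensors of shape (N, 3, 256, 256); the paper does not state which implementation was used.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

def fid_score(real: torch.Tensor, fake: torch.Tensor) -> float:
    """Compute FID between uint8 image batches of shape (N, 3, H, W)."""
    fid = FrechetInceptionDistance(feature=2048)  # Inception pool3 features
    fid.update(real, real=True)
    fid.update(fake, real=False)
    return float(fid.compute())

if __name__ == "__main__":
    real_batch = torch.randint(0, 256, (32, 3, 256, 256), dtype=torch.uint8)  # placeholder data
    fake_batch = torch.randint(0, 256, (32, 3, 256, 256), dtype=torch.uint8)  # placeholder data
    print(f"FID: {fid_score(real_batch, fake_batch):.2f}")  # lower is better
```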
XfXcA9-0XxR | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionArtificial Intelligence and Radiologists at Prostate CancerDetection in MRI – The PI-CAI ChallengeAnindo Saha∗1anindya.shaha@radboudumc.nlJoeran S. Bosma∗1joeran.bosma@radboudumc.nlJasper J. Twilt∗1jasper.twilt@radboudumc.nlBram van Ginneken1,2bram.vanginneken@radboudumc.nlDerya Yakar3,4d.yakar@umcg.nlMattijs Elschot5,6mattijs.elschot@ntnu.noJeroen Veltman7j.veltman@zgt.nlJurgen F ̈ utterer1jurgen.futterer@radboudumc.nlMaarten de Rooij†1maarten.derooij@radboudumc.nlHenkjan Huisman†1,5henkjan.huisman@radboudumc.nl1Department of Medical Imaging, Radboud University Medical Center, The Netherlands2Fraunhofer Institute for Digital Medicine MEVIS, Germany3Department of Radiology, Nuclear Medicine and Molecular Imaging, University Medical CenterGroningen, The Netherlands4Department of Radiology, Netherlands Cancer Institute, The Netherlands5Department of Circulation and Medical Imaging, Norwegian University of Science and Technology,Norway6Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospi-tal, Norway7Department of Radiology, Ziekenhuis Groep Twente, The NetherlandsEditors: Under Review for MIDL 2023AbstractWe hypothesized that state-of-the-art AI models, trained using thousands of patient cases,are non-inferior to radiologists at clinically significant prostate cancer diagnosis using MRI.To test the same, we designed an international comparative study titled the PI-CAI chal-lenge, where we investigated AI models that were independently developed, trained andexternally tested using a large multi-center cohort of 10,207 patient exams. Preliminary re-sults indicate that when trained on 1,500 cases only, such models already achieve diagnosticperformance comparable to that of radiologists reported in literature.Keywords: prostate cancer, artificial intelligence, magnetic resonance imaging, radiolo-gists, computer-aided detection and diagnosis1. IntroductionClinically significant prostate cancer (csPCa) caused over 375,000 deaths worldwide in 2020(Sung et al., 2021). Magnetic resonance imaging (MRI) is playing an increasingly important∗Contributed equally†Contributed equally©2023 CC-BY 4.0, A. Saha et al.Saha Bosma Twilt Ginneken Yakar Elschot Veltman F ̈utterer Rooij Huismanrole in csPCa management, and has been recommended by recent clinical guidelines in theEuropean Union, United Kingdom and the United States (Mottet et al., 2021; NICE, 2019;Eastham et al., 2022). Artificial intelligence (AI) algorithms have matched expert cliniciansin medical image analysis across several domains, and can address the rising demand inimaging (Milea et al., 2020; Bulten et al., 2022; McKinney et al., 2020; Hricak et al., 2021).However, limited scientific evidence on the efficacy of AI-driven csPCa diagnosis impedes itswidescale adoption (van Leeuwen et al., 2021; Angus, 2020). We hypothesized that state-of-the-art AI models, trained using thousands of patient cases, are non-inferior to radiologistsat csPCa diagnosis using MRI. To test the same, we designed an international comparativestudy, titled the PI-CAI challenge ( https://pi-cai.grand-challenge.org/ ).2. Materials and MethodsThe PI-CAI study protocol was established in conjunction with 16 experts across prostateradiology, urology and AI (Saha et al., 2022). This retrospective study included 10,207prostate MRI exams (9,129 patients) curated from four European tertiary care centersbetween 2012–2021. 
All patients were men suspected of harboring prostate cancer, without ahistory of treatment or prior csPCa findings. Imaging was acquired using various commercial1.5 or 3T MRI scanners, equipped with surface coils. In the first phase of this study,algorithm developers worldwide were invited to design AI models for detecting csPCa inbiparametric MRI (bpMRI), using 1,500 training cases that were made publicly available.For a given bpMRI exam, AI models were required to complete two tasks: localize allcsPCa lesions (if any), and predict the case-level likelihood of csPCa diagnosis. To thisend, AI models could use imaging data and several variables (PSA, patient age, prostatevolume, scanner model) to inform their predictions. Once developed, these algorithms wereindependently tested using a hidden cohort of 1,000 patient cases (including external datafrom an unseen center) in a fully-blinded setting, where histopathology and a follow-upperiod of ≥3 years were used to establish the reference standard.3. Results and ConclusionBetween June–November 2022, >830 AI developers ( >50 countries) opted-in and >310 al-gorithm submissions were made. Parallel to this, 79 radiologists (55 centers, 22 countries)enlisted in a multi-reader multi-case observer study, whose primary objective was to estimateclinician’s performance at this same task. Distribution of AI developers and radiologistshas been illustrated in Figure 1. When trained on 1,500 cases, the top five most performantprostate-AI models reached 0.88 ±0.01 AUROC in case-level diagnosis, and 76.38 ±0.74%sensitivity at 0.5 false positives per case in lesion detection (as shown in Table 1), which iscomparable to that of radiologists’ performance reported in literature (Schelb et al., 2019;Hosseinzadeh et al., 2022; Roest et al., 2023). When ensembled with equal weighting, di-agnostic performance increased substantially to 0.912 AUROC, indicating notable diversityamong the top five methods. In the next phase of the challenge, these AI models will bere-trained using a private dataset of 9,107 cases, performance will be re-evaluated across1,000 testing cases, and the ensembled AI system will be benchmarked against radiologistsparticipating in the reader study and the historical reads made during routine practice.2The PI-CAI ChallengeFigure 1: Distribution of >830 AI developers ( >50 countries) and 79 radiologists (55centers, 22 countries) participating in the PI-CAI challenge, as of 10 November,2022. Radiologists’ experience varies between 1 and 23 years (median 7 years),where 72% (57) of readers can be categorized as “expert” based on the 2020ESUR/ESUI consensus statements (de Rooij et al., 2020).Table 1: Case-level diagnostic performance, as estimated by the Area Under Receiver Op-erating Characteristic (AUROC) metric, and lesion-level detection performance,as estimated by the Average Precision (AP) and the detection sensitivity at 0.5false positives per patient metrics, across 1,000 testing cases.Model AUROC AP Sens @ 0.5 FP/PatientY. Yuan et al. (Australia) 0.881 0.633 77.64%C. A. Nader et al. (France) 0.889 0.615 76.63%A. Karag ̈ oz et al. (Turkey) 0.889 0.614 75.38%X. Li, S. Vesal, S. Saunders et al. (USA) 0.871 0.612 76.13%H. Kan et al. 
(China) 0.886 0.593 76.13%Ensemble of Top Five Models (Global) 0.912 – –AcknowledgmentsThis study has been endorsed by MIDL, MICCAI, the European Society of UrogenitalRadiology, the European Association of Urology, and supported in parts by Amazon WebServices, EU H2020: ProCAncer-I and Health ∼Holland.3Saha Bosma Twilt Ginneken Yakar Elschot Veltman F ̈utterer Rooij HuismanReferencesDerek C. Angus. Randomized Clinical Trials of Artificial Intelligence. JAMA , 323(11):1043–1045, 03 2020. ISSN 0098-7484. doi: 10.1001/jama.2020.1039. URL https://doi.org/10.1001/jama.2020.1039 .Wouter Bulten, Kimmo Kartasalo, Po-Hsuan Cameron Chen, Geert Litjens, Martin Ek-lund et al., and the PANDA challenge consortium. Artificial intelligence for diagno-sis and Gleason grading of prostate cancer: the PANDA challenge. Nature Medicine ,28(1):154–163, Jan 2022. ISSN 1546-170X. doi: 10.1038/s41591-021-01620-2. URLhttps://doi.org/10.1038/s41591-021-01620-2 .Maarten de Rooij, Bas Isra ̈ el, Marcia Tummers, Hashim U. Ahmed, Jochen Walz, andJelle O. Barentsz et al. ESUR/ESUI consensus statements on multi-parametric MRIfor the detection of clinically significant prostate cancer: quality requirements for imageacquisition, interpretation and radiologists’ training. European Radiology , 30(10):5404–5416, Oct 2020. ISSN 1432-1084. doi: 10.1007/s00330-020-06929-z. URL https://doi.org/10.1007/s00330-020-06929-z .James A. Eastham, Gregory B. Auffenberg, Daniel A. Barocas, Roger Chou, TonyCrispino, John W. Davis, Scott Eggener, Eric M. Horwitz, Christopher J. Kane, ErinKirkby, Daniel W. Lin, Sean M. McBride, Alicia K. Morgans, Phillip M. Pierorazio,George Rodrigues, William W. Wong, and Stephen A. Boorjian. Clinically Local-ized Prostate Cancer: AUA/ASTRO Guideline, Part I: Introduction, Risk Assess-ment, Staging, and Risk-Based Management. Journal of Urology , 208(1):10–18, 2022.doi: 10.1097/JU.0000000000002757. URL https://www.auajournals.org/doi/abs/10.1097/JU.0000000000002757 .Matin Hosseinzadeh, Anindo Saha, Patrick Brand, Ilse Slootweg, Maarten de Rooij, andHenkjan Huisman. Deep learning–assisted prostate cancer detection on bi-parametric mri:minimum training data size requirements and effect of prior knowledge. European Ra-diology , 32(4):2224–2234, Apr 2022. ISSN 1432-1084. doi: 10.1007/s00330-021-08320-y.URL https://doi.org/10.1007/s00330-021-08320-y .Hedvig Hricak, May Abdel-Wahab, Rifat Atun, Miriam Mikhail Lette, Diana Paez,James A. Brink, Llu ́ ıs Donoso-Bach, Guy Frija, Monika Hierath, Ola Holmberg, Pek-Lan Khong, Jason S. Lewis, Geraldine McGinty, Wim J. G. Oyen, Lawrence N. Shul-man, Zachary J. Ward, and Andrew M. Scott. Medical imaging and nuclear medicine:a ¡em¿lancet oncology¡/em¿ commission. The Lancet Oncology , 22(4):e136–e172, Apr2021. ISSN 1470-2045. doi: 10.1016/S1470-2045(20)30751-8. URL https://doi.org/10.1016/S1470-2045(20)30751-8 .Scott Mayer McKinney, Marcin Sieniek, Varun Godbole, Jeffrey De Fauw, and ShravyaShetty et al. International evaluation of an ai system for breast cancer screening. Nature ,577(7788):89–94, Jan 2020. ISSN 1476-4687. doi: 10.1038/s41586-019-1799-6. URLhttps://doi.org/10.1038/s41586-019-1799-6 .Dan Milea, Raymond P. Najjar, Zhubo Jiang, Tien Y. Wong, and Val ́ erie Biousse et al.Artificial intelligence to detect papilledema from ocular fundus photographs. New England4The PI-CAI ChallengeJournal of Medicine , 382(18):1687–1695, 2020. doi: 10.1056/NEJMoa1917130. PMID:32286748.Nicolas Mottet, Roderick C.N. van den Bergh, Erik Briers, Peter-Paul M. 
Willemse, andPhilip Cornford et al. EAU-EANM-ESTRO-ESUR-SIOG Guidelines on Prostate Can-cer—2020 Update. Part 1: Screening, Diagnosis, and Local Treatment with CurativeIntent. European Urology , 79(2):243–262, 2021. ISSN 0302-2838. doi: https://doi.org/10.1016/j.eururo.2020.09.042. URL https://www.sciencedirect.com/science/article/pii/S0302283820307697 .NICE. NICE Guidance – Prostate cancer: diagnosis and management. BJU Inter-national , 124(1):9–26, 2019. doi: https://doi.org/10.1111/bju.14809. URL https://bjui-journals.onlinelibrary.wiley.com/doi/abs/10.1111/bju.14809 .C. Roest, T. C. Kwee, A. Saha, J. J. F ̈ utterer, D. Yakar, and H. Huisman. Ai-assistedbiparametric mri surveillance of prostate cancer: feasibility study. European Radiology ,33(1):89–96, Jan 2023. ISSN 1432-1084. doi: 10.1007/s00330-022-09032-7. URL https://doi.org/10.1007/s00330-022-09032-7 .Anindo Saha, Jasper Jonathan Twilt, Joeran Sander Bosma, Bram van Ginneken, DeryaYakar, Mattijs Elschot, Jeroen Veltman, Jurgen F ̈ utterer, Maarten de Rooij, and HenkjanHuisman. Artificial Intelligence and Radiologists at Prostate Cancer Detection in MRI:The PI-CAI Challenge (Study Protocol), June 2022. URL https://doi.org/10.5281/zenodo.6667655 .Patrick Schelb, Simon Kohl, Jan Philipp Radtke, Manuel Wiesenfarth, Philipp Kickin-gereder, Sebastian Bickelhaupt, Tristan Anselm Kuder, Albrecht Stenzinger, MarkusHohenfellner, Heinz-Peter Schlemmer, Klaus H. Maier-Hein, and David Bonekamp.Classification of cancer at prostate mri: Deep learning versus clinical pi-rads assess-ment. Radiology , 293(3):607–617, 2019. doi: 10.1148/radiol.2019190938. URL https://doi.org/10.1148/radiol.2019190938 . PMID: 31592731.Hyuna Sung, Jacques Ferlay, Rebecca L. Siegel, Mathieu Laversanne, Isabelle Soerjo-mataram, Ahmedin Jemal, and Freddie Bray. Global cancer statistics 2020: Globo-can estimates of incidence and mortality worldwide for 36 cancers in 185 countries.CA: A Cancer Journal for Clinicians , 71(3):209–249, 2021. doi: https://doi.org/10.3322/caac.21660. URL https://acsjournals.onlinelibrary.wiley.com/doi/abs/10.3322/caac.21660 .Kicky G. van Leeuwen, Steven Schalekamp, Matthieu J. C. M. Rutten, Bram van Ginneken,and Maarten de Rooij. Artificial intelligence in radiology: 100 commercially availableproducts and their scientific evidence. European Radiology , 31(6):3797–3804, Jun 2021.ISSN 1432-1084. doi: 10.1007/s00330-021-07892-z. URL https://doi.org/10.1007/s00330-021-07892-z .5 |
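The equal-weight ensemble of the top five case-level predictions, reported above to reach 0.912 AUROC, amounts to averaging the per-case csPCa likelihoods before computing the AUROC. A minimal sketch is given below; the array names, the synthetic placeholder data, and the use of scikit-learn are assumptions and are not taken from the challenge evaluation code.

```python
# Minimal sketch of equal-weight case-level ensembling followed by AUROC evaluation.
# Assumptions: per-model case-level likelihoods are arrays aligned with the reference labels.
import numpy as np
from sklearn.metrics import roc_auc_score

def ensemble_auroc(model_scores, labels):
    """Average case-level csPCa likelihoods over models (equal weights) and return AUROC."""
    ensemble = np.stack(model_scores, axis=0).mean(axis=0)
    return roc_auc_score(labels, ensemble)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)                                    # placeholder case labels
    scores = [np.clip(labels + rng.normal(0.0, 0.6, size=1000), 0.0, 1.0)
              for _ in range(5)]                                              # placeholder model outputs
    print(f"Single-model AUROC: {roc_auc_score(labels, scores[0]):.3f}")
    print(f"Ensemble AUROC:     {ensemble_auroc(scores, labels):.3f}")
```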
_bAp02OXNiT | Medical Imaging with Deep Learning 2023High-resolution 3D Maps of Left Atrial Displacements usingan Unsupervised Image Registration Neural NetworkChristoforos Galazis c.galazis20@imperial.ac.ukAnil Anthony Bharath a.bharath@imperial.ac.ukMarta Varela marta.varela@imperial.ac.ukImperial College London, UKAbstractFunctional analysis of the left atrium (LA) plays an increasingly important role in theprognosis and diagnosis of cardiovascular diseases. Echocardiography-based measurementsof LA dimensions and strains are useful biomarkers, but they provide an incomplete pictureof atrial deformations. High-resolution dynamic magnetic resonance images (Cine MRI)offer the opportunity to examine LA motion and deformation in 3D, at higher spatial res-olution and with full LA coverage. However, there are no dedicated tools to automaticallycharacterise LA motion in 3D. Thus, we propose a tool that automatically segments theLA and extracts the displacement fields across the cardiac cycle. The pipeline is able toaccurately track the LA wall across the cardiac cycle with an average Hausdorff distanceof 2.51±1.3mmand Dice score of 0 .96±0.02.Keywords: Left Atrial, Image Registration Neural Network, Displacement Field Vector.1. IntroductionThe analysis of the anatomy and function of the left atrium (LA) is becoming more impor-tant for the prognosis and diagnosis of cardiac conditions such as atrial fibrillation (AF) orheart failure (HF) (Hoit, 2017; Peters et al., 2021). Structural characteristics of the LA areestablished atrial disease biomarkers (Varela et al., 2017) and analysis of LA deformationshas been explored using speckle-tracking echocardiography (Smiseth et al., 2022). Thesebiomarkers are typically obtained for a single LA view and spatial averages across LA wallregions. Spatiotemporal 3D maps of LA deformation are expected to provide more spe-cific signatures of LA pathology, with greater diagnostic and prognostic value, as has beenshown for the left ventricle (LV) (Duchateau et al., 2020). However, there are currently nopublicly available MRI datasets or adequate image analysis tools to extract high-resolutiondisplacement field vector (DFV) maps of the whole LA.In this paper, we use a novel high-resolution Cine MRI protocol designed specifically forthe LA. These Cine MRI offer information about the LA at higher spatial resolution thanimages of any other existing database. However, given that only a small number of subjectshave been imaged with this protocol, we develop and utilize methods for limited number oftraining images.Aim We propose the following pipeline to automatically obtain high-resolution 3D DFVsof the LA: 1) A few-shot segmentation network (LA-SNet) of the LA across the cardiac cycleto guide the registration; 2) Extraction of the LA segmentation contour and dilation; 3) Anautomatic subject-by-subject image registration of the LA contour image (LA-DNet).©2023 CC-BY 4.0, C. Galazis, A. Anthony Bharath & M. Varela.Galazis Anthony Bharath Varela2. Methods2.1. DataWe use 3D LA Cine MRI bSSFP scans acquired using a novel acquisition protocol (Varelaet al., 2020). In summary, they were acquired in a single breath-hold, with resolution of1.72×1.72×2.00mm3and 20 phases across the cardiac cycle. Phase 0 corresponds tocardiac end diastole (smallest LA volume). As proof of concept, we analyse images from sixsubjects: three healthy volunteers and three subjects with suspected cardiovascular disease.2.2. 
PreprocessingThe images are cropped to a size of 96 ×96×36 voxels, centered at the LA. Additionally,they are translated such that the LA centroid is stationary across the cardiac cycle and theirintensity is min-max normalized. We manually segment the LA across the entire cardiaccycle to use as ground truth. From the segmented data, the contour is extracted and dilatedusing a 2 voxel radius spherical structure, which is used to mask the images.2.3. ModelDetails of LA-SNet and LA-DNet are in Figure 1, which their parameters have been ex-perimentally selected. They share the same architecture that is based on a 3D U-Net(Ronneberger et al., 2015). The models incorporate squeeze and excitation blocks (Huet al., 2018), which were already applied to LV MRI segmentation and image registration(Galazis et al., 2022). LA-DNet also utilizes a spatial transformer (Jaderberg et al., 2015)to obtain the DFV in an unsupervised way. The DFV is smoothed using a bending energyregularizer (Rueckert et al., 1999). LA-SNet is trained on the augmented whole LA imageson cardiac phases 0, 8, and 15 and predicts the respective LA segmentation. LA-DNet takesthe two contour masked images (moving: cardiac phase 0; fixed: cardiac phase [0-19]) togenerate a displacement field that resamples the moving to the target image.Figure 1: The proposed pipeline to extract high-resolution LA displacement field maps.2High-resolution 3D Maps of Left Atrial Displacements3. ResultsLA-SNet can accurately segment the LA across the cardiac cycle, with an average Hausdorffdistance (HD) of 3 .03±1.12mmand Dice score (DS) of 0 .95±0.02. Similarly, LA-DNet isable to accurately track the LA wall across the cycle (see Figure 2). The LA segmentationsobtained when adding the estimated DFV to the LA segmentation in phase 0 compareextremely well with the GT segmentations: HD = 2.51±1.3mm;DS= 0.96±0.02. Itoutperformed previously used symmetric diffeomorphic image normalization from ANTspackage (Avants et al., 2009) which obtained ( HD = 2.57±1.16mm;DS= 0.85±0.04)to the same LA contour images and ( HD = 3.35±1.48mm;DS= 0.77±0.09) whenapplied to the unsegmented LA images. Using LA-DNet directly on the unsegmented LAimages as inputs also led to poor results ( HD= 3.35±1.05mm;DS= 0.78±0.07). TheLA-DNet estimated DFVs are spatially and temporally smoother and the Jacobian of thedeformation gradient is consistent with the known volumetric changes of the LA, as can beseen in: https://tinyurl.com/2eju3r9f.Figure 2: The image registration metrics plotted for LA-DNet and ANTs: A) Hausdorffdistance (HD) and, B) Dice score (DS). HD and DS are obtained by compar-ing manual LA segmentations across the cardiac cycle with segmentations trans-formed using the estimated DFV on phase 0.4. ConclusionsThe proposed pipeline is able to extract DFVs that accurately track the LA wall across thecardiac cycle. The estimated high-resolution 3D LA DFVs pave the way towards potentiallydetecting regional functional biomarkers for conditions such as AF or HF. They may alsoprovide useful information for the identification of LA fibrosis (Sohns and Marrouche, 2020).The LA registration across the cardiac cycle is more challenging than that of the LV.For the latter, several registration tools are available (Hernandez et al., 2021; De Vos et al.,2019), but these performed poorly for the LA registration task. The usual assumption, thatthe intensity of the different image components (e.g. the LV myocardium) is constant acrossthe cardiac cycle, is not valid for the LA. 
This is because the LA myocardium is very thin(Varela et al., 2017), and thus barely identifiable in bSSFP images; and the LA blood poolvoxels’ intensity depends on blood velocity and is therefore very variable across the cardiaccycle. We successfully propose a different approach for automatically LA registration, usingLA contours from automated segmentations as inputs, training it on a subject by subjectbasis to allow its deployment to small datasets of Cine MRI of the LA.3Galazis Anthony Bharath VarelaAcknowledgmentsThis work was supported by the UKRI CDT in AI for Healthcare http://ai4health.io(Grant No. EP/S023283/1) and the British Heart Foundation Centre of Research Excellenceat Imperial College London (RE/18/4/34215). We acknowledge computational resourcesand support provided by the Imperial College Research Computing Service ( http://doi.org/10.14469/hpc/2232 ). Last but not least, we thank the volunteers for allowing the useof their data for this research.ReferencesBrian B Avants, Nick Tustison, Gang Song, et al. Advanced normalization tools (ants).Insight j , 2(365):1–35, 2009.Bob D De Vos, Floris F Berendsen, Max A Viergever, Hessam Sokooti, Marius Staring, andIvana Iˇ sgum. A deep learning framework for unsupervised affine and deformable imageregistration. Medical image analysis , 52:128–143, 2019.Nicolas Duchateau, Andrew P King, and Mathieu De Craene. Machine learning approachesfor myocardial motion and deformation analysis. Frontiers in cardiovascular medicine , 6:190, 2020.Christoforos Galazis, Huiyi Wu, Zhuoyu Li, Camille Petri, Anil A Bharath, and MartaVarela. Tempera: Spatial transformer feature pyramid network for cardiac mri segmen-tation. In Statistical Atlases and Computational Models of the Heart. Multi-Disease,Multi-View, and Multi-Center Right Ventricular Segmentation in Cardiac MRI Chal-lenge: 12th International Workshop, STACOM 2021, Held in Conjunction with MICCAI2021, Strasbourg, France, September 27, 2021, Revised Selected Papers , pages 268–276.Springer, 2022.Karen Andrea Lara Hernandez, Theresa Rienm ̈ uller, Daniela Baumgartner, and ChristianBaumgartner. Deep learning in spatiotemporal cardiac imaging: A review of methodolo-gies and clinical usability. Computers in Biology and Medicine , 130:104200, 2021.Brian D Hoit. Evaluation of left atrial function: current status. Structural Heart , 1(3-4):109–120, 2017.Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of theIEEE conference on computer vision and pattern recognition , pages 7132–7141, 2018.Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks.Advances in neural information processing systems , 28, 2015.Dana C Peters, J ́ erˆ ome Lamy, Albert J Sinusas, and Lauren A Baldassarre. Left atrial eval-uation by cardiovascular magnetic resonance: sensitive and unique biomarkers. EuropeanHeart Journal-Cardiovascular Imaging , 23(1):14–30, 2021.Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks forbiomedical image segmentation. In Medical Image Computing and Computer-Assisted4High-resolution 3D Maps of Left Atrial DisplacementsIntervention–MICCAI 2015: 18th International Conference, Munich, Germany, October5-9, 2015, Proceedings, Part III 18 , pages 234–241. Springer, 2015.Daniel Rueckert, Luke I Sonoda, Carmel Hayes, Derek LG Hill, Martin O Leach, and David JHawkes. Nonrigid registration using free-form deformations: application to breast mrimages. 
IEEE transactions on medical imaging , 18(8):712–721, 1999.Otto A Smiseth, Tomasz Baron, Paolo N Marino, Thomas H Marwick, and Frank A Flach-skampf. Imaging of the left atrium: pathophysiology insights and clinical utility. EuropeanHeart Journal-Cardiovascular Imaging , 23(1):2–13, 2022.Christian Sohns and Nassir F Marrouche. Atrial fibrillation and cardiac fibrosis. Europeanheart journal , 41(10):1123–1131, 2020.Marta Varela, Felipe Bisbal, Ernesto Zacur, Antonio Berruezo, Oleg V Aslanidi, LluisMont, and Pablo Lamata. Novel computational analysis of left atrial anatomy improvesprediction of atrial fibrillation recurrence after ablation. Frontiers in physiology , 8:68,2017.Marta Varela, Sandro Queir ́ os, Mustafa Anjari, Teresa Correia, Andrew P King, Anil ABharath, and Jack Lee. Strain maps of the left atrium imaged with a novel high-resolutioncine mri protocol. In 2020 42nd Annual International Conference of the IEEE Engineeringin Medicine & Biology Society (EMBC) , pages 1178–1181. IEEE, 2020.5 |
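The evaluation described above (applying the estimated DFV to the phase-0 LA segmentation and comparing the result with the manual segmentation of each cardiac phase) can be sketched with a spatial-transformer-style resampling step. The displacement-field convention (a backward sampling field in voxel units, ordered (dx, dy, dz)) and the nearest-neighbour interpolation are assumptions made for illustration; this is not the LA-DNet implementation.

```python
# Minimal sketch: warp a phase-0 LA mask with a displacement field and score it against a
# ground-truth mask. Assumptions: the field is a backward sampling field in voxel units with
# shape (D, H, W, 3) ordered (dx, dy, dz); this is an illustration, not the authors' code.
import torch
import torch.nn.functional as F

def warp_mask(mask: torch.Tensor, dfv: torch.Tensor) -> torch.Tensor:
    """Resample a binary (D, H, W) mask at voxel positions displaced by dfv."""
    d, h, w = mask.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij")
    coords = torch.stack((xx, yy, zz), dim=-1).float() + dfv        # absolute sampling coordinates
    scale = torch.tensor([w - 1, h - 1, d - 1], dtype=torch.float32)
    grid = 2.0 * coords / scale - 1.0                               # normalise to [-1, 1] for grid_sample
    warped = F.grid_sample(mask[None, None].float(), grid[None],
                           mode="nearest", align_corners=True)
    return warped[0, 0]

def dice_score(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-6) -> float:
    a, b = a.bool(), b.bool()
    inter = (a & b).sum().float()
    return float((2 * inter + eps) / (a.sum() + b.sum() + eps))

if __name__ == "__main__":
    mask0 = torch.zeros(36, 96, 96)                                 # toy volume with the paper's 96x96x36 crop size
    mask0[10:25, 30:60, 30:60] = 1                                  # placeholder phase-0 LA blob
    identity = torch.zeros(36, 96, 96, 3)                           # zero displacement field
    print(dice_score(warp_mask(mask0, identity), mask0))            # ~1.0 by construction
```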
nTnAm_El0RC | Medical Imaging with Deep Learning 2023Segmentation of Lipid Droplets in Histological ImagesDaniel Budelmann1daniel.budelmann@mevis.fraunhofer.deCao Qing2caoqingcf@gmail.comHendrik Laue3hendrik.laue@mevis.fraunhofer.deMohamed Albadry4,5Mohamed.Albadry@med.uni-jena.deUta Dahmen4Uta.Dahmen@med.uni-jena.deLars Ole Schwen3ole.schwen@mevis.fraunhofer.de1Fraunhofer Institute for Digital Medicine MEVIS, L ̈ ubeck, Germany2Machine Learning for Computer Vision, TU Dresden, Dresden, Germany3Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany4Experimental Transplantation Surgery, Department of General, Visceral and Vascular Surgery,University Hospital Jena, Jena, Germany5Department of Veterinary Pathology, Menoufia University, EgyptAbstractSteatosis is a common liver disease characterized by the accumulation of lipid dropletsin cells. Precise and reliable fat droplet identification is essential for automatic steatosisquantification in histological images. We trained a nnU-Net to automatically segment lipidvacuoles in whole-slide images using semi-automatically generated reference annotations.We evaluated the performance of the trained model on two out-of-distribution datasets.The trained model’s average F1 scores (0.801 and 0.804) suggest a high potential of thennU-Net framework for the automatic segmentation of lipid vacuoles.Keywords: Hepatic steatosis, histology, whole-slide images, nnU-Net, segmentation1. IntroductionSteatosis, the fat accumulation in liver cells, is the predominant symptom of alcoholic andnon-alcoholic fatty liver disease.Quantification of steatosis is an important factor in the decision to proceed with livertransplantation, and visual inspection of tissue stained with hematoxylin and eosin (H&E)is a common method (Roy et al., 2020).Semi- and fully-automatic image segmentation approaches have been developed forcomputer-aided quantification of steatosis (Homeyer et al., 2015; Roy et al., 2020). Trainingmachine learning methods for this purpose requires annotated image data, which is tediousto create manually.We explored an approach using semi-automatically generated and thus imperfect datato train an off-the-shelf medical image segmentation method, namely the self-configuringnnU-Net (Isensee et al., 2021).©2023 CC-BY 4.0, D. Budelmann, C. Qing, H. Laue, M. Albadry, U. Dahmen & L.O. Schwen.Budelmann Qing Laue Albadry Dahmen Schwen2. Datasets and MethodsImage Datasets Besides one dataset (A) used for training and in-distribution evalua-tion, we evaluated the performance on two out-of-distribution datasets to assess intra- (B)and inter-species (C) generalizability. Dataset A1consists of 19 whole-slide images (WSI)of H&E-stained tissue from male C57BL/6J mice with diet-induced steatosis of differentseverity.Dataset B2consists of 36 WSI of H&E-stained slides of one male C57BL/6N mouse withdiet-induced steatosis. The images of datasets A and B have a resolution of 908 nm/pixel.Dataset C3contains H&E-stained human liver tissue scanned at 20 ×objective magni-fication and contains pixel-level annotations (exact resolution not specified).Reference Data Preparation Dataset A was annotated by a non-expert using thesemi-automatic approach by Homeyer et al. (2015); images and segmentation masks weresubsequently tiled in 256 ×256 images. For training and validation, 16 WSI and correspond-ing annotations of dataset A were used, three (one each for every steatosis extent) were keptback as a test set. 
Dataset B was annotated in the same way as dataset A; a randomly selected subset of segmentation masks was subsequently corrected manually by visual inspection using GIMP-2.10. Reference segmentations for datasets A and B are available from https://doi.org/10.5281/zenodo.7802210.
To obtain a compatible image size and resolution for dataset C, we mirrored and concatenated these patches in both dimensions and downsampled them subsequently. This introduced artifacts (mirrored partial cells and lipid vacuoles) near the stitching boundary. To avoid these artifacts in the evaluation, a border of 25 pixels (approximately 25 μm, the average size of the structures of interest) extending from the mirroring axis was omitted.
Training and Evaluation We trained the 2D nnU-Net (Isensee et al., 2021) on the generated reference dataset A using all three color channels, for 1000 epochs, and with five different folds of training/validation split.
We quantified the segmentation accuracy of the trained nnU-Net by a pixel-wise F1 score over all tiles containing tissue. This prevents irrelevant (background-only) regions from artificially simplifying the task. Code is available from https://doi.org/10.5281/zenodo.7802210.

3. Results and Discussion
Trained on data generated using a semi-automatic segmentation method with minimal effort, the nnU-Net generalized well to different datasets from the same and from a different species (F1 scores of 0.732 and 0.804, respectively, see Table 1). A higher F1 score on the corrected subset of B (0.801, versus 0.744 on the uncorrected subset) indicates that the nnU-Net reflects human vision better than the semi-automatic approach. The nnU-Net identifies smaller droplets as fat, whereas the expert annotation in dataset C does not, see Figure 1.

1. available from https://doi.org/10.15490/FAIRDOMHUB.1.STUDY.1070.1 , (Albadry et al., 2022)
2. available from https://doi.org/10.5281/zenodo.4738561 , (Budelmann et al., 2022)
3. available from https://figshare.com/s/d75b129d969b4f463168 , (Roy et al., 2020)

Table 1: Mean F1 scores for the segmentation results of the trained nnU-Net models
Dataset               # Patches   F1 Score
Test A                4512        0.867
B                     65176       0.732
subset B              232         0.744
subset B, corrected   237         0.801
C                     736         0.804

Figure 1: Left: example tile of the corrected subset B with a visualization of true negative (black), true positive (white), false negative (teal), and false positive (orange) pixels; right: example tile of dataset C with the evaluation area.

Acknowledgments
This work was funded by Deutsche Forschungsgemeinschaft (DFG) via project 410848700 (SteaPKMod).

References
M. Albadry et al. Periportal steatosis in mice affects distinct parameters of pericentral drug metabolism. Sci Rep, 12(1):21825, Dec 2022. doi: 10.1038/s41598-022-26483-6.
D. Budelmann et al. Automated detection of portal fields and central veins in whole-slide images of liver tissue. J Pathol Inform, 13(100001), 2022. doi: 10.1016/j.jpi.2022.100001.
A. Homeyer et al. Fast and accurate identification of fat droplets in histological images. Comput Methods Programs Biomed, 121(2):59–65, 2015. doi: 10.1016/j.cmpb.2015.05.009.
F. Isensee et al. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods, 18(2):203–211, Feb 2021. doi: 10.1038/s41592-020-01008-z.
M. Roy et al. Deep-learning-based accurate hepatic steatosis quantification for histological assessment of liver biopsies. Lab Invest, 100(10):1367–1383, 2020. doi: 10.1038/s41374-020-0463-y. |
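The tile-based evaluation used in this paper (a pixel-wise F1 score accumulated only over tiles that contain tissue) can be written as a short routine. The sketch below is a plain illustration under assumptions: tiles are 256x256 binary numpy arrays and a separate tissue mask decides which tiles enter the evaluation; it is not the evaluation code released at the link above.

```python
# Minimal sketch of a pixel-wise F1 score restricted to tissue-containing tiles.
# Assumptions: pred_tiles, gt_tiles, and tissue_tiles are aligned iterables of
# 256x256 binary numpy arrays; this is not the authors' released implementation.
import numpy as np

def pixelwise_f1(pred_tiles, gt_tiles, tissue_tiles, eps=1e-8):
    """Accumulate TP/FP/FN over tissue-containing tiles, then return the pixel-wise F1 score."""
    tp = fp = fn = 0
    for pred, gt, tissue in zip(pred_tiles, gt_tiles, tissue_tiles):
        if not tissue.any():                    # background-only tile: excluded by design
            continue
        pred, gt = pred.astype(bool), gt.astype(bool)
        tp += np.logical_and(pred, gt).sum()
        fp += np.logical_and(pred, ~gt).sum()
        fn += np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2 * precision * recall / (precision + recall + eps)
```

Accumulating counts globally rather than averaging per-tile F1 values keeps tiles with very few lipid pixels from dominating the score.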
c0KnufAuX6k | Medical Imaging with Deep Learning 2023Robust Identification of White Matter Hyperintensities inUncontrolled Settings Using Deep LearningAlice Schiavone1,2alisch@di.ku.dkSebastian Nørgaard Llambias1snl@di.ku.dkJacob Johansen2jj@cerebriu.comSilvia Ingala2si@cerebriu.comAkshay Pai2ap@cerebriu.comMads Nielsen1,2madsn@di.ku.dkMostafa Mehdipour Ghazi1ghazi@di.ku.dk1Pioneer Centre for AI, Department of Computer Science, University of Copenhagen, Denmark2Cerebriu A/S, Copenhagen, DenmarkAbstractWhite matter hyperintensities (WMH) are associated with an increased risk of stroke,cognitive decline, and dementia. A robust, yet accurate detection of WMH can help withthe prevention of more lesions from forming. The task is still challenging as the lesions areoften small and irregular. Hence, we propose a robust deep learning-based method for theautomatic segmentation of WMH only using fluid-attenuated inversion recovery (FLAIR)scans and MRI-specific data augmentation and compare it with state-of-the-art methods.The methods are tested on public and private data, and we show that our model is morerobust to domain shift and achieves higher segmentation accuracy than the alternatives.Keywords: Deep learning, domain shift, data augmentation, white matter hyperintensity.1. IntroductionWhite matter hyperintensities (WMH) of presumed vascular origin are common findings onbrain magnetic resonance imaging (MRI), typically assessed in fluid-attenuated inversionrecovery (FLAIR) sequences (Pantoni, 2010). These are associated with vascular risk factorsand predict an increased risk of stroke, dementia, depression, cognitive impairment, andmobility, both in cross-sectional and longitudinal studies (Wardlaw et al., 2015). Hence,segmentation and detection of WMH are crucial in the analysis of the brain.Automatic segmentation of WMH attempts to replace the time-consuming, expensiveprocess of manual annotation. Still, the task is challenging because WMH are often smalland irregular, making the classes highly imbalanced. Besides, data availability and vari-ability make the problem more complex due to dealing with sensitive data and differencesin pathologies, anatomies, MRI scanners, and acquisition protocols.In this study, we use 3D U-Net models (Isensee et al., 2020) and train them on FLAIRimages for WMH segmentation using the common image data augmentation techniques usedin (Isensee et al., 2020) and MRI-specific data augmentation proposed by Mehdipour Ghaziand Nielsen (2022). We compare the segmentation accuracy of these two methods with thestate-of-the-art method (Li et al., 2018) known as the winner of the MICCAI 2017 WMHSegmentation Challenge (Kuijf et al., 2019), which benefits from both T1-weighted andFLAIR MRI scans for WMH segmentation. The obtained results show that the proposedmethod identifies WMH in two different datasets significantly better than the alternatives.©2023 CC-BY 4.0, A.S. , S.N.L. , J.J. , S.I. , A.P. , M.N. & M.M.G. .2. Methods2.1. Study DataThe MICCAI 2017 WMH Segmentation Challenge training ( n= 60) and testing ( n= 110)datasets were used for training and testing purposes, respectively. The used datasets containFLAIR images from five different scanners, two of which were excluded from the trainingset and assigned for testing. They were manually annotated as background, WMH, andother (non-WMH) pathologies. 
Additionally, we used an external test set of 22 FLAIR images from an in-house dataset acquired from the US and India.
Since the WMH load has a major impact on model performance (Gaubert et al., 2023), we divided our test sets into low-load and high-load subsets with respect to the volume of WMH and other abnormalities (OA). We refer to images with a load of other abnormalities greater than 1 mL as WMH+OA and to images without significant other abnormalities as WMH. Either group can be further divided into a “high WMH load” set if the WMH volume is greater than or equal to 10 mL, or into a “low WMH load” set otherwise. The threshold values were chosen based on visual inspection.

2.2. Evaluation Metrics
Given the ground-truth annotations and predicted segmentations, we evaluate the model performances using the overlap-based metric of the Dice similarity coefficient (DSC) (Dice, 1945) and the distance-based metric of volume symmetry (VS) (Taha and Hanbury, 2015).

2.3. Deep Learning Models
We compare three different deep learning-based models for WMH segmentation. We use the state-of-the-art method of (Li et al., 2018), known as the winner of the MICCAI 2017 WMH Segmentation Challenge, which uses an ensemble of 2D U-Nets (Ronneberger et al., 2015). Moreover, the nnUNet (Isensee et al., 2020) is used as a segmentation framework using U-Nets with deep supervision, standard image data augmentation, and auto-configuration of network parameters. Finally, we train 3D U-Nets with the MRI-specific data augmentation proposed by Mehdipour Ghazi and Nielsen (2022).

3. Experiments and Results
We trained the 3D models (nnU-Net and the proposed) using a combination of cross-entropy loss and Dice loss and optimized them with SGD using a Nesterov momentum of μ = 0.99 and an initial learning rate of α = 0.01. The training data were randomly split into training and validation sets in a 5-fold cross-validation fashion, where in each fold 80% (48 samples) were assigned for training and 20% (12 samples) for validation.

Table 1: Test segmentation accuracy (mean ± SD) of different models. The best results are in boldface.
                                       MICCAI data         In-house data
                                       DSC       VS        DSC       VS
Challenge winner (Li et al., 2018)     .64±.17   .76±.17   .42±.21   .71±.28
3D nnUNet (Isensee et al., 2020)       .73±.13   .91±.09   .41±.27   .63±.29
Our proposed                           .80±.10   .92±.09   .49±.23   .72±.28

Figure 1: The obtained DSC distribution of different models on WMH segmentation of the MICCAI challenge test set with/without other abnormalities and w.r.t. the WMH load.
Figure 2: Two samples of the in-house data with high WMH load: a scan with WMH on the left (WMH 15.7 mL), and a scan with WMH and other abnormalities on the right (WMH 19.7 mL, tumor 15 mL).

The obtained segmentation results are shown in Tables 1 and 2. As can be seen, by only using FLAIR images, the proposed model achieves the best WMH segmentation accuracy in both test sets compared to the alternatives. We also observe that the WMH load of other abnormalities has less impact on the accuracy of the MICCAI test set (see Fig. 1). We achieve a total DSC of 0.80 on the MICCAI test set, which is 16% better than the challenge winner. The total DSC on the in-house test set is 0.49, which is 7% better than the challenge winner. More specifically, as shown in Table 2 and Fig. 2, we obtain DSCs of 0.87 (+9%) and 0.78 (+8%) on subjects with high and low WMH load, respectively. Still, when other abnormalities are present, the accuracy drops to 0.81 (+6%) and 0.72 (+19%).

4.
ConclusionWe proposed a robust deep learning-based method for WMH segmentation using FLAIRimages. The robustness was tested using two different datasets with scans from differentdevices and acquisition parameters unseen to the trained models and compared the resultswith two state-of-the-art methods. We showed that FLAIR sequences are enough to achievea higher WMH segmentation accuracy than the state-of-the-art, even for out-of-distributiondata. However, the models cannot generalize well to the data with a low WMH load.Table 2: Segmentation accuracy (mean ±SD) of different models and loads. The best results are in boldface.WMH+OA WMHHigh WMH load Low WMH load High WMH load Low WMH loadMethods (MICCAI test set) DSC VS DSC VS DSC VS DSC VSChallenge winner (Li et al., 2018) .75±.12 .84±.13 .63±.10 .76±.14 .78±.06 .88±.08 .50±.15 .65±.153D nnUNet (Isensee et al., 2020) .77±.13 .89±.15 .72±.07 .89±.08 .82±.06 .94±.05 .65±.12 .89±.10Our proposed .81±.14 .89±.16 .75±.05 .88±.09 .87±.04 .95±.04 .74±.09 .90±.09Methods (In-house test set)Challenge winner (Li et al., 2018) .53±.18 .68±.10 .27±.18 .56±.34 .70±.09 .89±.04 .46±.12 .86±.103D nnUNet (Isensee et al., 2020) .69±.09 .89±.06 .17±.15 .44±.28 .77±.05 .92±.06 .50±.17 .69±.23Our proposed .72±.04 .89±.09 .31±.18 .61±.33 .78±.01 .96±.02 .55±.16 .72±.243AcknowledgmentsThis project has received funding from Innovation Fund Denmark under grant number 1063-00014B, Lundbeck Foundation with reference number R400-2022-617, and Pioneer Centrefor AI, Danish National Research Foundation, grant number P1.ReferencesLee R. Dice. Measures of the amount of ecologic association between species. Ecology , 26(3):297–302, July 1945. doi: 10.2307/1932409.Malo Gaubert, Andrea Dell’Orco, Catharina Lange, et al. Performance evaluation of au-tomated white matter hyperintensity segmentation algorithms in a multicenter cohorton cognitive impairment and dementia. Frontiers in Psychiatry , 13, January 2023. doi:10.3389/fpsyt.2022.1010273.Fabian Isensee, Paul F. Jaeger, Simon A. A. Kohl, Jens Petersen, and Klaus H. Maier-Hein.nnU-Net: a self-configuring method for deep learning-based biomedical image segmenta-tion. Nature Methods , 18(2):203–211, December 2020. doi: 10.1038/s41592-020-01008-z.Hugo J Kuijf, J Matthijs Biesbroek, Jeroen De Bresser, Rutger Heinen, Simon Andermatt,Mariana Bento, Matt Berseth, Mikhail Belyaev, M Jorge Cardoso, Adria Casamitjana,et al. Standardized assessment of automatic segmentation of white matter hyperintensitiesand results of the WMH segmentation challenge. IEEE Transactions on Medical Imaging ,38(11):2556–2568, 2019.Hongwei Li, Gongfa Jiang, Jianguo Zhang, Ruixuan Wang, Zhaolei Wang, Wei-Shi Zheng,and Bjoern Menze. Fully convolutional network ensembles for white matter hyperin-tensities segmentation in MR images. NeuroImage , 183:650–665, December 2018. doi:10.1016/j.neuroimage.2018.07.005.Mostafa Mehdipour Ghazi and Mads Nielsen. FAST-AID Brain: Fast and accuratesegmentation tool using artificial intelligence developed for brain. arXiv preprintarXiv:2208.14360 , 2022. doi: 10.48550/arXiv.2208.14360.Leonardo Pantoni. Cerebral small vessel disease: from pathogenesis and clinical charac-teristics to therapeutic challenges. The Lancet Neurology , 9(7):689–701, July 2010. doi:10.1016/s1474-4422(10)70104-6.Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks forbiomedical image segmentation. arXiv preprint arXiv:1505.04597 , 2015. doi: 10.48550/arXiv.1505.04597.Abdel Aziz Taha and Allan Hanbury. 
Metrics for evaluating 3D medical image segmentation:analysis, selection, and tool. BMC Medical Imaging , 15(1), August 2015. doi: 10.1186/s12880-015-0068-x.Joanna M. Wardlaw, Maria C. Vald ́ es Hern ́ andez, and Susana Mu ̃ noz-Maniega. What arewhite matter hyperintensities made of? Journal of the American Heart Association , 4(6), June 2015. doi: 10.1161/jaha.114.001140.4 |
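A minimal sketch of the two reported metrics and the load-based grouping described in Section 2.1 is given below. The volume-symmetry formula follows the common definition, 1 − |Vp − Vg| / (Vp + Vg), and the voxel-volume argument and function names are assumptions; the authors' exact implementation is not reproduced here.

```python
# Minimal sketch of DSC, volume symmetry, and the 1 mL / 10 mL load stratification described above.
# Assumptions: masks are binary numpy arrays and the per-voxel volume in mL is known;
# the VS formula follows the common definition and is not copied from the authors' code.
import numpy as np

def dsc(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)

def volume_symmetry(pred, gt, eps=1e-8):
    """Volume symmetry: 1 - |Vp - Vg| / (Vp + Vg), computed on voxel counts."""
    vp, vg = float(pred.sum()), float(gt.sum())
    return 1.0 - abs(vp - vg) / (vp + vg + eps)

def wmh_subgroup(wmh_mask, oa_mask, voxel_volume_ml):
    """Assign a scan to WMH vs. WMH+OA and to high vs. low WMH load (1 mL and 10 mL cut-offs)."""
    wmh_ml = wmh_mask.sum() * voxel_volume_ml
    oa_ml = oa_mask.sum() * voxel_volume_ml
    group = "WMH+OA" if oa_ml > 1.0 else "WMH"
    load = "high" if wmh_ml >= 10.0 else "low"
    return group, load
```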
d1O5xjKX_yd | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission

On the dice loss variants and sub-patching

Hoel Kervadec1 hoel@kervadec.science
Marleen de Bruijne1,2 marleen.debruijne@erasmusmc.nl
1 Erasmus MC, Rotterdam, The Netherlands
2 University of Copenhagen, Denmark
Editors: Under Review for MIDL 2023

Abstract
The soft-Dice loss is a very popular loss for image semantic segmentation in the medical field, and is often combined with the cross-entropy loss. It has recently been shown that the gradient of the dice loss is a “negative” of the ground truth, and that its supervision can be trivially mimicked by multiplying the predicted probabilities with a pre-computed “gradient-map” (Kervadec and de Bruijne, 2023). In this short paper, we study the properties of the dice loss and two of its variants (Milletari et al., 2016; Sudre et al., 2017) when sub-patching is required and no foreground is present. As theory and experiments show, this introduces divisions by zero which are difficult to handle gracefully while maintaining good performance. On the contrary, the mime loss of (Kervadec and de Bruijne, 2023) proved to be far more suited for sub-patching and handling of empty patches.
Keywords: Semantic segmentation, full supervision, dice loss

1. Background
The Dice coefficient, measuring the overlap between two areas, can be written as
DSC(y, s; k) := \frac{2\,|\Omega^{(k)}_y \cap \Omega^{(k)}_s|}{|\Omega^{(k)}_y| + |\Omega^{(k)}_s|} = \frac{2 \sum_{i \in \Omega} y^{(i,k)} s^{(i,k)}}{\sum_{i \in \Omega} \left[ y^{(i,k)} + s^{(i,k)} \right]},
with \Omega \subset \mathbb{R}^D a D-dimensional image space, y^{(\cdot,\cdot)} : (\Omega \times \mathcal{K}) \to \{0,1\} a ground truth as a binary function, and s^{(\cdot,\cdot)} : (\Omega \times \mathcal{K}) \to \{0,1\} a predicted segmentation. \mathcal{K} = \{0, 1, \ldots, K\} is the set of classes to segment, 0 being the background class and K the number of object classes. \Omega^{(k)}_y := \{i \in \Omega \mid y^{(i,k)} = 1\} \subseteq \Omega denotes the subset of the image space where y is of class k. With continuous probabilities s^{(\cdot,\cdot)}_\theta \in [0,1] we can define a Dice loss:
\mathcal{L}_{\mathrm{DSC}}(y, s_\theta) := \frac{1}{|\mathcal{K}|} \sum_{k \in \mathcal{K}} \left[ 1 - \frac{2 \sum_{i \in \Omega} y^{(i,k)} s^{(i,k)}_\theta}{\sum_{i \in \Omega} \left[ y^{(i,k)} + s^{(i,k)}_\theta \right]} \right]. \quad (1)
It has been shown (Kervadec and de Bruijne, 2023) that its gradient w.r.t. the softmax takes the following form:
\frac{\partial \mathcal{L}_{\mathrm{DSC}}}{\partial s^{(i,k)}_\theta} =
\begin{cases}
-\dfrac{2\,(U^{(k)} - I^{(k)})}{(U^{(k)})^2} & \text{if } y^{(i,k)} = 1, \\
\dfrac{2\, I^{(k)}}{(U^{(k)})^2} & \text{otherwise},
\end{cases} \quad (2)
with I^{(k)} = \sum_{i \in \Omega} y^{(i,k)} s^{(i,k)}_\theta and U^{(k)} = \sum_{i \in \Omega} \left[ y^{(i,k)} + s^{(i,k)}_\theta \right]. This means that the gradient of the dice loss takes only two different values over the whole image, as a weighted negative of y.
©2023 CC-BY 4.0, H. Kervadec & M. de Bruijne.
Moreover, (Kervadec and de Bruijne, 2023) has shown that the supervision of the Dice loss can be mimicked with the following simple loss:
\mathcal{L}_{\mathrm{Mime}}(y, s_\theta) := \omega_y^\top s_\theta, \quad (3)
with \omega_y \in \mathbb{R}^{|\mathcal{K}||\Omega|} a flattened, pre-computed gradient map, and s_\theta \in [0,1]^{|\mathcal{K}||\Omega|} the flattened predicted probabilities. With y \in \{0,1\}^{|\mathcal{K}||\Omega|} the flattened ground truth, we can simply set \omega_y = -y\,a + (1 - y)\,b with a, b > 0. In this paper, we set a and b based on the class distribution over the whole dataset \mathcal{D} = \{(x_n, y_n)\}_{n=1}^N, i.e. a^{(k)} = \frac{1}{|\mathcal{D}|} \sum_{n \in \mathcal{D}} |\Omega^{(k)}_{y_n}| and b^{(k)} = \frac{1}{|\mathcal{D}|} \sum_{n \in \mathcal{D}} \left[ |\Omega| - |\Omega^{(k)}_{y_n}| \right].
Some well-known variants have been introduced to better handle imbalanced tasks. The Generalized Dice Loss (Sudre et al., 2017) is based on the Generalized Dice Score (Crum et al., 2006):
\mathcal{L}_{\mathrm{GDL}}(y, s_\theta) := 1 - \frac{2 \sum_{k \in \mathcal{K}} w^{(k)} \sum_{i \in \Omega} y^{(i,k)} s^{(i,k)}_\theta}{\sum_{k \in \mathcal{K}} w^{(k)} \sum_{i \in \Omega} \left[ y^{(i,k)} + s^{(i,k)}_\theta \right]}, \quad (4)
with w^{(k)} = 1 / \left( \sum_{i \in \Omega} y^{(i,k)} \right)^2. V-Net (Milletari et al., 2016) slightly modifies the base dice loss by squaring the probabilities in the denominator:
\mathcal{L}_{\mathrm{VNet}}(y, s_\theta) := \frac{1}{|\mathcal{K}|} \sum_{k \in \mathcal{K}} \left[ 1 - \frac{2 \sum_{i \in \Omega} y^{(i,k)} s^{(i,k)}_\theta}{\sum_{i \in \Omega} \left[ \big(y^{(i,k)}\big)^2 + \big(s^{(i,k)}_\theta\big)^2 \right]} \right]. \quad (5)

2.
Sub-patching and empty patchesAs the Dice overlap score is defined through the intersection and union of two areas, itcannot be “decomposed” in smaller computations: one cannot compute a series of Dice onsubsets of Ω, and then aggregate them to get the original dice score. This is an issue whentraining a neural network requires sub-patching—either a 3D sub-patch or a 2D slice—due to memory limitations. Computing the dice on the sub-patch is doable, but it losesits semantic meaning. More importantly, it increases the chance of encountering emptyforegrounds (Ω(k)y= Ω(k)s=∅) within the patch, which for all dice variants (1), (4) and (5)will cause divides-by-zero in various places. While it can be relatively mitigated throughcareful addition of small εin their implementation, it is less than ideal and can introduceinstabilities in the training.3. ExperimentsExperiments are performed with a lightweight 2D-ENet (Paszke et al., 2016) using theAdam optimizer (Kingma and Ba, 2014). We report the mean DSC and 95th percentile ofthe Hausdorff distance on the testing set for both datasets. For HD95, when no object ispredicted, we count the diagonal of the scan. When there is no object to predict, and noobject is predicted, we count 0. We evaluate on the following two datasets:2On the dice loss variants and sub-patchingTable 1: Mean testing DSC (%) ↑/ HD95 (mm) ↓..LossDataset ACDC WMHRV Myo LV All WMH Other pathologies AllLDSC 77.2/11.8 79.2/04.9 90.3/03.2 82.2/06.7 68.6/009 00.5/251 34.6/130LVNet 78.0/13.7 78.4/03.8 89.9/05.8 82.1/07.8 70.4/007 00.2/251 35.3/129LGDL 78.3/14.6 80.4/03.8 90.2/06.0 83.0/08.1 08.2/088 00.0/286 04.1/187LMime 81.5/09.7 80.2/03.4 90.9/04.0 84.2/05.7 61.1/006 63.0/135 62.1/071(a) GT (b)LDSC (c)LVNet (d)LGDL (e)LMimeFigure 1: Example results from the WMH testing set.ACDC (Bernard et al., 2018) contains cine-MRI of the heart, providing annotationsat systole and diastole of the right-ventricle (RV), myocardium ( Myo ) and left-ventricle(LV) so that K= 3. The dataset contains 100 patients with different pathologies. We kept10 patients for validation and 20 for testing.WMH 1.0 (Kuijf et al., 2022) The full dataset of the White Matter Hyperintensities(WMH) MICCAI 2017 challenge contains annotations for the 60 scans of the training set (10are kept here for validation) and 110 scans of the testing set. Additionally, the annotationsalso roughly segment other pathologies present in the scans, so that K= 2. This is a veryimbalanced dataset, even more pronounced for the other pathologies class.4. Results, discussion and conclusionMetrics computed on the testing set are reported in Table 1 and Figure 1 shows a singleslice of WMH testing set. We can see that all dice variants perform similarly on the ACDCdataset, which is to be expected. However, on WMH, all dice variants struggle with the“Other pathologies” class, while the hyperintensities are more-or-less well segmented. Onthe contrary, the mime loss proved able to handle more gracefully empty patches, whichresulted in a better segmented “other pathologies” while maintaining performances on themain class.To summarize, we discussed the limitations of the dice loss and some of its variants,with respect to sub-patching. Notably, all variants struggle when a patch is empty, as itintroduce division by zero. On the contrary the Mime loss from (Kervadec and de Bruijne,2023) can easily be sub-patched without introducing extra instabilities. 
Its simple definitionalso enables easy tuning with respect to the datasets imbalance.3Kervadec de BruijneReferencesOlivier Bernard, Alain Lalande, Clement Zotti, Frederick Cervenansky, Xin Yang, Pheng-Ann Heng, Irem Cetin, Karim Lekadir, Oscar Camara, Miguel Angel Gonzalez Ballester,et al. Deep learning techniques for automatic mri cardiac multi-structures segmentationand diagnosis: is the problem solved? IEEE transactions on medical imaging , 37(11):2514–2525, 2018.William R Crum, Oscar Camara, and Derek LG Hill. Generalized overlap measures for eval-uation and validation in medical image analysis. IEEE transactions on medical imaging ,25(11):1451–1461, 2006.Hoel Kervadec and Marleen de Bruijne. On the dice loss gradient and the ways to mimicit. In arxiv preprint , 2023.Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXivpreprint arXiv:1412.6980 , 2014.Hugo Kuijf, Matthijs Biesbroek, Jeroen de Bresser, Rutger Heinen, Christopher Chen,Wiesje van der Flier, Barkhof, Max Viergever, and Geert Jan Biessels. Data of the WhiteMatter Hyperintensity (WMH) Segmentation Challenge, 2022. URL https://doi.org/10.34894/AECRSD .Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-net: Fully convolutionalneural networks for volumetric medical image segmentation. In 2016 fourth internationalconference on 3D vision (3DV) , pages 565–571. IEEE, 2016.Adam Paszke, Abhishek Chaurasia, Sangpil Kim, and Eugenio Culurciello. Enet: Adeep neural network architecture for real-time semantic segmentation. arXiv preprintarXiv:1606.02147 , 2016.Carole H Sudre, Wenqi Li, Tom Vercauteren, Sebastien Ourselin, and M Jorge Cardoso.Generalised dice overlap as a deep learning loss function for highly unbalanced segmen-tations. In Deep learning in medical image analysis and multimodal learning for clinicaldecision support , pages 240–248. Springer, 2017.4 |
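The empty-patch failure mode discussed in this paper can be seen directly in a few lines. The sketch below contrasts a per-patch soft-Dice term with the mime loss of Eq. (3) on a patch with no foreground; the single-class setting, the epsilon handling, and the values of a and b are illustrative assumptions rather than the authors' training code.

```python
# Minimal sketch: soft-Dice vs. the mime loss of Eq. (3) on an empty-foreground patch.
# Assumptions: one foreground class, flattened tensors, illustrative a and b;
# epsilon handling mirrors common practice, not any specific released implementation.
import torch

def soft_dice_loss(probs, target, eps=0.0):
    """Per-patch soft-Dice term for a single foreground class (flattened tensors)."""
    inter = (probs * target).sum()
    union = (probs + target).sum()
    return 1.0 - 2.0 * inter / (union + eps)       # union == 0 on empty patches -> 0/0

def mime_loss(probs, target, a=1.0, b=1.0):
    """Mime loss: omega_y^T s_theta with omega_y = -y*a + (1 - y)*b."""
    omega = -a * target + b * (1.0 - target)       # pre-computed "gradient map"
    return (omega * probs).sum()                   # well defined even when the patch is empty

if __name__ == "__main__":
    probs = torch.zeros(64 * 64)                   # network already predicts pure background
    target = torch.zeros(64 * 64)                  # empty-foreground patch
    print(soft_dice_loss(probs, target))           # tensor(nan): division by zero
    print(soft_dice_loss(probs, target, eps=1e-8)) # 1.0 even though the prediction is perfect
    print(mime_loss(probs, target))                # tensor(0.), no special casing needed
```

The epsilon-patched case illustrates why the paper calls the fix "less than ideal": a perfect all-background prediction still receives the maximum per-patch Dice loss, whereas the mime loss needs no special handling.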
a6--BnpcdB | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionUnsupervised Plaque Segmentation on Whole Slide ImagesJohann Engster1,2johann.christopher.engster@imte.fraunhofer.deNele Blum2blum@imt.uni-luebeck.deTobias Reinberger3,4,5tobias.reinberger@uni-luebeck.dePascal Stagge1,2pascal.stagge@imte.fraunhofer.deThorsten M. Buzug1,2thorsten.buzug@imte.fraunhofer.deJeanette Erdmann3,4,5jeanette.erdmann@uni-luebeck.deZouhair Aherrahrou3,4,5zouhair.aherrahrou@uni-luebeck.deMaik Stille1maik.stille@imte.fraunhofer.de1Fraunhofer IMTE, Research Institution for Individualized and Cell-Based Medical Engineering2University of L ̈ ubeck, Institute of Medical Engineering3University of L ̈ ubeck, Institute for Cardiogenetics4DZHK (German Centre for Cardiovascular Research), Partner Site Hamburg/Kiel/L ̈ ubeck5University Heart Centre L ̈ ubeckEditors: Under Review for MIDL 2023AbstractAtherosclerosis is a multifactorial disease in which deposits of fat form in the arteries. Theseplaques can cause ischemic heart disease or other follow-up diseases. To investigate etiol-ogy and possible treatment options, mice were used as models and histological whole slideimages (WSI) stained with Oil-Red-O (ORO) were obtained and analyzed. Currently, theplaque content is often estimated using a threshold-based segmentation technique, whichrequires a manual selection of reference points. To improve this process, an unsupervisedsegmentation technique is developed using the W-Net architecture. The network weightsare updated using two loss functions, the soft N-cut loss, and a reconstruction loss. Theupdate procedure of both U-networks and the weighting function in soft N-cut loss areadapted to the given task. Since no ground truth is available, the results were comparedwith a post-processed threshold segmentation. The evaluation showed that a linear de-caying pixel distance weighting achieves the highest score. The results indicate that anunsupervised learning procedure is able to correctly identify the plaque clusters.Keywords: Atherosclerosis, Unsupervised Segmentation, Plaque Segmentation, WSI1. IntroductionAtherosclerosis is the main underlying cause of cardiovascular disease (CVD), which is theleading cause of death in industrial nations. So-called plaques form in the arteries, which canlead to CVD-related diseases like ischemic heart disease, thrombosis, or stroke. To furtherstudy the initiation and development of the disease as well as fundamental medication andtreatment options, mouse studies are being conducted at the Institute for Cardiogenetics,University of L ̈ ubeck. A simple threshold-based method is currently used to identify plaqueclusters. However, this is limited in its applicability and depends on manually selectedparameter settings. Unsupervised segmentation for WSIs has been investigated multipletimes (Faust et al., 2020; Fouad et al., 2017), as human bias can be reduced in the trainingdata. An unsupervised segmentation approach proposed by Xia and Kulis in 2017 uses anarchitecture called W-Net. Both, encoder and decoder, consist of a full U-Net (Ronnebergeret al., 2015). Thus, the latent space has image dimension and provides the output segmen-tation mask, while the decoder output of the W-Net reconstructs the input image. TheW-Net training procedure was adapted and applied to the WSIs for plaque segmentation.©2023 CC-BY 4.0, J. Engster et al.Engster Blum Reinberger Stagge Buzug Erdmann Aherrahrou Stille2. 
Methods
For the training process, two loss functions are used to update the network weights. Both encoder and decoder are updated using the original $J_{\text{reconstruction}}$ loss proposed by Xia and Kulis. However, the proposed $J_{\text{soft N-cut}}$ encoder loss
$$J_{\text{soft N-cut}}(V, K) = K - \sum_{k=1}^{K} \frac{\sum_{u \in V} p(u = A_k) \sum_{v \in V} w(u, v)\, p(v = A_k)}{\sum_{u \in V} p(u = A_k) \sum_{t \in V} w(u, t)},$$
where $w$ measures the similarity between two pixels, $K$ is the number of classes, and $p(u = A_k)$ or $p(v = A_k)$ measures the probability of $u$ or $v$ belonging to $A_k$, was adapted to the given task. Since plaques may exist nearby, the exponential term is replaced by a linearly decaying term. In this way, the distance penalization of the default $w$ is reduced. The new $w$ is given by
$$w_{ij} = \exp\!\left(-\|F(i)-F(j)\|_2^2 / \sigma_I^2\right) \cdot \begin{cases} 1 & \text{if } d = 0 \\ 1/\|X(i)-X(j)\|_2^2 & \text{if } 0 < d < r \\ 0 & \text{otherwise,} \end{cases}$$
where $X(i)$ is a spatial location, $F(i)$ is the pixel value of node $i$, and $d$ is the pixel distance $\|X(i)-X(j)\|_2$. In addition, $\sigma_I$ and $\sigma_X$ are hyperparameters controlling the degree of penalization. The radius $r$ defines the range of the weighting. A second weighting, called intensity weighting, ignores the pixel distance entirely. Finally, a third weighting combines the default weighting with the intensity weighting,
$$w_{ij} = \begin{cases} \exp\!\left(-\|F(i)-F(j)\|_2^2 / \sigma_I^2\right) \cdot \exp\!\left(-\|X(i)-X(j)\|_2^2 / \sigma_X^2\right) & \text{if } d < r_1 \\ 1 & \text{if } r_1 < d < r_2 \\ 0 & \text{otherwise,} \end{cases}$$
using two radii $r_1$ and $r_2$. (An illustrative code sketch of these weightings and of the soft N-cut objective is given after this paper's references.) The networks can be updated in different ways by the loss functions. For instance, the authors of the original publication suggest a sequential update of encoder and decoder. However, a combined update of both networks proved to be more stable and was therefore used for further experiments.

3. Experimental Results
The networks were trained on data provided by the Institute for Cardiogenetics consisting of 1103 ORO-stained WSIs from 104 different mice. Data was split 60/40 for training and testing. The number of classes is set to 5 to reduce bias, as a minimum of 3 classes (white background, artery, and plaques) are expected inside the WSIs.
Table 1 shows the mean IoU for the different weightings compared to the threshold-based plaque segmentation for classes c = 0, 1, 2, sorted from highest to lowest. The classes c = 3, 4 were near-empty for all approaches and are therefore ignored. For all tested methods, multiple classes achieve a plaque mean IoU > 0. If, for example, the class corresponds to the artery, it can achieve a mean IoU > 0 as the plaques are inside it.

Table 1: Mean IoU for classes c = 0, 1, 2, compared to the threshold-based plaque mask.
Weighting   c = 0   c = 1   c = 2   Mean
default     0.26    0.15    0.00    0.14
linear      0.41    0.12    0.01    0.18
intensity   0.24    0.19    0.00    0.14
two radii   0.24    0.22    0.05    0.17

Figure 1 shows exemplary linear weighting masks for the classes c = 0, 1, 2. The red-appearing plaques are correctly segmented, with some artifacts still remaining.

Figure 1: Qualitative linear-weighting W-Net examples, shown as paired WSI and prediction panels; classes 0, 1, and 2 are indicated by color overlays.

4. Conclusion
The results show that the W-Net is able to detect the plaques as a stand-alone class. The tested linear and two-radii weightings achieve higher scores than the default weighting. The intensity weighting achieves the lowest consensus, while the linear decaying weighting achieves the highest. In the future, suitable post-processing, a detailed statistical validation, and analysis of the different weightings and the update order are required.

Acknowledgments
The authors would like to thank Annett Liebers, Maren Behrensen, and Petra Bruse for technical support. We are also grateful to the members of the Erdmann laboratories for feedback and discussions.
This work was supported by the German Federal Ministry of Education andResearch (BMBF) in the context of the German Centre for Cardiovascular Research (FKZ81Z0700108, FKZ 81X2700133).ReferencesK. Faust, A. Roohi, A. Leon, E. Leroux, A. Dent, A. Evans, T. Pugh, S. Kalimuthu,U. Djuric, and P. Diamandis. Unsupervised Resolution of Histomorphologic Heterogeneityin Renal Cell Carcinoma Using a Brain Tumor-Educated Neural Network. JCO ClinicalCancer Informatics , 4:811–821, 2020.S. Fouad, D. Randell, A. Galton, H. Mehanna, and G. Landini. Unsupervised MorphologicalSegmentation of Tissue Compartments in Histopathological Images. PLoS ONE , 12, 2017.O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for BiomedicalImage Segmentation. Medical Image Computing and Computer-Assisted Intervention –MICCAI 2015 , :234–241, 2015.X. Xia and B. Kulis. W-Net: A Deep Model for Fully Unsupervised Image Segmentation,2017.3 |
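As a companion to the weighting schemes and the soft N-cut loss described in the Methods above (and referenced there), the following NumPy sketch illustrates the default and linear-decay pixel weightings and the soft N-cut objective on a small patch. It is an illustrative reading of the equations, not the authors' implementation; the function names (`pixel_weights`, `soft_ncut_loss`), the hyperparameter values, and the dense-matrix formulation are our assumptions.

```python
import numpy as np

def pixel_weights(img, sigma_i=0.1, sigma_x=4.0, radius=5, mode="default"):
    """Pairwise soft N-cut weights for a small 2D grayscale patch.

    mode="default": Gaussian intensity term * Gaussian spatial term within `radius`.
    mode="linear":  Gaussian intensity term * inverse squared-distance term within
                    `radius` (the decaying variant described in the Methods)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)  # pixel positions X(i)
    feats = img.ravel().astype(float)                                  # pixel values F(i)

    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)      # squared pixel distances
    f2 = (feats[:, None] - feats[None, :]) ** 2                        # squared intensity differences
    intensity = np.exp(-f2 / sigma_i ** 2)
    within = d2 < radius ** 2

    if mode == "default":
        spatial = np.exp(-d2 / sigma_x ** 2) * within
    elif mode == "linear":
        spatial = np.where(d2 > 0, 1.0 / np.maximum(d2, 1e-8), 1.0) * within
    else:
        raise ValueError(mode)
    return intensity * spatial

def soft_ncut_loss(probs, weights):
    """J = K - sum_k assoc(A_k, A_k) / assoc(A_k, V) for soft assignments probs of shape (N, K)."""
    assoc_kk = np.einsum("ik,ij,jk->k", probs, weights, probs)  # sum_u p(u=A_k) sum_v w(u,v) p(v=A_k)
    assoc_kv = probs.T @ weights.sum(axis=1) + 1e-8             # sum_u p(u=A_k) sum_t w(u,t)
    return probs.shape[1] - np.sum(assoc_kk / assoc_kv)

# toy usage: an 8x8 patch and K=3 random soft class assignments per pixel
rng = np.random.default_rng(0)
patch = rng.random((8, 8))
W = pixel_weights(patch, mode="linear")
P = rng.dirichlet(np.ones(3), size=patch.size)
print(soft_ncut_loss(P, W))
```

In a full W-Net pipeline the weights would be computed per training crop (typically sparsely) and the loss applied to the encoder's per-pixel class probabilities.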
_kk8KI8MiRE | Medical Imaging with Deep Learning 2023 Short Paper TrackHuman-Guided Design to Explain Deep Learning-basedPneumothorax ClassifierHan Yuan1yuan.han@u.duke.nus.eduPeng-Tao Jiang2pt.jiang@zju.edu.cnGangming Zhao3gmzhao@connect.hku.hk1Duke-NUS Medical School, National University of Singapore2College of Computer Science and Technology, Zhejiang University3Faculty of Engineering, The University of Hong KongEditors: Accepted for MIDL 2023AbstractPneumothorax (PTX) is an acute thoracic disease that can be diagnosed with chest ra-diographs. While deep learning (DL) models have proven effective in identifying PTX onradiographs, they have difficulties in gaining the trust of radiologists if the decision-makinglogic is unclear. Therefore, various methods have been proposed to explain the PTX diag-nostic decision made by DL models. However, several studies indicate that the quality ofDL model explanation is suboptimal. This paper introduces a human-guided approach toenhance the existing explanation method. Based on the IoU and Dice between the expla-nation of model-focusing regions and the ground truth lesion areas, we achieved an increaseof 60.6% and 56.5% in Saliency Map, 69.0% and 66.7% in Grad-CAM, and 137.5% and123.9% in Integrated Gradients.Keywords: Pneumothorax Diagnosis, Convolutional Neural Network, Explainable Artifi-cial Intelligence, Human-in-the-loop1. IntroductionPneumothorax (PTX) is an acute thoracic disease caused by the abnormal air collection inthe pleural space (the space between the lungs and chest wall) (Imran and Eastman, 2017).Prompt treatment of PTX prevents it from progressing into a life-threatening emergency(Thian et al., 2021). In clinical practice, radiologists use chest radiographs (chest X-rays) tofacilitate PTX diagnosis, which requires a great deal of human effort. According to severalrecent studies, such a process can be automated using deep learning (DL) models, especiallythe convolutional neural network (CNN) (Thian et al., 2021). These CNN-based classifiershave displayed high-fidelity PTX classification capability while they use a large number ofinterconnected neurons whose relationships are highly complex and difficult to comprehend.Thus, these diagnostic decisions have difficulties in gaining the trust of radiologists (Rudin,2019; Xie et al., 2022).To solve this problem, researchers incorporated various explanation methods for chestradiograph analysis to highlight the areas on chest radiographs contributed most to thedisease diagnosis (Van der Velden et al., 2022). However, a recent benchmarking studypointed out that a sophisticated CNN achieved an AUROC of 0.993 in the PTX classificationwhile its focus area (model explanation) generated by Integrated Gradients (Sundararajanet al., 2017) only overlapped with 7% of the ground truth lesion area (Saporta et al., 2022).©2023 H. Yuan, P.-T. Jiang & G. Zhao.Yuan Jiang ZhaoTherefore, there is a demand to develop improved model explanation methods (Saportaet al., 2022).The inclusion of prior expert knowledge in the model explanation process is one promis-ing direction for advancement. For example, PTX frequently occurs in the pleural spacebetween the lungs and chest wall (Imran and Eastman, 2017). An intuitive hypothesis isthat the location information of the pleural space could contribute to the explanation ofPTX classifiers. 
In this study, we propose a heuristic method that extracts the PTX highoccurrence area (the pleural space) from a few lesion-delineated PTX cases and uses thatinformation to guide model explanations.2. Methods and ExperimentsIn image classification, the explanation paradigm calculates each pixel’s importance towardsthe model prediction and outlines the sub-regions (focus area) consisting of the most im-portant pixels (Zhou et al., 2016; Van der Velden et al., 2022). The model is consideredtrustworthy if its focus area is precisely matched with the area on which human expertsmake decisions. In this study, we implemented three popular techniques of Saliency Map(Simonyan et al., 2013), Grad-CAM (Selvaraju et al., 2017), and Integrated Gradients (Sun-dararajan et al., 2017) to generate the model explanation (focus area) and illustrate theefficacy of our proposed method as a plug-and-play module.Based on the prior clinical knowledge (Imran and Eastman, 2017), we propose incorpo-rating the disease occurrence area into the existing explanation methods. Specifically, usingone canonical PTX delineation selected by human experts as the starting point, the PTXtemplate is generated by horizontal flipping, overlap, and dilation. Horizontal flipping andoverlap aim to spotlight both left and right pleural spaces while the dilation step enlargesthe template to cover the broader pleural space. The template is then laid over the originalmodel explanation to focus on the pleural space. Figure 1 compares the baseline and ourenhanced explanations. Besides, affine transformation (Liu et al., 2019) is implemented toeliminate the deformation such as the improper distance, angle, and displacement in theoriginal radiographs, and further upgrade the effectiveness of our method.Figure 1: Comparison of the baseline explanation and our proposed method2Human-Guided Design to Explain Deep Learning ModelsWe developed the PTX classifier with a light-weighted backbone of VGG-11 (Simonyanand Zisserman, 2015) and modified its output layer into 2 to comply with the binary diag-nosis. Stochastic Gradient Descent (Rumelhart et al., 1986) was used as the optimizer, withan initial learning rate of 1e-3 and a momentum of 0.9. The learning rate was scheduledwith a reduction factor of 0.5 if no improvement was observed for 3 epochs. Model trainingwas conducted in batches of 16 images and the loss was measured by weighted cross-entropy.The training epoch was set as 50 along with early stopping evaluated on the validation set.After the completion of model training, the classification performance was evaluated byAUROC, AUPRC, accuracy, sensitivity, and specificity on the unseen test dataset.After the binary PTX classification training, three CNN explanation methods wereutilized. With our method and affine transformation as plug-and-play modules on theexisting methods, we had a total of 12 explanation methods. The direct production of thesemethods was the pixel importance, and the focus area is extracted as the final explanation.The model explanation of the focus area was evaluated through the IoU and Dice scorecoefficient (Dice) on the ground truth lesion area of PTX-positive samples in the test dataset(Liu et al., 2019). The 95% CIs was computed by bootstrapping (Efron, 1987).3. 
ResultsBased on the SIIM-ACR Pneumothorax Segmentation Challenge dataset1, we analyzed12,047 chest radiographs (containing 2,668 PTX-positive cases) in terms of the PTX di-agnoses and used 60/20/20 random splitting to generate the training, validation, and testset. All radiographs contained binary diagnostic labels for the PTX classifier training andtesting. Ten PTX samples in the validation set contained pixel-level lesion annotations forthe focus area generation and all PTX samples in the test set included pixel-level lesionannotations for the focus area evaluation.Figure 2: Explanation result comparisonIn PTX binary classification, the clas-sifier trained on affine-transformed datasetsachieved marginally better results of an AU-ROC of 86.4% ( ±1.7), an AUPRC of 68.0%(±4.0), an accuracy of 80.0% ( ±1.7), a sen-sitivity of 76.8% ( ±3.4), and a specificity of80.9% ( ±2.0). With well-trained classifiersand original explanation methods, Figure 2summarizes an ablation study to verify theexplanation improvements. Adding eitheraffine transformation or our method im-proved all three explanation methods, whilethe use of both resulted in more prominentimprovements in terms of IoU and Dice:60.6% and 56.5% for Saliency Map, 69.0%and 66.7% for Grad-CAM, and 137.5% and123.9% for Integrated Gradients.1. https://www.kaggle.com/c/siim-acr-pneumothorax-segmentationhttps://www.kaggle.com/datasets/vbookshelf/pneumothorax-chest-xray-images-and-masks3Yuan Jiang ZhaoReferencesBradley Efron. Better bootstrap confidence intervals. Journal of the American StatisticalAssociation , 1987.Jonathan B Imran and Alexander L Eastman. Pneumothorax. Journal of the AmericanMedical Association , 2017.Jingyu Liu, Gangming Zhao, Yu Fei, et al. Align, attend and locate: Chest x-ray diagnosisvia contrast induced attention network with limited supervision. In Proceedings of theIEEE International Conference on Computer Vision , 2019.Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisionsand use interpretable models instead. Nature Machine Intelligence , 2019.David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representationsby back-propagating errors. Nature , 1986.Adriel Saporta, Xiaotong Gui, Ashwin Agrawal, et al. Benchmarking saliency methods forchest x-ray interpretation. Nature Machine Intelligence , 2022.Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, et al. Grad-cam: Visual expla-nations from deep networks via gradient-based localization. In Proceedings of the IEEEInternational Conference on Computer Vision , 2017.Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scaleimage recognition. In Proceedings of the International Conference on Learning Represen-tations , 2015.Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutionalnetworks: Visualising image classification models and saliency maps. arXiv preprint:1312.6034 , 2013.Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks.InProceedings of the International Conference on Machine Learning , 2017.Yee Liang Thian, Dianwen Ng, James Hallinan, et al. Deep learning systems for pneumoth-orax detection on chest radiographs: A multicenter external validation study. Radiology:Artificial Intelligence , 2021.Bas HM Van der Velden, Hugo J Kuijf, Kenneth GA Gilhuijs, et al. Explainable artificialintelligence (xai) in deep learning-based medical image analysis. Medical Image Analysis ,2022.Feng Xie, Han Yuan, Yilin Ning, et al. 
Deep learning for temporal data representation inelectronic health records: A systematic review of challenges and methodologies. Journalof Biomedical Informatics , 2022.Bolei Zhou, Aditya Khosla, Agata Lapedriza, et al. Learning deep features for discrimina-tive localization. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition , 2016.4 |
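To make the focus-area evaluation described above concrete, the NumPy sketch below thresholds a saliency map into a binary focus area, optionally restricts it with a pleural-space template, and scores it against a ground-truth lesion mask with IoU and Dice. The helper names, the top-fraction thresholding rule, and the toy masks are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def binarize_top_fraction(saliency, fraction=0.05):
    """Keep the top `fraction` most important pixels as the model's focus area."""
    threshold = np.quantile(saliency, 1.0 - fraction)
    return saliency >= threshold

def iou_and_dice(pred, target):
    """IoU and Dice between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 0.0
    dice = 2.0 * inter / total if total else 0.0
    return iou, dice

# toy example: random saliency, a synthetic lesion, and a crude "pleural space" template
rng = np.random.default_rng(0)
saliency = rng.random((256, 256))
lesion = np.zeros((256, 256), dtype=bool)
lesion[40:90, 200:240] = True
template = np.zeros_like(lesion)
template[:, 180:] = True                      # band where the lesion is expected to occur

focus = binarize_top_fraction(saliency)
focus_guided = focus & template               # template laid over the original explanation
print(iou_and_dice(focus, lesion), iou_and_dice(focus_guided, lesion))
```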
2cL0MFcxksh | Medical Imaging with Deep Learning – nnn 2023 Short Paper – MIDL 2023A Comparative Study of Unsupervised Adversarial DomainAdaptation Strategies in Multiple-instance LearningFrameworks for Digital PathologyJavier Garcia-Baroja1javier.g.baroja@gmail.comSamaneh Abbasi-Sureshjani2samaneh.abbasi@roche.comNazim Shaikh2nazim.shaikh@roche.comKonstanty Korski2konstanty.korski@roche.com1Swiss Federal Institute of Technology,R ̈ amistrasse 101, 8092 Z ̈ urich2F. Hoffmann-La Roche AG, Grenzacherstrasse 124, 4070 Basel, SwitzerlandEditors: Accepted for publication at MIDL 2023AbstractPerformance of state-of-the-art deep learning methods is often impacted when evaluatedon data coming from unseen acquisition settings, hindering their approval by the regula-tory agencies and incorporation to the clinic. In recent years, several techniques have beenproposed for improving the generalizability of models by using the target data and theircorresponding ground truths. Some of those approaches have been adopted in histopathol-ogy, however they either focus on pixel-level predictions or simple tile level classificationtasks with or without target labels. In this work, we investigate adversarial strategies inweakly supervised learning frameworks in digital pathology domain without access to thetarget labels, thereby strengthening the generalizability to unlabeled target domains. Weevaluate several strategies on Camelyon dataset for metastatic tumor detection tasks andshow that some methods can improve the average F1-score over 10% for the target domain.1. IntroductionDespite the popularity of computer aided diagnosis tools for Digital Pathology (DP),widespread use of these algorithms is hampered by the inherent variation between imagesof diverse origin (Howard et al., 2021) (in staining, thickness, patient demographics, etc.)known as domain shift . Therefore, a strategy that enables us to build more generalizablemodels is desired. Among different methods, Marini et al. (2022) propose a Domain Ad-versarial Neural Network (DANN) (Ganin et al., 2015) to tackle stain heterogeneity withan understanding of domain rooted in Whole Slide Image (WSI) coloring. The ConditionalDomain Adversarial Network (CDAN) proposed by Long et al. (2017) also facilitates do-main alignment by utilizing the discriminative information offered by main-task classifierpredictions.This paper focuses on Unsupervised Domain Adaptation (UDA) and addresses domainshift caused by scanner variations in weakly supervised metastatic tumor detection. Weexplore DANN, CDAN, and the impact of changing the position of the domain discriminatorin attention MIL and TransMIL (Shao et al., 2021) networks.©2023 CC-BY 4.0, J. Garcia-Baroja, S. Abbasi-Sureshjani, N. Shaikh & K. Korski.Garcia-Baroja Abbasi-Sureshjani Shaikh Korski2. Methods and ExperimentationsWe propose to adapt MIL models by combining the discriminators ( G) in DANN and CDANat two locations: 1) after a shallow encoder ( loci), where Gwould receive instance-levelsamples. In this way, the feature alignment between domains will be provided by a shallowencoder that maps the embeddings from the frozen encoder into an overlapping latent space;2)Gis positioned after the embedding aggregation step ( locb), that is, after the attentionmechanism in attention-MIL or mean pooling of patch tokens after the last transformerlayer in TransMIL. This ensures domain alignment on the aggregated instances that areforwarded to the final slide-level classifier. 
The adapted MIL pipelines for DANN and CDAN are depicted in Figure 1(a) and 1(b), illustrating each integration location.

Figure 1: Overview of the two UDA approaches. (a) shows loc_b and (b) shows loc_i.

2.1. Experimental Setup
The experiments used a combination of the publicly available Camelyon16 and Camelyon17 datasets (Litjens et al., 2018), which contain 1399 WSIs of lymph nodes (metastatic and healthy) stained with Hematoxylin and Eosin, from three different scanners and five hospitals. Scanner 1 (S1) data (from three different medical centers digitized by the same scanner) was used as the source ($N_s = 544$), while Scanner 2 (S2) data (from two hospitals) and Scanner 3 (S3) data (from one facility) comprised the target dataset ($N_{t2} = 253$ and $N_{t3} = 100$), on which the model is to improve its generalizability. The source dataset was split into 5 non-overlapping subsets (each 20%) stratified by medical center and tumor label, for 5-fold Cross-Validation (CV). The UDA training had S2 as target and was evaluated on S3.
10,000 tiles at 40x magnification were extracted from each WSI. Image patches were stain normalized following Tellez et al. (2019) to account for stain variation from multiple acquisition centers. A ResNet-50 (He et al., 2016) pre-trained on DP images via the BYOL self-supervised learning strategy (Grill et al., 2020; Abbasi-Sureshjani et al., 2021) was used to extract intermediate features, and the backbone weights remained frozen for computational efficiency. The attention network in attention MIL had 5 fully connected layers, followed by batch normalization and dropout (p = 0.5). The transformer used the Nystrom approximation (Xiong et al., 2021) for Self-Attention (SA) with 3 layers and 8 heads in each multi-head SA block. The main-task classifier had 2 fully connected layers. The discriminator had 3 layers in CDAN and 2 in DANN. ReLU is used as the activation function. The Adam optimizer with a learning rate of $10^{-3}$ was used. The adversarial contribution to the updates of the network parameters preceding the domain discriminator $G$ was defined as $\lambda = \frac{2}{1 + \exp(-\gamma p)} - 1$, with $p \in (0, 1]$ the relative progress of the training. (An illustrative sketch of this gradient-reversal ramp-up is given after the references.)
The UDA strategies were compared with three baselines: source only (S1), target only (S2), and balanced data (combining source and target data, with new stratification). Target labels were only used to settle the baselines and to evaluate the adapted models (never for UDA). Model selection relied on the macro-average validation F1-score. The performance on S3 was obtained using the model with the closest average F1-score to S2 in the CV experiments.

3. Results and Conclusion
The results in Table 1 show that UDA improves the performance on the target, indicating higher retention of domain-agnostic features. The F1-score gap for S2 is reduced by at least 10%, while the models still generalize to S3. The more severe gap for S2 than S3, beyond persisting staining differences after stain normalization, could be attributed to the slide thickness, as explained by our pathologist. Moreover, the attention heatmaps showed the effectiveness of UDA in reducing bias towards light coloring that may be irrelevant to the network outcome.
No UDA method clearly outperformed the rest, possibly due to the limited bandwidth for domain alignment with a frozen backbone. More complex methods with additional hyperparameters may be required. UDA led to a slight decline in source domain performance that can be addressed by continual learning methods such as Bándi et al.
(2022).AcknowledgmentsThe authors thank the Roche Personalized Healthcare Digital Pathology Program for spon-soring project resourcing. The authors declare the following competing interests: S.A., N.S.and K.K. are Roche employees and J.G. was employed by Roche at the time of this work.Table 1: Results of different UDA strategies on CAMELYON dataset, 5-fold CVaMethod S1 S1→S2bS3MIL Experiment avg. F1 avg. F1 F1 S1−F1S2(↓)avg. F1Balanced data 85.2(1.0) 80.4(3.2) - 87.8(2.2)Source only 86.9(3.6) 68.4(2.1) 19 .2(3.9) 83.0DANN @ loci 83.6(2.6) 74.0(6.3) 9 .6(2.0) 82.6CDAN @ locb 86.2(6.2) 80.2(3.0) 6.0(3.2) 85.2DANN @ locb 87.6(4.3) 81.4(3.2) 6.2(1.1) 86.0CDAN @ loci 83.0(3.2) 79.4(3.0) 3 .6(1.2) 83.4Attention MILTarget only (S2) - - 88 .4(6.1) -Balanced data 88.5 85.1 - 92.9Source only 86.3(4.2) 67.6(7.8) 18 .7(3.6) 84.0DANN @ locb 85.6(3.6) 79.3(2.2) 6 .3(1.5) 85.8CDAN @ locb 86.4(2.1) 79.0(2.5) 7 .0(0.4) 85.0TransMILTarget only (S2) - 90.5(5.3) - -aPercentages with standard deviation. Best in bold, second underlined;bArrow for adaptation direction.3Garcia-Baroja Abbasi-Sureshjani Shaikh KorskiReferencesSamaneh Abbasi-Sureshjani, Anıl Y ̈ uce, Simon Sch ̈ onenberger, Maris Skujevskis, UweSchalles, Fabien Gaire, and Konstanty Korski. Molecular subtype prediction for breastcancer using h&e specialized backbone. In MICCAI Workshop on Computational Pathol-ogy, pages 1–9. PMLR, 2021.P ́ eter B ́ andi, Maschenka C. A. Balkenhol, Marcory van Dijk, Bram van Ginneken, Jeroenvan der Laak, and Geert J. S. Litjens. Domain adaptation strategies for cancer-independent detection of lymph node metastases. ArXiv , abs/2207.06193, 2022.Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle,Fran ̧ cois Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial trainingof neural networks. Advances in Computer Vision and Pattern Recognition , 17:189–209,5 2015. ISSN 21916594. doi: 10.48550/arxiv.1505.07818.Jean-Bastien Grill, Florian Strub, Florent Altch ́ e, Corentin Tallec, Pierre H. Richemond,Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Moham-mad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, R ́ emi Munos, and Michal Valko.Bootstrap your own latent: A new approach to self-supervised learning, 2020.Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning forimage recognition. In Proceedings of the IEEE conference on computer vision and patternrecognition , pages 770–778, 2016.Frederick M. Howard, James Dolezal, Sara Kochanny, Jefree Schulte, Heather Chen, LaraHeij, Dezheng Huo, Rita Nanda, Olufunmilayo I. Olopade, Jakob N. Kather, Nicole Cipri-ani, Robert L. Grossman, and Alexander T. Pearson. The impact of site-specific digitalhistology signatures on deep learning model accuracy and bias. Nature Communications ,12(1), July 2021. doi: 10.1038/s41467-021-24698-1.Geert Litjens, Peter Bandi, Babak Ehteshami Bejnordi, Oscar Geessink, Maschenka Balken-hol, Peter Bult, Altuna Halilovic, Meyke Hermsen, Rob van de Loo, Rob Vogels, Quirine FManson, Nikolas Stathonikos, Alexi Baidoshvili, Paul van Diest, Carla Wauters, Marcoryvan Dijk, and Jeroen van der Laak. 1399 h&e-stained sentinel lymph node sections ofbreast cancer patients: the CAMELYON dataset. GigaScience , 7(6), May 2018. doi:10.1093/gigascience/giy065.Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. Conditional adver-sarial domain adaptation, 2017.Nicco` ı O Marini, Manfredo Atzori, Sebastian Ot ́ alora, Stephane Marchand-Maillet, andHenning M ̈ uller. 
He-adversarial network: a convolutional neural network to learn stain-invariant features through hematoxylin eosin regression. 1 2022. doi: 10.48550/arxiv.2201.06329.Zhuchen Shao, Hao Bian, Yang Chen, Yifeng Wang, Jian Zhang, Xiangyang Ji, and yong-bing zhang. TransMIL: Transformer based correlated multiple instance learning for wholeslide image classification. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and4Comparative Study of Adversarial UDA Strategies in MIL for DPJ. Wortman Vaughan, editors, Advances in Neural Information Processing Systems , vol-ume 34, pages 2136–2147. Curran Associates, Inc., 2021.David Tellez, Geert Litjens, P ́ eter B ́ andi, Wouter Bulten, John-Melle Bokhorst, FrancescoCiompi, and Jeroen van der Laak. Quantifying the effects of data augmentation andstain color normalization in convolutional neural networks for computational pathology.Medical Image Analysis , 58:101544, 2019. ISSN 1361-8415. doi: https://doi.org/10.1016/j.media.2019.101544.Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, YinLi, and Vikas Singh. Nystr ̈ omformer: A nyst ̈ om-based algorithm for approximating self-attention. Proc. Conf. AAAI Artif. Intell. , 35(16):14138–14148, May 2021.5 |
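As an illustration of the adversarial branch used above, the following PyTorch sketch shows the gradient reversal layer commonly used to train a DANN-style domain discriminator on bag-level (loc_b) embeddings, together with the ramp-up weight lambda = 2/(1 + exp(-gamma p)) - 1 from the experimental setup. The class and variable names (GradReverse, grl_lambda, the toy heads) are our assumptions and this is not the authors' exact code; CDAN additionally conditions the discriminator on the main-task classifier's predictions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # no gradient for lam

def grl_lambda(progress, gamma=10.0):
    """Adversarial ramp-up lambda = 2 / (1 + exp(-gamma * p)) - 1, with p in (0, 1]."""
    return 2.0 / (1.0 + math.exp(-gamma * progress)) - 1.0

# minimal bag-level (loc_b) example: aggregated bag embeddings feed a tumor head and,
# through the reversal layer, a domain discriminator
bag_embedding = torch.randn(4, 512, requires_grad=True)   # 4 bags of frozen-backbone features
tumor_head = nn.Linear(512, 2)
domain_head = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2))

lam = grl_lambda(progress=0.3)
tumor_logits = tumor_head(bag_embedding)                            # normal gradient path
domain_logits = domain_head(GradReverse.apply(bag_embedding, lam))  # reversed gradient path

loss = F.cross_entropy(tumor_logits, torch.tensor([0, 1, 0, 1])) \
     + F.cross_entropy(domain_logits, torch.tensor([0, 0, 1, 1]))   # tumor + scanner labels
loss.backward()
print(lam, bag_embedding.grad.shape)
```

At loc_i the same reversal layer would instead sit on the instance-level embeddings produced by the shallow encoder.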
NiUSj5tDKf | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionExpansion Microscopy Imaging Isotropic Restoration byUnsupervised Deep LearningMeng-Yun Wu∗1wmy4fish@gmail.com, r11458006@ntu.edu.twDa-Yu Huang∗1r11458005@ntu.edu.twYa-Ding Liu∗2sf164461@gmail.comLi-An Chu†2lachu@mx.nthu.edu.twGary. Han Chang†1garyhanchang@ntu.edu.tw1Institute of Medical Device and Imaging, National Taiwan University College of Medicine, Taipei100, Taiwan2Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua Univer-sity, Hsinchu 300, TaiwanEditors: Under Review for MIDL 2023AbstractThe development of fluorescence light sheets and expansion microscopy (ExM) in recentyears enables the visualization of detailed neural structures to help unlock the secrets ofneural functioning. Deep learning techniques have then become essential tools to processthe ever-increasing amount of high-quality and high-resolution images. In this study, wedeveloped a single-scale deconvolution model for extracting multiscale deconvoluted re-sponse (MDR) from the volumes of microscopy images of neurons and generative modelsto translate images between the lateral and axial views. The results demonstrated thatdeep learning as a promising tool in approving image volume quality and comprehensionof structural information of light sheet microscopy.Figure 1: proposed workflowKeywords: domain adaptation, GAN, unsupervised deep learning, expansion microscopy1. IntroductionThe development of ExM in recent years has revolutionized the field of biological imaging byallowing visualization of biological structures in sub-millimeter scales. This breakthroughtechnology has made it possible to observe detailed morphologies of synaptic connectionsin neurons. However, reconstructing three-dimensional (3D) neuronal morphologies from∗Contributed equally†Corresponding author©2023 CC-BY 4.0, M.-Y. Wu, D.-Y. Huang, Y.-D. Liu, L.-A. Chu & G.H. Chang.Wu Huang Liu Chu Changlight-sheet microscopy imaging with ExM samples faces two main challenges. One is light-sheet microscopy imaging does not result in a 3D imaging volume with isotropic imagingresolution due to optical sectioning. The other is uneven contrast conditions, due to in-homogeneities in fluorescence labeling and distribution of neurons, often leads to visualambiguities and poor definition between neuron morphologies and the background. There-fore, there has been tremendous interest in developing deep learning methods to addressingthese challenges.In the study, a single-scale deconvolution model was pre-trained for extracting MDRfrom the volumes of real microscopy images of neurons. Unsupervised domain-adaptationwas subsequently applied by generative adversarial network (GAN) models to translateMDR as well as the original images between the lateral and axial views(Park et al., 2021).This results in an imaging volume with isotropic resolution and imaging quality.2. Methods(1) Extract Multi-scale Features with Backbone Deconvolution ModelWe constructed a backbone single-scale deconvolution model using idealized neuronstructures generated by applying a theoretical PSF, Poisson and Gaussian noise onthe synthetic ground truth of neuron structures(Weigert et al., 2018). 
Comparedto the structures of real neurons, the idealized structures lacked the complexities inneuron morphologies and filopodia structures but were easy to be generated in largequantities and able to provide the relationship between the neuron-like structures andthe real microscopy images. This supervised model was trained to learn the mappingfrom synthetic single-scale microscopy images to synthetic ground truth images.We then approximated the real morphologies of neurons by decomposing the origi-nal image into a combination of backbone deconvolution model outputs at 4 physicalscales. The multiscale ground-truth of neuron morphologies was subsequently approx-imated by fusing knowledge at different physical scales using a Fourier Transform. Themultiscale deconvolution response (MDR) was defined as the combined features fil-tered by a learnable filter and converted by an Inverse Fourier Transform. The MDRsat each lateral view slice were subsequently concatenated back to form a 3D volumeand resliced on the axial direction to create the MDRs at axial view.(2) Isotropic Image RestorationTo create an imaging volume with isotropic image resolution and quality, we enhancedthe images as well as MDRs from the axial view by an unsupervised domain adap-tation model with CycleGAN(Zhu et al., 2017) architecture. The model containedtwo generators to translate the data from lateral view to the axial view, and two dis-criminators to ensure the visual similarity between the enhanced and the referencedimage. The original image and the MDRs from either lateral and axial view wereconcatenated together as the model input and optimized simultaneously to improvetheir spatial resolution and imaging quality. This enabled our model to utilize thecapability of multiscale features to capture neuronal spatial structures across variousphysical levels.To create isotropic imaging volumes, the 2D axial slices of the imaging volume were2Microscopy Imaging Isotropic Restoration by Unsupervised Learningenhanced independently by the GAN model and concatenated together to from theraw 3D volumes. However, due to the absence of consideration of the adjacent 2Dslices during model inference, artifacts due to discontinuities were present in the rawimaging volume as observed from the concatenated lateral view. To overcome thisissue, we resliced the original imaging volume along arbitrary directions on the lateralplane and used the resliced slices as input to the model during the inference process.The finalized imaging volume was obtained by averaging all the imaging volumesgenerated along these different directions.3. ResultsAs Figure 2 shown, the generated axial view images had their imaging resolution, continuityand quality significantly enhanced to visually resemble the ones from the lateral view. Ourmodel also reduced the influence of imaging artifacts caused by optical slicing on the profilesof neurons. Figure 3 demonstrated the discontinuities between the generated 2D slices weremitigated.Figure 2: The results after image enhancement in the axial viewFigure 3: L: Discontinuities were present in the raw imaging volume from the lateral view.R: The imaging artifacts were absent from the finalized imaging volume.4. ConclusionThe results of the study show the ability of our proposed model to faithfully enhancethe imaging volume of ExM image isotropically, and demonstrated great potential in recon-structing detailed morphologies of neurons with great computational efficiency and requiredlimited amount of manual annotation. 
These findings are particularly significant in light ofthe ever-increasing demand for high-resolution imaging techniques that enable researchersto study the complex biological structures of the whole brain and other tissues, and thepresent model showed broad potential for maximizing the full potential of high-ratio expan-sion microscopy.3Wu Huang Liu Chu ChangReferencesHyoungjun Park, Myeongsu Na, Bumju Kim, Soohyun Park, Ki Hean Kim, Sunghoe Chang,and Jong Chul Ye. Axial-to-lateral super-resolution for 3d fluorescence microscopy usingunsupervised deep learning. CoRR , abs/2104.09435, 2021. URL https://arxiv.org/abs/2104.09435 .Martin Weigert, Uwe Schmidt, Tobias Boothe, Andreas M ̈ uller, Alexandr Dibrov, AkankshaJain, Benjamin Wilhelm, Deborah Schmidt, Coleman Broaddus, Siˆ an Culley, Mauri-cio Rocha-Martins, Fabi ́ an Segovia-Miranda, Caren Norden, Ricardo Henriques, MarinoZerial, Michele Solimena, Jochen Rink, Pavel Tomancak, Loic Royer, Florian Jug,and Eugene W. Myers. Content-aware image restoration: Pushing the limits of flu-orescence microscopy. Nature Methods , 2018. doi: 10.1038/s41592-018-0216-7. URLhttps://doi.org/10.1038/s41592-018-0216-7 .Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-imagetranslation using cycle-consistent adversarial networks. CoRR , abs/1703.10593, 2017.URL http://arxiv.org/abs/1703.10593 .4 |
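Referring back to step (1) of the Methods above, the sketch below illustrates how idealized training pairs of this kind can be simulated: a clean synthetic structure is blurred with a PSF that is broader along the axial direction and corrupted with Poisson and Gaussian noise. The Gaussian stand-in for the theoretical PSF, the noise levels, and the helper name are our assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_lightsheet_pair(clean, psf_sigma=(4.0, 1.0, 1.0), photons=200.0, read_noise=0.01):
    """Create a (degraded, ground-truth) training pair from a clean synthetic volume.

    psf_sigma is (z, y, x): a larger axial sigma mimics anisotropic optical sectioning;
    Poisson noise models photon counting and Gaussian noise models the detector."""
    blurred = gaussian_filter(clean, sigma=psf_sigma)        # Gaussian stand-in for a theoretical PSF
    rng = np.random.default_rng(0)
    noisy = rng.poisson(blurred * photons) / photons         # shot noise
    noisy = noisy + rng.normal(0.0, read_noise, size=noisy.shape)
    return noisy.astype(np.float32), clean.astype(np.float32)

# toy neuron-like structure: a bright tube running through a small volume
vol = np.zeros((32, 64, 64), dtype=np.float32)
vol[14:18, :, 30:34] = 1.0
degraded, target = simulate_lightsheet_pair(vol)
print(degraded.shape, float(degraded.min()), float(degraded.max()))
```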
B8e-iS9j43 | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionAutomatic Contrast Phase Detection on AbdominalComputed Tomography using Clinically-Inspired TechniquesEduardo Pontes Reis1,2edreis@stanford.edu1Stanford University, CA, USA2Hospital Israelita Albert Einstein, Sao Paulo, BrazilLouis Blankemeier1louis.blankemeier@stanford.eduJuan Manuel Zambrano Chaves1jmz@stanford.eduMalte Engmann Kjeldskov Jensen1mekj@stanford.eduSally Yao1yaohanqi@stanford.eduCesar Augusto Madid Truyts2cesar.truyts@einstein.brMarc Willis1marc.willis@stanford.eduRobert Downey Boutin1boutin@stanford.eduEdson Amaro Jr2edson.junior@einstein.brAkshay Chaudhari1akshaysc@stanford.eduEditors: Under Review for MIDL 2023AbstractAccurately determining contrast phase in an abdominal computed tomography (CT) seriesis an important step prior to deploying downstream artificial intelligence methods trainedto operate on the specific series. Inspired by how radiologists assess contrast phase status,this paper presents a simple approach to automatically detect the contrast phase. Thismethod combines features extracted from the segmentation of key anatomical structureswith a gradient boosting classifier for this task. The algorithm demonstrates high accuracyin categorizing the images into non-contrast (96.6% F1 score), arterial (78.9% F1 score),venous (92.2% F1 score), and delayed phases (95.0% F1 score), making it a valuable toolfor enhancing AI applicability in medical imaging.Keywords: Abdominal CT Scan, Contrast Phase, Organ Segmentation, Radiology, Med-ical Imaging, Machine Learning, Artificial Intelligence1. IntroductionAbdominal computed tomography (CT) scans are commonly utilized to assess internalorgans and structures. CT exams can be performed by scanning subjects in different con-ditions (phases) related to the use of intravascular contrast agents, which enhance theradiodensity of blood vessels and vascularized internal organs. Accurate determination ofphase is crucial, especially as the quantification of biomarkers in the rapidly emerging fieldof opportunistic imaging relies on it. (Pickhardt et al., 2013; Zambrano Chaves et al., 2021)This ensures that the appropriate algorithm runs on the correct series of images and thatquantitative metrics are calibrated for phase status. (Boutin et al., 2016)To the best of our knowledge, current methods for contrast phase detection in abdominalCT scans are not available through open-source platforms. (Dao et al., 2022; Ye et al., 2022)©2023 CC-BY 4.0, E.P. Reis et al.Reis Blankemeier Chaves Jensen Yao Truyts Boutin Jr ChaudhariIn this scenario, projects that analyze CT images are required to manually curate extensivedatasets for classifier training, which can be both costly and time-consuming.In order to address these limitations, we present the Contrast Phase algorithm, whichextracts radiodensity measures from relevant organs and applies a gradient boosting classi-fier for accurate contrast phase classification. The algorithm identifies four contrast phases:non-contrast, arterial, venous, and delayed. This pipeline is design to read most commonimage formats (DICOM and NIFTI), segment relevant organs, and classify contrast phases.It is made publicly available at https://github.com/StanfordMIMI/Comp2Comp2. 
MethodsAll data aggregation and experiments were performed under Institutional Review Boardapproval using de-identified clinical data.The data acquisition and labeling process for this study involved obtaining 739 abdom-inal CT exams from 238 unique patients. These CT exams contained 3252 series. Sagittaland coronal reformatted series, localizer series, and axial series that failed during organsegmentation were excluded. 1545 remaining axial series were split into the training set,containing 1183 examples, and the test set, containing 362 examples. We ensured that eachpatient’s data was exclusively allocated to either the training or the test set.The series were then labeled as one of 4 classes: non-contrast ,arterial ,venousand delayed . To facilitate the labeling process of contrast phases in the CT scans, theSeries Description DICOM tag was analyzed using regular expression (regex) rules to detectspecific keywords describing the phases along with their synonyms. This initial labelingprocess served as a preliminary categorization of the scans. Subsequently, a board-certifiedradiologist (5 years of experience) reviewed two slices from each scan, on the anatomicallevels of the right adrenal gland and the left kidney, where the structures to evaluate contrastphase are most discernible. Any inaccurately labeled scans were corrected.The Contrast Phase algorithm consists of three main stages: segmentation of organs,feature extraction, and classification.Segmentation of Organs: The first stage involves the segmentation of key anatomi-cal structures, including the aorta, inferior vena cava, portal vein, renal parenchyma, andrenal pelvis, using Total Segmentator, a deep learning-based open-source segmentationtool (Wasserthal et al., 2022). This approach has been shown to provide accurate andprecise organ segmentation, which is crucial for the subsequent feature extraction of theseregions.Feature Extraction: After segmentation, we computed quantitative 48 low-level ra-diomics features that characterize the radiointensity statistics, such as maximum value,minimum value, mean, median, and variance from the aforementioned anatomical struc-tures.Classification: Extreme Gradient Boosting (XGBoost) (Chen and Guestrin, 2016) wastrained on the extracted features to classify the CT images into the four distinct contrastphases.2Short TitleFigure 1: Example abdominal CT images at the anatomical level of the right adrenal glandand the left kidney, representing each of the four classes. Observe the variationsin pixel intensity across the phases in the key structures: aorta (red arrows),portal vein (yellow arrows), renal parenchyma (blue arrows), and renal pelvis(green arrows).3. Results and DiscussionThe classifier demonstrated high accuracy on the test set in identifying the four contrastphases, with an accuracy of 92.3% and F1-scores of 96.6% for non-contrast, 78.9% forarterial, 92.2% for venous, and 95.0% for delayed phase, as shown in Table 1. These resultshighlight that using segmentations of clinically relevant anatomic structures can contributeto the development of an accurate contrast phase classifier. The arterial phase was themost challenging to classify, which could be attributed to the limited number of trainingexamples for this phase. 
This model can serve as a valuable component in a pipeline ofother AI algorithms for abdominal CT scan analysis.Metric Non-Contrast Arterial Venous Delayed# Training examples 285 (24.0%) 49 (4.1%) 503 (42.5%) 346 (29.2%)# Test examples 76 (20.9%) 22 (6.0%) 139 (38.4%) 125 (34.5%)Precision 100.0 93.7 87.1 97.4Recall 93.4 68.1 97.8 92.8Specificity 100.0 99.7 91.0 91.0F1 score 96.6 78.9 92.2 95.0Table 1: Performance metrics and dataset distribution for the classification model3Reis Blankemeier Chaves Jensen Yao Truyts Boutin Jr ChaudhariThe algorithm has been made publicly available through the Comp2Comp InferencePipeline - Open-Source Body Composition Assessment on Computed Tomography (Blanke-meier et al., 2023) on the following GitHub repository: https://github.com/StanfordMIMI/Comp2Comp . We provide an easy-to-use command line interface that operates on DICOMand NIfTI medical image formats.4. ConclusionWe introduce an efficient algorithm for detecting contrast phases in abdominal CT scans.We show that by carefully choosing key structures to extract features, we achieve highaccuracy for contrast phase classification. While the current focus is on the abdominalregion, this method has the potential to be expanded to additional fields of view.ReferencesLouis Blankemeier, Arjun Desai, Juan Manuel Zambrano Chaves, Andrew Wentland, SallyYao, Eduardo Reis, Malte Jensen, Bhanushree Bahl, Khushboo Arora, Bhavik N Patel,et al. Comp2comp: Open-source body composition assessment on computed tomography.arXiv preprint arXiv:2302.06568 , 2023.Robert D. Boutin, Justin M. Kaptuch, Cyrus P. Bateni, James S. Chalfant, and LawrenceYao. Influence of iv contrast administration on ct measures of muscle and bone atten-uation: Implications for sarcopenia and osteoporosis evaluation. American Journal ofRoentgenology , 207(5):1046–1054, 2016. doi: 10.2214/ajr.16.16387.Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedingsof the 22nd ACM SIGKDD International Conference on Knowledge Discovery and DataMining , KDD ’16, pages 785–794, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4232-2. doi: 10.1145/2939672.2939785.Binh T. Dao, Thang V. Nguyen, Hieu H. Pham, and Ha Q. Nguyen. Phase recognitionin contrast-enhanced ct scans based on deep learning and random sampling. 2022. doi:10.1101/2022.03.07.22272004.Perry J Pickhardt, B Dustin Pooler, Travis Lauder, Alejandro Mu ̃ noz del Rio, Richard JBruce, and Neil Binkley. Opportunistic screening for osteoporosis using abdominal com-puted tomography scans obtained for other indications. Annals of Internal Medicine , 158(8):588–595, 2013. doi: 10.7326/0003-4819-158-8-201304160-00003.Jakob Wasserthal, Manfred Meyer, Hanns-Christian Breit, Joshy Cyriac, Shan Yang, andMartin Segeroth. Totalsegmentator: robust segmentation of 104 anatomical structures inct images. arXiv preprint arXiv:2208.05868 , 2022.Zezhong Ye, Jack M. Qian, Ahmed Hosny, Roman Zeleznik, Deborah Plana, Jirapat Lik-itlersuang, Zhongyi Zhang, Raymond H. Mak, Hugo J. Aerts, Benjamin H. Kann, andet al. Deep learning–based detection of intravenous contrast enhancement on ct scans.Radiology: Artificial Intelligence , 4(3), 2022. doi: 10.1148/ryai.210285.4Short TitleJuan M Zambrano Chaves, Akshay S Chaudhari, Andrew L Wentland, Arjun D Desai, ImonBanerjee, Robert D Boutin, David J Maron, Fatima Rodriguez, Alexander T Sandhu,R Brooke Jeffrey, et al. 
Opportunistic assessment of ischemic heart disease risk using ab-dominopelvic computed tomography and medical record data: a multimodal explainableartificial intelligence approach. medRxiv , 2021.5 |
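As a sketch of the feature-extraction and classification stages described in the Methods above, the code below computes simple HU statistics inside each segmented structure and trains an XGBoost classifier on them. It is illustrative only (synthetic data, five statistics per structure rather than the 48 features used in the paper, and hypothetical helper names), not the released Comp2Comp implementation.

```python
import numpy as np
from xgboost import XGBClassifier

STRUCTURES = ["aorta", "inferior_vena_cava", "portal_vein", "renal_parenchyma", "renal_pelvis"]

def hu_statistics(ct_hu, mask):
    """Simple radiodensity statistics (in HU) inside one segmented structure."""
    vals = ct_hu[mask > 0]
    return [vals.min(), vals.max(), vals.mean(), np.median(vals), vals.var()]

def series_features(ct_hu, masks):
    """Concatenate per-structure statistics into one feature vector for a series."""
    return np.concatenate([hu_statistics(ct_hu, masks[name]) for name in STRUCTURES])

# synthetic stand-in: in a real pipeline each row of X would come from series_features()
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5 * len(STRUCTURES)))
y = rng.integers(0, 4, size=200)   # 0=non-contrast, 1=arterial, 2=venous, 3=delayed

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1, objective="multi:softprob")
clf.fit(X, y)
print(clf.predict(X[:5]))
```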
iilLHaINUW | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionSAM.MD: Zero-shot medical image segmentationcapabilities of the Segment Anything ModelSaikat Roy∗1saikat.roy@dkfz-heidelberg.deTassilo Wald∗1,2tassilo.wald@dkfz-heidelberg.deGregor Koehler∗1,2g.koehler@dkfz-heidelberg.deMaximilian R. Rokuss∗1maximilian.rokuss@dkfz-heidelberg.deNico Disch∗1nico.disch@dkfz-heidelberg.deJulius Holzschuh∗1julius.holzschuh@dkfz-heidelberg.deDavid Zimmerer∗1d.zimmerer@dkfz-heidelberg.deKlaus H. Maier-Hein1,3k.maier-hein@dkfz-heidelberg.de1Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany2Helmholtz Imaging3Pattern Analysis and Learning Group, Heidelberg University Hospital, GermanyEditors: Under Review for MIDL 2023AbstractFoundation models have taken over natural language processing and image generationdomains due to the flexibility of prompting. With the recent introduction of the SegmentAnything Model (SAM), this prompt-driven paradigm has entered image segmentation witha hitherto unexplored abundance of capabilities. The purpose of this paper is to conductan initial evaluation of the out-of-the-box zero-shot capabilities of SAM for medical imagesegmentation, by evaluating its performance on an abdominal CT organ segmentation task,via point or bounding box based prompting. We show that SAM generalizes well to CTdata, making it a potential catalyst for the advancement of semi-automatic segmentationtools for clinicians. We believe that this foundation model, while not reaching state-of-the-art segmentation performance in our investigations, can serve as a highly potent startingpoint for further adaptations of such models to the intricacies of the medical domain.Keywords: medical image segmentation, SAM, foundation models, zero-shot learning1. IntroductionIn recent years, there has been an explosion in the development and use of foundationalmodels in the field of artificial intelligence. These models are trained on very large datasetsin order to generalize on various tasks and domains. In the Natural Language Processingdomain, Large Language Models (LLMs) have taken over (Brown et al., 2020). This leadsto models of increasing size culminating in the recent GPT4 by OpenAI (2023). For theimage domain, Stable Diffusion (Rombach et al., 2022) and DALL-E (Ramesh et al., 2021),are models that generate high-resolution images using text prompts. And with the recentpublication of the Segment Anything Model (SAM) (Kirillov et al., 2023) the field of imagesegmentation received a promptable model, possibly enabling a wide range of applications.In this paper, we contribute an early stage evaluation of SAM with different visual promptsdemonstrating varying degrees of accuracy on a multi-organ dataset of the CT domain.∗Contributed equally©2023 CC-BY 4.0, S. Roy et al.Roy Wald Koehler Rokuss Disch Holzschuh Zimmerer Maier-Hein(a) Image (b) GT (c) 1 Point (d) 3 Points (e) 10 Points(f) Box 0.01 (g) Box 0.05 (h) Box 0.1 (i) Box 0.25 (j) Box 0.5Figure 1: Examples of random point and jittered box prompts with subsequently generatedsegmentation masks. Prompt points and boxes are represented in green, whilethe obtained segmentations are shown in blue.2. MethodsSlice Extraction We use slices from the AMOS22 Abdominal CT Organ Segmentationdataset (Ji et al., 2022) to evaluate the zero-shot capabilities of SAM. We generate ourevaluation dataset using axial 2D slices of patients centered around the center-of-mass ofeach given label. 
This results in 197-240 slices per patient, per class with each image slicecontaining some foreground class and a corresponding binary mask. Given this slice andbinary mask, we generate different types of visual prompts.Visual Prompt Engineering Zero-shot approaches have recently utilized prompting tosegment novel concepts or classes not seen during training (L ̈ uddecke and Ecker, 2022; Zouet al., 2022). SAM allows a variety of prompts including text, points and boxes to enablezero-shot semantic segmentation.1In this work, we use the following limited set ofpositivevisual prompts to gauge the zero-shot capabilities of SAM on unseen concepts – 1) Point-based prompting with 1, 3 and 10 randomly selected points from the segmentation mask ofthe novel structure, 2) Bounding boxes of the segmentation masks with jitter of 0.01, 0.05,0.1, 0.25 and 0.5 added randomly, to simulate various degrees of user inaccuracy. Boxesand Points are provided in an Oracle fashion to imitate an expert clinician.3. Results and Discussion3.1. ResultsWe compare the predictions of SAM to the corresponding 2D slices extracted from pre-dictions of a trained 2D and 3D nnU-Net baseline (Isensee et al., 2018). Dice SimilarityCoefficient (DSC) of the various prompting types as well as nnU-Net are shown in Table1. To the best of our knowledge, SAM does not provide a direct text prompt interface yet.2SAM for medical image segmentationMethodOrgansAVG AVG*Spl. R.Kid. L.Kid. GallBl. Esoph. Liver Stom. Aorta Postc. Pancr. R.AG. L.AG. Duod. Blad.1 Point 0.632 0.759 0.770 0.616 0.382 0.577 0.508 0.720 0.453 0.317 0.085 0.196 0.339 0.542 0.493 0.3473 Points 0.733 0.784 0.786 0.683 0.448 0.658 0.577 0.758 0.493 0.343 0.129 0.240 0.325 0.631 0.542 0.39710 Points 0.857 0.855 0.857 0.800 0.643 0.811 0.759 0.842 0.637 0.538 0.405 0.516 0.480 0.789 0.699 0.560Boxes, 0.01 0.926 0.884 0.889 0.883 0.820 0.902 0.823 0.924 0.867 0.727 0.618 0.754 0.811 0.909 0.838 0.826Boxes, 0.05 0.920 0.883 0.894 0.879 0.814 0.883 0.818 0.923 0.862 0.727 0.609 0.746 0.805 0.907 0.834 0.819Boxes, 0.1 0.890 0.870 0.874 0.859 0.806 0.813 0.796 0.919 0.845 0.702 0.594 0.733 0.785 0.862 0.810 0.795Boxes, 0.25 0.553 0.601 0.618 0.667 0.656 0.490 0.561 0.747 0.687 0.481 0.478 0.558 0.655 0.561 0.594 0.612Boxes, 0.5 0.202 0.275 0.257 0.347 0.356 0.164 0.252 0.381 0.335 0.239 0.234 0.308 0.343 0.205 0.278 0.289nnUNet 3D 0.978 0.951 0.951 0.903 0.856 0.978 0.919 0.961 0.923 0.856 0.790 0.815 0.814 0.929 0.902 0.902nnUNet 2D 0.977 0.938 0.943 0.865 0.850 0.976 0.890 0.954 0.884 0.788 0.753 0.787 0.745 0.920 0.877 0.877Table 1: DSC of Point and Box Prompting against 2D and 3D nnUNet. All results createdafter CT clipping to -100 to 200 Hounsfield Units, except AVG* on the rightwhich is the average DSC on raw CT values.1. Box prompting, even with moderate (0.1) jitter, is seen to be highly competitive againstour baselines, compared to Point prompts.3.2. DiscussionZero-shot Medical Image Segmentation SAM is seen to segment novel target struc-tures (organs), especially with bounding box prompting at moderate jitter, to highly com-petitive accuracies compared to our baselines. Single positive bounding boxes are seen toperform considerably better than 10 positive point prompts. The performance does notdegrade on raw CT values as well (AVG*), indicating robustness of box prompting to highintensity ranges. 
Considering that nnU-Net is a strong automatic baseline trained on theentire dataset and SAM only sees a slice and a prompt (points or box), SAM demonstratesenormous potential as a zero-shot technique for medical image segmentation.Who is it useful for? Our experiments demonstrate that SAM could be highly beneficialfor interactive segmentation pipelines – enabling rapid semi-automatic segmentation of amajority of the structure of interest, with only a few click or bounding box prompts (orpossibly both) by an expert. Empirically, it appears that SAM may experience decreasedaccuracy in areas near class boundaries (as shown in Figure 1). However, as such areas canbe manually segmented, the use of SAM might still greatly improve the speed of a clinicalpipeline while maintaining a good level of accuracy.4. ConclusionOur study evaluates the zero-shot effectiveness of the Segment Anything Model (SAM) formedical image segmentation using few click and bounding box prompting demonstratinghigh accuracy on novel medical image segmentation tasks. We find that by using SAM,expert users can achieve fast semi-automatic segmentation of most relevant structures,making it highly valuable for interactive medical segmentation pipelines.3Roy Wald Koehler Rokuss Disch Holzschuh Zimmerer Maier-HeinReferencesTom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, PrafullaDhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Lan-guage models are few-shot learners. Advances in neural information processing systems ,33:1877–1901, 2020.Fabian Isensee, Jens Petersen, Andre Klein, David Zimmerer, Paul F. Jaeger, Simon Kohl,Jakob Wasserthal, Gregor Koehler, Tobias Norajitra, Sebastian Wirkert, and Klaus H.Maier-Hein. nnu-net: Self-adapting framework for u-net-based medical image segmenta-tion, 2018.Yuanfeng Ji, Haotian Bai, Chongjian Ge, Jie Yang, Ye Zhu, Ruimao Zhang, Zhen Li,Lingyan Zhanng, Wanling Ma, Xiang Wan, et al. Amos: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation. Advances in Neural Infor-mation Processing Systems , 35:36722–36732, 2022.Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson,Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Doll ́ ar, and RossGirshick. Segment Anything. apr 2023. URL http://arxiv.org/abs/2304.02643 .Timo L ̈ uddecke and Alexander Ecker. Image segmentation using text and image prompts. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition ,pages 7086–7096, 2022.OpenAI. GPT-4 Technical Report. mar 2023. URL http://arxiv.org/abs/2303.08774 .Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford,Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation, 2021.Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj ̈ orn Ommer.High-resolution image synthesis with latent diffusion models, 2022.Xueyan Zou, Zi-Yi Dou, Jianwei Yang, Zhe Gan, Linjie Li, Chunyuan Li, Xiyang Dai,Harkirat Behl, Jianfeng Wang, Lu Yuan, et al. Generalized decoding for pixel, image,and language. arXiv preprint arXiv:2212.11270 , 2022.4 |
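For reference, the sketch below shows one way to derive the point and jittered-box prompts described in the Methods above from a ground-truth mask and pass them to SAM's predictor, assuming the official segment-anything package and a downloaded ViT-H checkpoint. The jitter convention, helper names, and checkpoint path are our assumptions, not the exact evaluation code.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor  # official SAM package

def random_point_prompts(mask, n_points=3, rng=None):
    """Sample positive point prompts (x, y) from the foreground of a binary mask."""
    rng = np.random.default_rng(0) if rng is None else rng
    ys, xs = np.nonzero(mask)
    idx = rng.choice(len(xs), size=n_points, replace=False)
    return np.stack([xs[idx], ys[idx]], axis=1), np.ones(n_points, dtype=int)

def jittered_box(mask, jitter=0.1, rng=None):
    """Bounding box [x0, y0, x1, y1] of the mask with jitter proportional to its size."""
    rng = np.random.default_rng(0) if rng is None else rng
    ys, xs = np.nonzero(mask)
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    noise = rng.uniform(-jitter, jitter, size=4) * np.array([x1 - x0, x1 - x0, y1 - y0, y1 - y0])
    return np.array([x0, y0, x1, y1], dtype=float) + noise[[0, 2, 1, 3]]

def segment_with_sam(image_rgb, organ_mask, checkpoint="sam_vit_h.pth"):
    """image_rgb: HxWx3 uint8 slice (e.g. clipped to [-100, 200] HU and rescaled);
    organ_mask: HxW binary ground-truth mask supplying the oracle prompts."""
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image_rgb)
    coords, labels = random_point_prompts(organ_mask, n_points=3)
    point_masks, _, _ = predictor.predict(point_coords=coords, point_labels=labels,
                                          multimask_output=False)
    box_masks, _, _ = predictor.predict(box=jittered_box(organ_mask, jitter=0.1),
                                        multimask_output=False)
    return point_masks[0], box_masks[0]
```

The predicted masks can then be scored against the ground truth with the Dice Similarity Coefficient, as in Table 1.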
VmFdXXpVx8 | Medical Imaging with Deep Learning 2023Anomaly Detection using Cascade Variational AutoencoderCoupled with Zero Shot LearningGokul Ramasamy gramasam@asu.eduArizona State University, Tempe, AZ, USABhavik N. Patel patel.bhavik@mayo.eduMayo Clinic, Phoenix, AZ, USAImon Banerjee Banerjee.Imon@mayo.eduMayo Clinic, Phoenix, AZ, USAAbstractDetection of anomalies before they are included in the downstream diagnosis/prognosismodels is an important criterion for maintaining the medical AI imaging model perfor-mance across internal and external datasets. However, the core challenges are: (i) giventhe infinite variations of possible anomaly, curation of training data is in-feasible; (ii)making assumptions about the types of anomalies are often hypothetical. We propose anunsupervised anomaly detection model using a cascade variational autoencoder coupledwith a zero-shot learning (ZSL) network that maps the latent vectors to semantic attributespace. We present the performance of the proposed model on two different use cases – skinimages and chest radiographs and also compare against the same class of state-of-the artgenerative OOD detection models.Keywords: Anomaly detection, OOD, Zero-shot learning, medical imaging1. IntroductionThe performance of deep learning models, especially supervised learning has been shown tobe on par with health-care professionals in multiple applications (Liu et al., 2019; Rajpurkaret al., 2017). Despite the high performance, the safety and reliability of these models isquestioned, as often the models fail to retain the same performance on the unseen externaldataset. Inaccurate predictions can cause a catastrophic consequence for the patients whenapplied in clinical diagnosis and prognosis. One major reason for the performance drop onexternal dataset is that supervised learning models operate under closed-world assumption(Fei and Liu, 2016) i.e., during inference, the models can handle only samples which containsexactly similar pattern to the data that the model has been trained on. But that’s not oftenthe case after deployment in the real world. As a simple example, the model could be trainedwith a regular chest radiograph dataset and during the testing, it receives a chest radiographdataset with a mix of regular and high contrast CLAHE images, and the model fails terribly.Another important aspect of improving the diagnostic performance of supervised AImodels is the need for large amounts of high quality training data. But challenge withcurating high-quality medical data is that the datasets from different institutions can beheterogeneous with distribution shifts (Cao et al., 2020).OOD data, also called anomaly /outlier , usually refers to data that shows dissimilar-ity from the training data distribution and often AI models fail to retain performance on©2023 CC-BY 4.0, G. Ramasamy, B.N. Patel & I. Banerjee.Ramasamy Patel Banerjeethe OOD data. A successful open-world deployment of an AI model with OOD detectionshould be sensitive to unseen classes and distribution-shifted samples and also be resilientto potential adversarial attacks (Sehwag et al., 2019). To train a OOD detector with onlyin-distribution (ID) data available, learning high-quality “normality” features is the fun-damental step to identify the OOD samples during inference. 
We designed a novel OOD architecture by combining generative and zero-shot learning models and performed a comparative analysis against the state-of-the-art GAN-based OOD detector f-AnoGAN (Schlegl et al., 2019) and the autoencoder-based CVAE model (CVAD) (Guo et al., 2021).

2. Methodology
Our proposed architecture has two components - a CVAE (Cascade Variational Autoencoder) and a ZSL (Zero-Shot Learning) network. The CVAE is similar to the generator of CVAD (Guo et al., 2021). In contrast to a vanilla VAE, a cascaded architecture provides high-quality reconstructions and better latent representations. The CVAE has two branches of VAE. The primary branch comprises E1 and D1 as the encoder and decoder, while the secondary branch comprises E2 and D2 as the encoder and decoder. The input to the secondary branch is the concatenated feature vector from E11 and D11, to improve the quality of the generated images. The CVAE network is trained with KL divergence loss and Mean-Squared Error (MSE) loss, which is used as the reconstruction loss.

Figure 1: Proposed Architecture of OOD with CVAE and ZSL.

The ZSL network S1 is designed with three linear layers and a sigmoid activation at the end. The concatenated vector of the latent vectors from E1 and E2, of size 2560 (2048, 512), is the input to the ZSL network. The size of the output layer is determined by the number of semantic features used in the particular use-case. Ideally, the distance between the predicted attributes and the ground truth attributes would serve as the anomaly score. But here the external dataset is not as extensively labelled with the chosen semantic attributes. Hence, the semantic outputs of the trained ZSL model on the internal ID dataset are averaged to get a mean ID embedding which serves as a representative embedding for ID data. The predicted external ID data embeddings are closer to the representative embedding, whereas the predicted external OOD data embeddings are not. The Euclidean distance between the predicted embedding and the representative ID embedding is used to model this variation and serves as the anomaly score. The mean distance between the representative ID embedding and the internal ID data serves as a threshold to classify the external data as ID or OOD.

3. Experiments and Results
The proposed architecture is validated on two distinct medical problems and uses four publicly available imaging datasets: (i) dermatology images - ISIC dataset (internal) and Fitzpatrick17k (external); (ii) chest radiographs - CheXpert dataset (internal) and UNIFESP dataset (external). For our use cases, three distinct out-of-distribution categories are identified - Type 1 (domain shift): data samples totally unrelated to the task at hand; Type 2 (quality drift): data samples that were acquired incorrectly; Type 3 (interclass OOD): data samples that are unseen due to selection bias. Within the scope of this work, the internal test set is tested predominantly on Type 3 (interclass OOD), while the external data is tested on a set of images comprising all the use cases (Types 1-3). The CVAE network coupled with ZSL was trained on ISIC 2020 with 21 different semantic attributes (e.g. anatomic region, skin color) and CheXpert with 23 different attributes (e.g. support device, fracture, gender).
The number of training and validation images for pre-training the CVAE and training the entire network are presented in Table 1.

Table 1: Training and validation data statistics for CVAE pre-train and ZSL network train
  CVAE pre-train:  ISIC 2019 - 11193 train / 947 validation;  CheXpert - 9900 train / 1100 validation
  CVAE+ZSL train:  ISIC 2020 - 4358 train / 600 validation;   CheXpert - 5000 train / 1000 validation

The performance of our proposed architecture (CVAE+ZSL) is tested against f-AnoGAN (Schlegl et al., 2017) and CVAD (Guo et al., 2021) (Table 2). The threshold was chosen as mean + 0.5*std by observing the mean and the standard deviations of the anomaly scores for AnoGAN and CVAD, and of the Euclidean distance for our methodology. From our experimental results, it can be observed that the proposed CVAE+ZSL model outperformed both AnoGAN and CVAD on the external unseen data with significant data shift.

Table 2: Comparative analysis of AnoGAN, CVAD and CVAE+ZSL (Acc ↑ / TPR ↑ / FPR ↓; intervals in parentheses)
ISIC ID + OOD (internal):
  AnoGAN    Acc 0.575 (0.561, 0.577)    TPR 0.305 (0.289, 0.307)    FPR 0.155 (0.146, 0.160)
  CVAD      Acc 0.70 (0.688, 0.703)     TPR 0.72 (0.702, 0.722)     FPR 0.32 (0.311, 0.331)
  CVAE+ZSL  Acc 0.5725 (0.560, 0.577)   TPR 0.35 (0.334, 0.354)     FPR 0.205 (0.196, 0.212)
Fitzpatrick17k ID + OOD (external):
  AnoGAN    Acc 0.49 (0.488, 0.504)     TPR 1.0                     FPR 1.0
  CVAD      Acc 0.334 (0.325, 0.338)    TPR 0.36 (0.353, 0.372)     FPR 0.691 (0.689, 0.708)
  CVAE+ZSL  Acc 0.6511 (0.643, 0.658)   TPR 0.555 (0.543, 0.566)    FPR 0.256 (0.246, 0.264)
CheXpert ID + OOD (internal):
  AnoGAN    Acc 0.7707 (0.768, 0.776)   TPR 0.9172 (0.918, 0.932)   FPR 0.2433 (0.238, 0.248)
  CVAD      Acc 0.7076 (0.703, 0.712)   TPR 0.2966 (0.284, 0.305)   FPR 0.2522 (0.252, 0.261)
  CVAE+ZSL  Acc 0.6778 (0.663, 0.674)   TPR 0.4896 (0.473, 0.506)   FPR 0.304 (0.298, 0.305)
UNIFESP ID + OOD (external):
  AnoGAN    Acc 0.6987 (0.681, 0.699)   TPR 1.0                     FPR 0.411 (0.411, 0.431)
  CVAD      Acc 0.4226 (0.412, 0.426)   TPR 0.25 (0.229, 0.260)     FPR 0.512 (0.509, 0.525)
  CVAE+ZSL  Acc 0.7112 (0.701, 0.716)   TPR 0.5468 (0.535, 0.564)   FPR 0.2286 (0.224, 0.239)

4. Conclusion
The proposed work designed an unsupervised anomaly detection model using a cascade VAE coupled with a zero-shot learning network that maps the latent vectors to semantic attributes. With the inclusion of the semantic space, the proposed architecture generalizes well and has a better performance on the unseen external dataset when compared against the same class of state-of-the-art models.

References
Tianshi Cao, Chin-Wei Huang, David Yu-Tung Hui, and Joseph Paul Cohen. A benchmark of medical out of distribution detection, 2020.
Geli Fei and Bing Liu. Breaking the closed world assumption in text classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 506–514, San Diego, California, June 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-1061. URL https://aclanthology.org/N16-1061.
Xiaoyuan Guo, Judy Wawira Gichoya, Saptarshi Purkayastha, and Imon Banerjee. CVAD: A generic medical anomaly detector based on cascade VAE. arXiv preprint arXiv:2110.15811, 2021.
Xiaoxuan Liu, Livia Faes, Aditya U. Kale, Siegfried K. Wagner, Dun Jack Fu, Alice Bruynseels, Thushika Mahendiran, Gabriella Moraes, Mohith Shamdas, Christoph Kern, Joseph R. Ledsam, Martin K. Schmid, Konstantinos Balaskas, Eric J. Topol, Lucas M. Bachmann, Pearse A. Keane, and Alastair K. Denniston. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. The Lancet Digital Health, 1(6):e271–e297, Oct 2019. ISSN 2589-7500. doi: 10.1016/S2589-7500(19)30123-2.
URL https://doi.org/10.1016/S2589-7500(19)30123-2.
Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, et al. CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225, 2017.
Thomas Schlegl, Philipp Seeböck, Sebastian M. Waldstein, Ursula Schmidt-Erfurth, and Georg Langs. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery, 2017.
Thomas Schlegl, Philipp Seeböck, Sebastian M Waldstein, Georg Langs, and Ursula Schmidt-Erfurth. f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks. Medical Image Analysis, 54:30–44, 2019.
Vikash Sehwag, Arjun Nitin Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, Mung Chiang, and Prateek Mittal. Analyzing the robustness of open-world machine learning. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, pages 105–116, 2019.
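For the record above, a minimal sketch of the ZSL-based anomaly scoring step is given below: the Euclidean distance of each predicted semantic embedding to the mean in-distribution embedding, thresholded at mean + 0.5*std of the internal ID scores as in the comparative evaluation. Array names, shapes, and the file paths are assumptions; the embeddings would be the sigmoid outputs of the trained ZSL head.

```python
import numpy as np

def anomaly_scores(embeddings, id_reference):
    """Euclidean distance of each semantic embedding to the representative ID embedding."""
    return np.linalg.norm(embeddings - id_reference, axis=1)

# Assumed inputs: ZSL outputs for the internal ID set and for an external test set,
# each of shape [num_samples, num_semantic_attributes].
id_embeddings = np.load("internal_id_embeddings.npy")      # hypothetical file
test_embeddings = np.load("external_embeddings.npy")       # hypothetical file

id_reference = id_embeddings.mean(axis=0)                  # mean (representative) ID embedding
id_scores = anomaly_scores(id_embeddings, id_reference)

# Threshold as described in the evaluation: mean + 0.5 * std of the internal ID scores.
threshold = id_scores.mean() + 0.5 * id_scores.std()
is_ood = anomaly_scores(test_embeddings, id_reference) > threshold
```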
_cft_bbodYO | Medical Imaging with Deep Learning 2023A General Stitching Solution for Whole-Brain 3D NucleiInstance Segmentation from Microscopy ImagesZiquan Wei1ziquanw@email.unc.edu1Department of Psychiatry, University of North Carolina at Chapel HillGuorong Wu1,2guorong wu@med.unc.edu2Department of Computer Science, University of North Carolina at Chapel HillAbstractHigh-throughput 3D nuclei instance segmentation (NIS) is critical to understanding thecomplex structure and function of individual cells and their interactions within the largertissue environment in the brain. Despite the significant progress in achieving accurate NISwithin small image stacks using cutting-edge machine learning techniques, there has beena lack of effort to extend this approach towards whole-brain NIS. To address this challenge,we propose an efficient deep stitching neural network built upon a knowledge graph modelcharacterizing 3D contextual relationships between nuclei. Our deep stitching model isdesigned to be agnostic, enabling existing limited methods (optimized for image stackonly) to overcome the challenges of whole-brain NIS, particularly in addressing the issue ofinter- and intra-slice gaps. We have evaluated the NIS accuracy on top of state-of-the-artdeep models, such as Cellpose, with 128 ×128×64 image stacks.Keywords: Image stitching, 3D microscopy image, Whole-brain nucleus instance segmen-tation, Graph neural network.1. IntroductionLight-sheet microscopy is a powerful imaging modality that allows for fast and high-resolutionimaging of large samples, such as the whole brain of the mouse (Yang et al., 2022; Ben-nett and Kim, 2022). Alternatively, tissue-clearing techniques enable the removal of light-scattering molecules, thus improving the penetration of light through biological samplesand allowing for better visualization of internal structures, including nuclei (Banerjee andPoddar, 2022; You et al., 2023). Together, light-sheet microscopy and tissue-clearing tech-niques have revolutionized the field of biomedical imaging and they have been widely usedfor studying the structure and function of tissues and organs at the cellular level.Accurate 3D nuclei instance segmentation plays a crucial role in identifying and delin-eating individual nuclei within three-dimensional space, which is essential for understandingthe complex structure and function of biological tissues in the brain. However, due to thehigh cost of 3D manual nuclei annotations and the complexity of learning, current end-to-end NIS models are typically limited to training and testing on small image stacks (e.g.,128×128×64). Considering these limitations, one approach for achieving whole-brain NISis dividing the whole stack into smaller stacks so that the existing NIS methods can handleeach piece individually. In such a scenario, constructing the whole-brain nuclei instancesegmentation in 3D from these smaller image stacks arises a new challenge. The gaps be-tween these smaller stacks (intra-slice) and the slices (inter-slice) require a robust stitchingmethod for accurate NIS. Although Cellpose offers a straightforward solution for 3D input,©2023 CC-BY 4.0, Z. Wei & G. Wu.Wei Wuthe extremely high RAM demand of a whole-brain image of a P4 mouse, which can be asmuch as 3TB in theory, necessitates the use of the dividing and stitching method.To address these gap issues, We propose a stitching framework for whole-brain NIS asshown in Figure 1. 
We evaluated this hierarchical stitching framework by the segmentation precision and stitching accuracy (correspondence matching between 2D masks). Compared to no stitching, our deep stitching model has shown a significant improvement in the NIS results with different state-of-the-art models, indicating its potential for practical applications in the field of neuroscience.

2. Method and Experiment
In the hierarchical stitching framework, there are two stages:
1. Resolve intra-slice gap in X-Y plane. Suppose that each within-stack NIS result overlaps with its neighboring image stack in the X-Y plane. By doing so, each 2D nuclei instance in the X-Y plane is expected to have at least one "gap-free" NIS estimation that does not touch the boundary of the image stack. Thus, we can resolve the intra-slice gap problem in the X-Y plane in three steps: (i) identify the duplicated 2D nuclei instances from multiple overlapped image stacks, (ii) find the representative NIS result from the "gap-free" image stack, and (iii) unify multiple NIS estimations by using the "gap-free" NIS estimation as the appearance of the underlying 2D nuclei.
2. Inter-slice stitching using graph contextual model. At each gap area along the Z-axis, we deploy the graph contextual model, which has two MLP components, to stitch the sliced nuclei instances. Specifically, we follow the partition of the whole-brain microscopy image in stage 1, that is, a set of overlapped 3D image stacks, all the way from the top to the bottom as shown in the bottom-right corner of Figure 1.

Figure 1: The proposed hierarchical stitching framework for whole-brain NIS. Top: Resolve the intra-slice gap in the X-Y plane by overlap. Bottom: Graph contextual model for inter-slice gap.

It is worth noting that each 2D nuclei instance in the X-Y plane is complete, as indicated by the red-highlighted portion extending beyond the image stack. Next, we assign a stack-specific local index to each 2D nuclei instance. After that, we apply the (trained) graph contextual model to each 2D nuclei instance by (i) constructing the contextual graph centered at the underlying 2D nuclei instance, and (ii) predicting the spatial correspondences with respect to the neighboring 2D instances.
Graph contextual model. First, we construct an initial contextual graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$ for each 2D nucleus instance $x$ (i.e., image appearance vector, gradient flow by Cellpose (Stringer et al., 2021; Pachitariu and Stringer, 2022)). The set of nodes $\mathcal{V} = \{x_i \mid D(x, x_i) > \delta\}$ includes all neighboring 2D nuclei instances, where the distance between two instances is denoted by $D$ and $\delta$ is a threshold. The matrix $\mathcal{E} \in \mathbb{R}^{N \times N}$ represents the edges between nodes. Second, we train the model on a set of contextual graphs $\mathcal{G}$ to recursively perform:
1. Graph feature representation learning. For the $k$-th iteration, we enable two connected nodes to exchange their feature representations constrained by the current relationship topology $e_{ij}^{(k)}$ by the $k$-th layer of the deep stitching model.
In this context, we define the message-passing function as
$x_i^{(k+1)} = \gamma^{(k+1)}\left(x_i^{(k)},\; \sum_{j\in\mathcal{N}(i)} \phi^{(k)}\big(x_i^{(k)}, x_j^{(k)}, e_{j,i}^{(k)}\big)\right).$
Following the popular learning scheme in knowledge graphs (Wang et al., 2021), we employ Multilayer Perceptrons (MLPs) as the functions $\gamma$ and $\phi$.
2. Learning the link-wise similarity function to predict nuclei-to-nuclei correspondence. Given the updated node feature representations $\{x_i^{(k+1)}\}$, we train another MLP to learn the similarity function $\psi$ in a layer-by-layer manner. In the $k$-th layer, we update each 2D-to-3D contextual correspondence $e_{j,i}^{(k+1)}$ for the next layer by
$e_{j,i}^{(k+1)} = \psi^{(k+1)}\big(x_i^{(k+1)}, x_j^{(k+1)}, e_{j,i}^{(k)}\big).$
Data and computing environment. In the following experiments, we have trained Mask-RCNN-R50, Mask-RCNN-R101, and CellPose on the X-Y plane of 16 image stacks (128 × 128 × 64), which include in total 6,847 manually labeled 3D nuclei.
Quantitative evaluation. As shown in Figure 2, there is a clear sign that NIS models with our hierarchical stitching method outperform IoU-based counterparts on NIS metrics, regardless of the NIS backbone models. On average, our hierarchical stitching method has improved 14.0%, 5.1%, 10.2%, and 3.4% in precision, recall, F1 score, and stitching accuracy, respectively, compared with IoU-based no-stitching results.

Figure 2: The NIS precision (a), recall (b), F1 score (c), and stitching accuracy (d) by stitching or not, where the NIS backbones include Mask-RCNN-R50 (A, blue), Mask-RCNN-R101 (B, orange), CellPose (C, green).

References
Abhishek Banerjee and Raju Poddar. Enhanced visualization of tissue microstructures using swept-source optical coherence tomography and edible oil as optical clearing agent. Optik, 267:169693, 2022.
Hannah C Bennett and Yongsoo Kim. Advances in studying whole mouse brain vasculature using high-resolution 3d light microscopy imaging. Neurophotonics, 9(2):021902–021902, 2022.
Marius Pachitariu and Carsen Stringer. Cellpose 2.0: how to train your own model. Nature Methods, pages 1–8, 2022.
Carsen Stringer, Tim Wang, Michalis Michaelos, and Marius Pachitariu. Cellpose: a generalist algorithm for cellular segmentation. Nature Methods, 18(1):100–106, 2021.
Hongwei Wang, Hongyu Ren, and Jure Leskovec. Relational message passing for knowledge graph completion. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1697–1707, 2021.
Bin Yang, Merlin Lange, Alfred Millett-Sikking, Xiang Zhao, Jordão Bragantini, Shruthi VijayKumar, Mason Kamb, Rafael Gómez-Sjöberg, Ahmet Can Solak, Wanpeng Wang, et al. Daxi—high-resolution, large imaging volume and multi-view single-objective light-sheet microscopy. Nature Methods, 19(4):461–469, 2022.
Shangting You, Yi Xiang, Henry H Hwang, David B Berry, Wisarut Kiratitanaporn, Jiaao Guan, Emmie Yao, Min Tang, Zheng Zhong, Xinyue Ma, et al. High cell density and high-resolution 3d bioprinting for fabricating vascularized tissues. Science Advances, 9(8):eade7923, 2023.
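As an illustrative companion to the graph contextual model in this record, the sketch below implements one message-passing layer with MLPs standing in for the functions phi (messages), gamma (node update) and psi (correspondence/edge update). Layer widths, depths, and the edge-index convention are assumptions for the example, not the authors' released code.

```python
import torch
import torch.nn as nn

class StitchingLayer(nn.Module):
    """One layer of a graph contextual model: MLPs phi/gamma update node features,
    MLP psi re-scores the 2D-to-3D correspondence stored on each edge."""
    def __init__(self, d_node, d_edge, d_hidden=128):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2 * d_node + d_edge, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_node))
        self.gamma = nn.Sequential(nn.Linear(2 * d_node, d_hidden), nn.ReLU(),
                                   nn.Linear(d_hidden, d_node))
        self.psi = nn.Sequential(nn.Linear(2 * d_node + d_edge, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_edge))

    def forward(self, x, edge_index, e):
        # x: [N, d_node] node features, e: [E, d_edge] edge features,
        # edge_index: [2, E] with rows (source j, target i).
        j, i = edge_index
        msg = self.phi(torch.cat([x[i], x[j], e], dim=-1))            # phi(x_i, x_j, e_{j,i})
        agg = torch.zeros_like(x).index_add_(0, i, msg)               # sum over neighbours j of i
        x_new = self.gamma(torch.cat([x, agg], dim=-1))               # gamma(x_i, aggregated message)
        e_new = self.psi(torch.cat([x_new[i], x_new[j], e], dim=-1))  # psi(x_i, x_j, e_{j,i})
        return x_new, e_new
```

Stacking a few such layers and reading a correspondence probability from the final edge features (e.g. a sigmoid on a 1-dimensional edge output) would reproduce the layer-by-layer scheme described above.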
OytzS_LCWvw | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023Combining Anomaly Detection and Supervised Learning forMedical Image SegmentationJulius C. Holzschuh1,2julius.holzschuh@dkfz.deDavid Zimmerer1david.zimmerer@dkfz.deConstantin Ulrich1constantin.ulrich@dkfz.deMichael Baumgartner1michael.baumgartner@dkfz.deGregor Koehler1gregor.koehler@dkfz.deRainer Stiefelhagen2rainer.stiefelhagen@kit.eduKlaus Maier-Hein1k.maier-hein@dkfz.de1Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany2Karlsruhe Institute of Technology, Karlsruhe, GermanyEditors: Accepted for publication at MIDL 2023AbstractFully-supervised machine learning has been established as an effective method for medicalimage segmentation. However, it requires large amounts of expert-annotated data, whichcan be a bottleneck in certain applications. Unsupervised methods like anomaly localizationhave proven their potential without relying on any labeled data, making them potentiallymuch more scalable than fully supervised methods. Despite their scalability advantages,unsupervised and self-supervised methods have not yet fully reached the performance levelof fully supervised models. As a first step to close this gap, we propose an approach thatcombines both concepts. We fine-tune a pre-trained anomaly localization model, namely aself-supervised denoising auto-encoder, using varying amounts of labeled training data ina supervised manner. Overall this approach exhibits superior performance compared to amodel trained from scratch, especially in a low labeled training data regime.Keywords: semi-supervised segmentation, self-supervised learning, anomaly localization1. IntroductionSupervised machine learning methods, such as nnUNet (Isensee et al., 2021) have beenwildly successful for medical image segmentation tasks. However, obtaining expert-labeledtraining data can be both difficult and expensive, limiting the scalability of such mod-els. Pre-training has been proven to be a promising technique to overcome this challenge,leveraging large amounts of unlabeled data to learn generic features and consecutive fine-tuning on specific tasks with much less labeled data. However, pre-training is still not thatfrequently used for medical imaging, as the tasks involved are often very specific and poten-tially usable data is mostly segmented across multiple smaller datasets. Prevailing practicesoften limit the training of models and supervised pre-training to just one or a few similardatasets, which fails to take advantage of the potential synergy from other annotated datathat could be available across different datasets. Although numerous self-supervised meth-ods have emerged in the past, they often remain designed for tasks that are very differentfrom the downstream tasks, making them not directly applicable to medical image segmen-tation.©2023 CC-BY 4.0, J.C. Holzschuh et al.Holzschuh Zimmerer Ulrich Baumgartner Koehler Stiefelhagen Maier-HeinHere, anomaly localization is more closely related to the downstream segmentation task com-pared to other self-supervised methods. However, despite its promising zero-shot anomalysegmentation performance, it was not yet considered as a pre-training approach. 
Thus,we here try to assess the potential of anomaly localization in the context of pre-trainingmedical image segmentation methods.Hence, we propose a novel approach that combines unsupervised anomaly localization withsupervised fine-tuning for medical image segmentation, providing a promising solution tothe challenge of obtaining large amounts of expert-labeled data. The first step of our studyconsists of training a UNet to perform denoising on healthy brain MRI images, using a well-established method for anomaly localization as the basis for our approach (Kascenas et al.,2023). In a subsequent step, we utilize this pre-trained model and assess the segmentationperformance using varying amounts of labeled training data. We initially evaluated its per-formance in the binary tumor segmentation scenario to allow a fair comparison to anomalylocalization methods, before demonstrating its transferability to a multi-class downstreamtask.2. Materials and MethodsExperimental setup Experiments were conducted using the publicly available BraTS2021 dataset (Baid et al., 2021), which comprises 1251 patient samples. Each sample in-cludes a T1, T1Gd, T2 and T2-FLAIR sequence. The dataset was partitioned into three setsfor training (n=938), validation (n=62), and testing (n=251). For comparability reasons,splits were kept the same as in (Kascenas et al., 2023). Prior to training, a slice-wise down-sampling was performed to a resolution of 128x128, and the 99th percentile foreground voxelintensity was used to scale each individual modality of each scan. The anomaly localizationmodel was trained solely on healthy slices. As architecture, a UNet with three stages fordown and upsampling was used.Unsupervised Anomaly detection In analogy to (Kascenas et al., 2023) the modelwas trained using a denoising task. Here, the best performing setup from (Kascenas et al.,2023) was chosen, i.e. Gaussian noise sampled at a low resolution of 16x16 on a per-pixelbasis and then upsampled using bi-linear interpolation to the input resolution of 128x128which was then added to the image. In order to prevent consistent upsampling patterns,the generated noise was randomly shifted. A more detailed description on the denoisingimplementation can be found in (Kascenas et al., 2023).Supervised finetuning For the supervised task, models were trained using a combina-tion of soft dice and cross entropy loss. While training models were selected and savedbased on lowest validation loss. Saved models were then evaluated on the test dataset.3. Experiments and ResultsResults for a different number of labeled data samples (3D MRI Brain scans) as trainingdata are shown in Figure 1. The lower green line and upper red line in the figure indicatethe unsupervised and supervised baseline, respectively, as proposed by (Kascenas et al.,2023) on the same split. Displayed metrics for the binary case were collected using the2Short Title(a) AP binary (b)⌈Dice ⌉binary (c) Dice multi-classFigure 1: 1( a) and 1( b) present the average precision (AP) and ⌈Dice ⌉for binary segmen-tation, respectively. In step 0, the original head was utilized. Additionally, Figure 1( c)displays the mean Dice Score across all three tumour classes except the foreground. Thehorizontal axis of all graphs is presented on a logarithmic scale.same implementation as proposed for the unsupervised baseline.For multiclass segmentation, Dice Score was calculated patient wise for each 3D scan. Meanvalues were then calculated over all patients for each class individually. 
In Figure 1( c) themean of all three classes (GD-enhancing tumor, peritumoral edema, non-enhancing tumorcore) is presented while excluding the background class.4. Discussion and ConclusionIn summary, the results indicate that using an anomaly localization task as pre-training canimprove segmentation models, particularly for small amounts of training data. As expectedthe benefit of the pre-training diminishes as the number of training samples increases. How-ever, the unsupervised baseline is easily outperformed with only a small number of labeledtraining samples. The not-pretrained baseline already exhibits a quite strong performancefor only few training samples which can be partially attributed to the utilization of ournnU-Net-inspired (Isensee et al., 2021) data augmentation techniques, as well as to therelatively low complexity of the BraTS dataset (and our baseline outperforms the baselinepresented in (Kascenas et al., 2023)). As this was primarily a proof-of-concept that anomalylocalization can indeed be used as and be benefical as pretraining methods, we did not yetbenchmark it to other pre-training schemes for medical image segmentation. Furthermoreas anomaly localization is currently typically conducted slice-wise in 2D (Zimmerer et al.,2022), it is still unclear whether this denoising approach can be effectively extended to 3Dsettings and if the benefits extend beyond brain MRI or still hold in a more complicatedsetup like (Isensee et al., 2021) with extensive ensembling and postprocessing. However, herewe have shown that self-supervised anomaly localization methods can be effectively used aspre-training, and present a promising comprise between fully-supervised and unsupervisedmethods, that might be especially beneficial for the medical imaging domain.3Holzschuh Zimmerer Ulrich Baumgartner Koehler Stiefelhagen Maier-HeinReferencesUjjwal Baid, Satyam Ghodasara, Suyash Mohan, Michel Bilello, Evan Calabrese, ErrolColak, Keyvan Farahani, Jayashree Kalpathy-Cramer, Felipe C Kitamura, Sarthak Pati,et al. The rsna-asnr-miccai brats 2021 benchmark on brain tumor segmentation andradiogenomic classification. arXiv preprint arXiv:2107.02314 , 2021.Fabian Isensee, Paul F Jaeger, Simon AA Kohl, Jens Petersen, and Klaus H Maier-Hein.nnu-net: a self-configuring method for deep learning-based biomedical image segmenta-tion. Nature methods , 18(2):203–211, 2021.Antanas Kascenas, Pedro Sanchez, Patrick Schrempf, Chaoyang Wang, William Clackett,Shadia S Mikhael, Jeremy P Voisey, Keith Goatman, Alexander Weir, Nicolas Pugeault,et al. The role of noise in denoising models for anomaly detection in medical images.arXiv preprint arXiv:2301.08330 , 2023.David Zimmerer, Peter M Full, Fabian Isensee, Paul Jaeger, Tim Adler, Jens Petersen, Gre-gor Koehler, Tobias Ross, Annika Reinke, Antanas Kascenas, et al. IEEE Transactionson Medical Imaging , 41(10):2728–2738, 2022. doi: 10.1109/TMI.2022.3170077.4 |
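A minimal sketch of the coarse-noise corruption used for the denoising pretext task in this record is given below: 16×16 Gaussian noise, bilinearly upsampled to the 128×128 input resolution, randomly shifted, and added to the image. The noise standard deviation and the use of torch.roll for the random shift are assumptions; the pretrained denoising U-Net would subsequently be fine-tuned with the combined soft Dice and cross-entropy loss described above.

```python
import torch
import torch.nn.functional as F

def add_coarse_noise(x, noise_res=16, noise_std=0.2):
    """Corrupt a batch x of shape [B, C, H, W] with low-resolution Gaussian noise that is
    bilinearly upsampled and randomly shifted before being added to the images."""
    b, c, h, w = x.shape
    noise = torch.randn(b, c, noise_res, noise_res, device=x.device) * noise_std  # std is an assumed value
    noise = F.interpolate(noise, size=(h, w), mode="bilinear", align_corners=False)
    # Random circular shift so the upsampling pattern is not aligned across samples.
    dh = int(torch.randint(0, h, (1,)))
    dw = int(torch.randint(0, w, (1,)))
    noise = torch.roll(noise, shifts=(dh, dw), dims=(2, 3))
    return x + noise

# Pretraining step: the U-Net receives add_coarse_noise(x) and regresses the clean slice;
# fine-tuning reuses the same weights with a soft Dice + cross-entropy segmentation loss.
```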
4HHb2cTgbO1 | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submissionGeneration of Multi-modal Brain Tumor MRIs withDisentangled Latent Diffusion ModelYoonho Na1yoonho94.na@snu.ac.krKyuri Kim1kyurikim@snu.ac.krSung-Joon Ye1sye@snu.ac.krHwiyoung Kim2hykim82@yuhs.acJimin Lee3jiminlee@unist.ac.kr1Department of Applied Bioengineering, Graduate School of Convergence Science and Technology,Seoul National University, Seoul, Republic of Korea2Department of Biomedical Systems Informatics and Center for Clinical Imaging Data Science,Yonsei University College of Medicine, Seoul, Republic of Korea3Department of Nuclear Engineering, Ulsan National Institute of Science and Technology, Ulsan,Republic of KoreaEditors: Under Review for MIDL 2023AbstractDeep-learning based image generation methods have been widely used to overcome data de-ficiency. The same is true also as in medical field, where data shortage problem is frequent.In this study, we propose multi-modal brain tumor Magnetic Resonance Imaging (MRI)generation framework, called Disentangled Latent Diffusion Model (DLDM) to tackle datadeficiency in medical imaging. We train an autoencoder that disentangles the feature ofmulti-modal MR images into modality-sharing and modality-specific representations. Byutilizing the feature disentanglement learned from the autoencoder, we were able to traina diffusion model that can generate modality-sharing and modality-specific latent vector.We evaluate our approach with clean-FID and improved precision & recall. The resultswere compared with GAN-based model, StyleGAN2.Keywords: Generation, Multi-modal, MRI, Feature disentanglement, Diffusion model.1. IntroductionIn this work, we propose a novel approach for generating multi-modal brain tumor MRIsusing Diffusion Model(DM) with feature disentanglement. Existing methods for generatingmulti-modal MRIs typically rely on image-to-image translation and thus require a source im-age to obtain structural information. Our proposed model, which we call disentangled latentdiffusion model (DLDM), is capable of generating modality-sharing and modality-specific in-formation separately, eliminating the need for a source image. Using this approach, DLDMcan generate unlimited number of multi-modal MR images by learned distribution of brainstructures, in contrast to image-to-image translation based models that are limited to thebrain structures present in the acquired data. To the best of our knowledge, no prior studieshave utilized DMs for generating multi-modal MRIs and also, have generated multi-modalMRIs with fixed structure without any use of source image.©2023 CC-BY 4.0, Y. Na, K. Kim, S.-J. Ye, H. Kim & J. Lee.Na Kim Ye Kim LeeFigure 1: Overall training process of disentangled autoencoder.2. MethodsDisentangled Autoencoder: In order to train DLDM capable of generating modality-sharing and modality-specific data, which we call structure vector zstructand style vectorzstylerespectively, an autoencoder that can disentangle those representative features mustbe trained in prior. We employed four key strategies to achieve this goal. First, if therearennumber of MRI modalities, we set the latent dimension Zto have n+ 1 number ofrepresentative latent vectors. The additional dimension is for storing the structural infor-mation. Second, during the decoding stage, we use pair of zstructand randomly selectedsingle zstylefor reconstruction. 
This approach ensures the encoder to separate multi-modalinputs into modality-sharing and modality-specific information. Third, we randomly mixzstylewith other data inside of a mini-batch. Finally, we average the loss of randomly se-lected modalities to ensure every modalities having similar reconstruction quality. Figure 1illustrates the overall training process of disentangled autoencoder.Disentangled Latent Diffusion Model: Our proposed model diffuses and denoises thedata in latent space as in latent diffusion model (LDM) (Rombach et al., 2022). The maindifference is that DLDM can generate images with feature disentanglement by utilizingaforementioned pre-trained disentangled autoencoder. Specifically, DLDM diffuses and de-noises in pair of zstructandzstyle. Also, with this model design, applying class-label cascondition was desirable to selectively obtain the zstyleof every MRI modality. After train-ing, multi-modal MR images can be synthesized by sending the generated ( zstruct, zstyle) tothe pre-trained decoder, where zstructis fixed and zstylevaries for different modalities.3. ExperimentsDataset: Our proposed model was evaluated using brain metastasis MRI dataset, whichis provided by the Department of Radiology and the Research Institute of RadiologicalScience at Yonsei University College of Medicine. The dataset comprises five sequences,namely T1, T2, FLAIR, WB, and BB. We selected only tumor-containing slices and resizedto dimension of 256 ×256 and normalized from 0 to 1. In total, 13,106 data were obtainedin 2D slices, and were then split into 10,484 and 2,622 for train set and validation set,2Generation of Multi-modal Brain Tumor MRIs with DLDMrespectively.Evaluation: To validate that DLDM is capable of generating multi-modal MRIs, the gen-erated samples were evaluated with following metrics: clean-FID and improved precision &recall. Clean-FID is to measure the distance between distributions of real and generateddata and improved precision & recall is to measure sample coverage. We compare the re-sults of DLDM with other widely used GAN-based generative model, StyleGAN2 (Karraset al., 2020). The number of generated samples of DLDM and StyleGAN2 were equally setto 1,000 for the fair comparison.Figure 2: Multi-modal MR image samples generated with DLDM and StyleGAN2. Theoriginal MR images are shown on the left for comparison.Experimental Results: The qualitative comparison is presented in Figure 2, whichdemonstrates that DLDM generates more realistic images than StyleGAN2 in multi-modalMR image generation in human perception. The observation from this comparative studyreveals that the generated samples from StyleGAN2 exhibit frequent checkerboard artifactsand unrealistic textures, whereas samples from DLDM closely resemble original MR images,making it hard to distinguish between the two.For the quantitative results, the average clean-fid over all sequences was 0.00070 and0.00293 for DLDM and StyleGAN2, respectively. Also, the average precision & recall ofDLDM (precision: 0.91208, recall: 0.96021) showed better result than StyleGAN2 (preci-sion: 0.67771, recall: 0.68229). This results demonstrate that DLDM outperforms Style-GAN2 in both image quality and sample coverage.4. ConclusionIn this paper, we presented a novel framework, called DLDM, which leverages the strengthsof the diffusion model with feature disentanglement to generate multi-modal brain tumorMRIs. 
DLDM can generate structure and style vector separately, eliminating the needfor a source image when fixed structure is desired. We demonstrated that the samplesgenerated by DLDM exhibit high fidelity and diversity, surpassing the performance of thewidely adopted GAN model, StyleGAN2. Thus, we believe that data shortage problem inmulti-modal MR images can be solved by using our novel approach.3Na Kim Ye Kim LeeAcknowledgmentsThis research was supported by a grant of the Korea Health Technology RD Project throughthe Korea Health Industry Development Institute (KHIDI), funded by the Ministry ofHealth & Welfare, Republic of Korea (grant number: HI21C1161).ReferencesTero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila.Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVFconference on computer vision and pattern recognition , pages 8110–8119, 2020.Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj ̈ orn Om-mer. High-resolution image synthesis with latent diffusion models. In Proceedings ofthe IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 10684–10695, 2022.4 |
B2Cb5y2A6DJ | Medical Imaging with Deep Learning 2023Real-Time Quantitative Ultrasound and RadarKnowledge-Based Medical ImagingTom Sharon tom.sharon@weizmann.ac.iHila Naaman hila.naaman@weizmann.ac.iYonathan Eder yoni.eder@weizmann.ac.iYonina C. Eldar yonina.eldar@weizmann.ac.iFaculty of Math and Computer Science, Weizmann Institute of Science, Rehovot, IsrealAbstractUltrasound and radar signals are useful for medical imaging due to their non-invasive,non-radiative, low-cost, and accessible nature. However, traditional imaging techniqueslack resolution, contrast, and physical interpretation. Quantitative medical imaging isnecessary for this purpose, as it provides a visual representation of physical characteristics.However, current techniques have drawbacks, including convergence to local minima anddelayed results, which can lead to unsatisfactory outcomes. To address these limitations, wepropose a neural network that incorporates the symmetries and properties of the receivedsignals to achieve real-time quantitative mappings of physical properties. Our methodachieves high accuracy using several numerical metrics for complex shapes with less than0.15 seconds per test sample, compared to 0.75-2 hours for the competing method.Keywords: ultrasound, radar, quantitative imaging, U-net, FWI, ISP, channel data1. IntroductionUltrasound (US) and radar signals are two primary signals used in non-invasive and non-radiative medical imaging. US and radar imaging produce images based on the receivedsignals, referred to as Channel Data (CD), obtained from the received echoes (for US) ormicrowave signals (for radar) scattered by the scanned medium (Van Veen and Buckley,1988; Li et al., 2005), as illustrated in Fig. 1.a-c. However, current imaging techniques usingUS or radar signals provide limited physical interpretation, hence Quantitative MedicalImaging (QMI) is needed. QMI has the potential to visualize various Quantitative PhysicalProperties (QPPs) of the scanned medium, such as Speed-of-Sound (SoS), density, andrelative permittivity, which can be beneficial for diverse medical applications such as fattyliver diagnosis, and stroke imaging (Ruby et al., 2019; Ireland and Bialkowski, 2011).To obtain quantitative images, a non-linear Inverse Scattering Problem (ISP) mustbe solved for reconstructing the QPPs of the scanned medium using the received CD.Full Waveform Inversion (FWI) is one such method that utilizes an iterative optimizationalgorithm to solve the ISP (Shultzman and Eldar, 2022; Guasch et al., 2020). The goal ofFWI is to minimize the loss between the measured and predicted CD according to the QPPsestimation. However, FWI can be time-consuming and may converge to a local minimum.Furthermore, an initial guess close to the real solution based on some prior knowledge isrequired, which is often unavailable in realistic settings. To address these limitations, deeplearning techniques have been proposed as a potential solution for solving the ISP (Chen©2023 CC-BY 4.0, T. Sharon, H. Naaman, Y. Eder & Y.C. Eldar.Sharon Naaman Eder EldarFigure 1 : (a)-(b) depict the wave propagation from one antenna over two successive timesamples. (c) depicts a medium with 60 antennas surrounding a stroke-affected brain. (d)displays the training set creation process from simulation, utilizing the QPPs of the grid andknown pulse to obtain the channel data for network input. (e). QUARK-MI architecturefor both radar and US schemes. 
The channel number of each convolution block is displayed.et al., 2020; Wei and Chen, 2018; LeCun et al., 1998). Nevertheless, previous works onlyreconstructed one QPP and tested it on simple synthetic tests.Our contribution is twofold. First, we propose a Neural Network (NN) for real-time re-construction of multiple QPPs from measured CD. By incorporating the physical meaningof the input CD, our method achieves more accurate reconstructions and avoids convergingto a local minimum, as occurs in FWI. Second, we demonstrate the versatility of our pro-posed network by showing its ability to reconstruct physical properties using either radarsignals or US signals. Additionally, we demonstrate the network’s ability to reconstructmore than one property for each case, and to handle complex nonhomogeneous domainssuch as a realistic human brain with random stroke.2. Data and MethodWe introduce QUARK-MI (Quantitative-Ultrasound-and-Radar Knowledge-based MedicalImaging), a real-time NN solution that can accurately reconstruct the medium’s QPPsfrom either US or radar signals. QUARK-MI has a U-Net based architecture with skipconnections, stride convolution, and batch normalization, enabling the NN to learn fromdifferent signal channels and capture fine details while preserving information from the entireCD tensor. The input to the NN is the CD tensor consists of the receiving signals over timefor each transmission. Our training sets include simulations of one random object in asample with liver properties for the US case, and different training sets simulations for theradar case: MNIST digit (LeCun et al., 1998) in a random position and blood properties asthe scatter object, and realistic brain slices generated according to (Qureshi and Mustansar,2017), with a random stroke. We use the known wave propagation equations to create theCD inputs (Fig. 1.d).2Short TitleFigure 2 : (a) The comparison of QUARK-MI and FWI methods for the reconstruction withrespect to the Ground-Truth (GT). (b) The accuracy performance.Our approach integrates the CD physical interpretation into the network architecture.Each transmission channel offers complementary information about the QPPs and treatedas the convolution channels, while the time and receiving signal dimensions are treated asthe spatial image dimensions in traditional U-Net. Additionally, as illustrated in Fig.1.e,we employ derivable rescaling to achieve square spatial dimensions. The final convolutionblock in our model sums the transmission channels over the spatial dimensions, generatingtwo channels that depict two mappings of the medium’s QPPs.3. Evaluation, Results and ConclusionEvaluation and results We conducted a comprehensive evaluation of our QUARK-MImethod and compared it with the FWI algorithm using numerical metrics (NRMSE, PSNR,and SSIM), to assess the accuracy of object shape and position as well as quantitative valuesof pixels, as shown in Fig.2. Our NN accurately reconstructs QPPs from challenging USsignals scenarios, including two scattering objects with a uniform background (trained ononly one object in each sample), and a noisy medium (where FWI diverge). Additionally,for the radar case, we reconstructed non-defined objects from the MNIST dataset withQPPs, and a realistic brain slice with a generated stroke. QUARK-MI outperforms FWI inmost scenarios (highlighted in green), except for one case where FWI has as the initial guessthe homogeneous background value (highlighted in grey). 
Our NN shows a generalizationability to reconstruct multiple objects even though the training set contained only onerandom object in each image, and reconstruction of highly complex shapes, even in the caseof a human brain where the skull causes a significant decrease in signal quality, in whichboth the outline of the brain and the location and size of the stroke were restored withgreat success. Our method achieves high accuracy in complex and realistic objects, withless than 0.15 seconds per sample, compared to 0.75-2 hours for FWI.Conclusion In conclusion, our proposed NN that incorporates the symmetries and prop-erties of received signals can achieve real-time mappings of QPPs for medical imaging. Thismethod provides improved physical interpretation and can lead to the accomplishment ofnew clinical goals such as fast stroke imaging and cancer detection. With high accuracy andfast computational time, this approach has the potential to significantly impact the field ofmedical imaging.3Sharon Naaman Eder EldarAcknowledgmentThis research was supported by the European Research Council (ERC) under the Euro-pean Union’s Horizon 2020 research and innovation program (grant No. 101000967)andby the Israel Science Foundation (grant No. 3805/21) within the Israel Precision MedicinePartnership program.ReferencesXudong Chen, Zhun Wei, Maokun Li, and Paolo Rocca. A review of deep learning ap-proaches for inverse scattering problems (invited review). Progress In ElectromagneticsResearch , 167:67–81, 2020.Llu ́ ıs Guasch, Oscar Calder ́ on Agudo, Meng-Xing Tang, Parashkev Nachev, and MichaelWarner. Full-waveform inversion imaging of the human brain. NPJ digital medicine , 3(1):28, 2020.David Ireland and Marek Bialkowski. Microwave head imaging for stroke detection. ProgressIn Electromagnetics Research M , 21:163–175, 2011.Yann LeCun, L ́ eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learningapplied to document recognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Xu Li, Essex J Bond, Barry D Van Veen, and Susan C Hagness. An overview of ultra-wideband microwave imaging via space-time beamforming for early-stage breast-cancerdetection. IEEE Antennas and Propagation Magazine , 47(1):19–34, 2005.Awais Munawar Qureshi and Zartasha Mustansar. Levels of detail analysis of microwavescattering from human head models for brain stroke detection. PeerJ , 5:e4061, 2017.Lisa Ruby, Sergio J Sanabria, Katharina Martini, Konstantin J Dedes, Denise Vorburger,Ece Oezkan, Thomas Frauenfelder, Orcun Goksel, and Marga B Rominger. Breast cancerassessment with pulse-echo speed of sound ultrasound from intrinsic tissue reflections:Proof-of-concept. Investigative radiology , 54(7):419–427, 2019.Avner Shultzman and Yonina C Eldar. Nonlinear waveform inversion for quantitative ul-trasound. IEEE transactions on computational imaging , 8:893–904, 2022.Barry D Van Veen and Kevin M Buckley. Beamforming: A versatile approach to spatialfiltering. IEEE assp magazine , 5(2):4–24, 1988.Zhun Wei and Xudong Chen. Deep-learning schemes for full-wave nonlinear inverse scatter-ing problems. IEEE Transactions on Geoscience and Remote Sensing , 57(4):1849–1860,2018.4 |
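As a sketch of the channel-data layout described in this record, the snippet below treats the transmissions as convolution channels, rescales the time × receiver axes to a square grid, and collapses the features into two quantitative property maps with a 1×1 convolution, i.e. a learned weighted sum over the transmission channels; the exact summation and backbone used by the authors may differ. All sizes are assumptions apart from the 60 antennas mentioned in the figure caption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_tx = 60                                             # transmissions (antennas / elements)
channel_data = torch.randn(1, n_tx, 1024, 60)         # [B, transmissions, time, receivers] (assumed sizes)

# Transmissions act as convolution channels; time x receivers are the spatial axes,
# rescaled to a square grid before entering the U-Net backbone.
x = F.interpolate(channel_data, size=(256, 256), mode="bilinear", align_corners=False)

# Stand-in for the U-Net backbone output (kept at n_tx channels for this sketch).
backbone = nn.Conv2d(n_tx, n_tx, kernel_size=3, padding=1)

# Final block: collapse the transmission channels into two quantitative maps
# (e.g. speed-of-sound and density for the ultrasound case).
head = nn.Conv2d(n_tx, 2, kernel_size=1)
qpp_maps = head(backbone(x))                          # [B, 2, 256, 256]
```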
rE5kyC31IXQ | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023 submissionLearning Patient Rotation Using Synthetic X-ray Imagesfrom 3D CT VolumesWai Yan Ryana Fok1,2ryana.fok@siemens-healthineers.comAndreas Fieselmann1andreas.fieselmann@siemens-healthineers.comMagdalena Herbst1magdalena.herbst@siemens-healthineers.comDominik Eckert1,2dominik.eckert@siemens-healthineers.comMarcel Beister1marcel.beister@siemens-healthineers.comSteffen Kappler1steffen.kappler@siemens-healthineers.com1X-ray Products, Siemens Healthcare GmbH, Forchheim, GermanySylvia Saalfeld2sylvia.saalfeld@ovgu.de2Faculty of Computer Science, Otto-von-Guericke-University Magdeburg, GermanyAbstractDeep learning has become a standard method for pattern recognition in medical images,but curation of large-scale annotated clinical data is challenging due to scarcity or ethicalissues. Alternatively, synthetically generated data could supplementary be used to trainneural networks. In this work, we propose the novel training scheme that uses syntheticchest X-rays generated from 3D photon-counting CT volumes for quantifying the internalpatient rotation α. This can automatically inform the technician if and how re-exposure isneeded without the need of extensive image analysis. X-ray images were forward projectedwith a step size of 2◦rotation along patient axis. 1167 images and labels were trained ona modified DenseNet-121 to detect α. Results on 252 test images showed good correlationbetween true and predicted α, with R2= 0.992, with 95% confidence level of ≈ ±2◦.1Keywords: Synthetic Data, Patient Rotation Detection, Photon-counting CT, Chest X-ray1. IntroductionChest X-ray (CXR) is one of the most frequently acquired medical images. The preferredsetup is posterior-anterior (PA) CXR, where the patient is standing in front of the detector.However, for immobile patients, only anteroposterior (AP) CXR can be performed, wherethe detector is positioned behind the patient on the bed. It is not uncommon that the patientis rotated due to sickness or medical instruments. This rotation could lead to changes in lungdensity and trachea position, thus reducing diagnostic confidence. Currently, cardiothoracicratio and the clavicle-spine distance are used to determine if a CXR is rotated. However,such evaluation might require clinical expertise and hinder clinical workflow. Hence, analgorithm to quantify internal patient rotation is desired, which can automatically informthe technician if and how the re-exposure is needed.There is an emerging usage of realistic synthetic data for machine learning in medicine(Chen et al., 2021). Synthetic medical data generated by forward simulated models (Fok1. The work presented in this paper is not commercially available.©2023 CC-BY 4.0, W.Y.R. Fok, A. Fieselmann, M. Herbst, D. Eckert, M. Beister, S. Kappler & S. Saalfeld.Fok Fieselmann Herbst Eckert Beister Kappler Saalfeldet al., 2022), physical simulations (Moturu and Chang, 2018) or AI-driven generative models,helped improving learning performance. A CNN trained with synthetic X-ray using CT-derived airspace quantification achieved expert radiologist level of accuracy on real CXR(Barbosa Jr et al., 2021). Synthetic X-rays from generative networks (GAN) were used forlesion segmentation, landmark detection and surgical tool detection learning (Gao et al.,2023), which outperformed real-data-trained models due to the effectiveness of training ona larger dataset. 
However, GANs could be vulnerable to generalization and may fail toreproduce anatomically accurate images (Yi et al., 2019).In this study, we trained a network to estimate patient rotation using synthetic CXRgenerated by forward projecting from photon-counting CT volumes. Our proposed approachenables us to generate projections from different angles of the same CT volume, thus allowingfor the automatic generation of a large amount of training CXR and ground truth labels atthe same time. Moreover, these projections closely resemble real CXR as they are generatedfrom patient CT volumes. We hypothesize that the trained model would implicitly learnfeatures in chest rotation without the need for annotations such as cardiothoracic ratio orclavicle-spine distance.Figure 1: (a) Patient rotation along z-axis; (b) Proposed simulation of rotated (A,B) andnon-rotated (C) image; (c) Forward projection setup; (d) Examples of syntheticchest X-rays with no rotation 0◦, and maximum rotation at -20◦and 20◦.2. Methods and MaterialsSynthetic X-ray Generation A total of 80 photon-counting CT datasets were used,each with voxel size 0.5 ×0.5×0.7mm3, and≈1000 slices. Each CT volume underwent for-ward projection by ray tracing, which takes into account the cone-beam geometry of thesystem. X-rays are projected with angle αin range of [-20◦, 20◦], with a step size of 2◦and the central projection at 0◦. The X-ray source to patient distance is 150 cm, patient todetector distance is 30 cm, and the simulated detector is 1800 ×1800 pixels. Furthermore,standard radiographic image post-processing and cropping to the lung region were applied.X-ray simulation illustration and examples of generated images are shown in Figure 1.2Learning from Synthetic Medical X-rayNetwork and Experiment A total of 1680 synthetic X-ray images were generated from80 patients, each with 21 projections. Training, validation and testing consist of 1176, 252,and 252 images, respectively. All images were resized to 256 ×256 pixels, and intensitiesnormalized to [0, 1]. We used DenseNet-121 (Huang et al., 2017). We used hyperbolictangent function (Tanh) as the activation function in the final output layer, so to preservethe sign as our target labels consist of negative and positive values. We also map the outputvalues to the range of [-20, 20]. The model was trained on Nvidia RTX A40 GPU with batchsize of 16. Mean-squared error loss and Adam optimizer with learning rate of 0.01 wereused and early stopping at epoch 203.Figure 2: (a) The absolute error between αpredict andαtruealong training epochs; (b) Re-gression fit for αpredict andαtruein test data with coefficient of determinationR2and linear slope coefficients β; (c) Bland-Altman plot for the differences ofαpredict andαtruein test data. Red dashed line indicates mean difference, graydotted lines indicate 95% confidence interval.3. Results and DiscussionFrom Figure 2a, the median, 5thand 95thpercentile of absolute error between αpredict andαtruelevel off around zero after ≈150 epochs in training. On the test data (n = 252),the regression fit (Figure 2b) shows the range of prediction. Diagonal line and R2= 0.992indicate good correlation between αpredict andαtrue. In Figure 2c, the differences betweenαpredict andαtruescattered evenly across the mean difference = 0.0385◦, and close to the zeroline, which shows no bias. 
Most data points lies within the 95% confidence interval limits(mean ±1.96×standard deviation of the differences) at −2.25◦and 2.33◦, which agreeswell as our synthetic X-ray images were simulated with a 2◦step size. This also indicatesno systematical error in synthetic X-ray generation and the modeling of this learning task.Evaluation on real CXR will be the next step.4. ConclusionWe leveraged synthetically-generated images for learning the quantification of internal pa-tient rotation in CXR, as originally limited by the availability of rotated and labelled CXR.3Fok Fieselmann Herbst Eckert Beister Kappler SaalfeldReferencesEduardo J Mortani Barbosa Jr, Warren B Gefter, Florin C Ghesu, Siqi Liu, Boris Mailhe,Awais Mansoor, Sasa Grbic, and Sebastian Vogt. Automated detection and quantifi-cation of covid-19 airspace disease on chest radiographs: a novel approach achievingexpert radiologist-level performance using a deep convolutional neural network trainedon digital reconstructed radiographs from computed tomography-derived ground truth.Investigative radiology , 56(8):471–479, 2021.Richard J Chen, Ming Y Lu, Tiffany Y Chen, Drew FK Williamson, and Faisal Mahmood.Synthetic data in machine learning for medicine and healthcare. Nature Biomedical En-gineering , 5(6):493–497, 2021.Wai-Yan Ryana Fok, Martin Grashei, Jason G Skinner, Bjoern H Menze, and FranzSchilling. Prediction of multiple ph compartments by deep learning in magnetic reso-nance spectroscopy with hyperpolarized 13c-labelled zymonic acid. EJNMMI research ,12(1):24, 2022.Cong Gao, Benjamin D Killeen, Yicheng Hu, Robert B Grupp, Russell H Taylor, MehranArmand, and Mathias Unberath. Synthetic data accelerates the development of general-izable learning-based algorithms for x-ray image analysis. Nature Machine Intelligence ,pages 1–15, 2023.Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Denselyconnected convolutional networks. In Proceedings of the IEEE conference on computervision and pattern recognition , pages 4700–4708, 2017.Abhishek Moturu and Alex Chang. Creation of synthetic x-rays to train a neural networkto detect lung cancer. Journal Beyond Sciences Initiative, University of Toronto, inToronto , 2018.Xin Yi, Ekta Walia, and Paul Babyn. Generative adversarial network in medical imaging:A review. Medical image analysis , 58:101552, 2019.4 |
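For the rotation-regression setup in this record, a minimal sketch of a DenseNet-121 with a single Tanh-activated output scaled to the ±20° range, trained with MSE and Adam (learning rate 0.01) as described, is shown below. The torchvision weights argument and the replication of the single-channel input to three channels are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class RotationRegressor(nn.Module):
    """DenseNet-121 backbone with one Tanh output mapped to [-20, 20] degrees."""
    def __init__(self, max_angle=20.0):
        super().__init__()
        self.backbone = models.densenet121(weights=None)
        self.backbone.classifier = nn.Linear(self.backbone.classifier.in_features, 1)
        self.max_angle = max_angle

    def forward(self, x):
        # x: [B, 1, 256, 256] with intensities in [0, 1]; replicate to 3 channels
        # for the ImageNet-style stem (an assumed pre-processing choice).
        return torch.tanh(self.backbone(x.repeat(1, 3, 1, 1))) * self.max_angle

model = RotationRegressor()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
```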
hYnou0zo0PC | Medical Imaging with Deep Learning 2023Deep Learning based Automatic Segmentation of theLevator Ani Muscle from 3D Endovaginal Ultrasound ImagesAmad Qureshi∗1AQURESH@GMU.EDU1Department of Bioengineering, George Mason University, Fairfax, VA USANada Rabbat∗1NMOHAMAD@GMU.EDUKo-Tsung Hsu1KHSU5@GMU.EDUZara Asif1ZASIF3@GMU.EDUParag V. Chitnis1PCHITNIS@GMU.EDUAbbas Shobeiri2,1ABBAS.SHOBEIRI@INOVA.ORG2INOVA Fairfax Hospital, Fairfax, VA USAQi Wei1QWEI2@GMU.EDUAbstractThe Levator Ani Muscle (LAM) avulsion is a common side effect of vaginal childbirth and islinked to pelvic organ prolapse (POP) and other pelvic floor complications. Diagnosis andtreatment of these complications require imaging and examining the pelvic floor, which isa time-consuming process subject to operator variability. We proposed using deep learning(DL) to automatically segment LAM from 3D endovaginal ultrasound images (EVUS) toimprove diagnostic accuracy and efficiency. Over one thousand 2D axial images extractedfrom 3D EVUS data consisting of healthy subjects and patients with pelvic floor disorderswere utilized for LAM segmentation. U-Net, FD-U-Net, and Attention U-Net were applied.The U-Net-based models had 0.84-0.86 mean Dice score, which demonstrated efficacy com-pared to literature in LAM segmentation. Our study showed the feasibility of using U-Netand its variants for automated LAM segmentation and the potential of AI-based diagnostictools for improved management of pelvic floor disorders.Keywords: pelvic floor muscle, ultrasound imaging, deep learning, image segmentation1. IntroductionThe Levator Ani Muscle (LAM) is a funnel-shaped structure responsible for supporting thepelvic floor, along with providing functionality, such as allowing structures to pass throughit (Gowda and Bordoni, 2021). LAM avulsion, a common side effect of vaginal births,occurs in up to 35-36% of women after the first birth, causing pelvic organ prolapse (POP)and other pelvic floor disorders (Nygaard et al., 2008). Diagnosis of LAM avulsion andPOP involves imaging the pelvic floor, usually through Magnetic Resonance Imaging orUltrasound (US) imaging, the latter of which is more cost-effective (Woodfield et al., 2010).The interpretation of ultrasound is a challenging task, where the diagnosis can take weeks.To overcome the issues, we propose the use of deep learning (DL) segmentation methods,to automatically segment the LAM from 3D endovaginal ultrasound data – which has yetto be performed on such images – to improve diagnostic accuracy and reduce the diagnosticturnover time for patients.∗Contributed equally©2023 CC-BY 4.0, A. Qureshi, N. Rabbat, K.-T. Hsu, Z. Asif, P.V. Chitnis, A. Shobeiri & Q. Wei.Qureshi Rabbat Hsu Asif Chitnis Shobeiri Wei2. Methods2.1. Dataset, Preprocessing and PreparationThe 3D UVUS images were obtained in previously conducted work approved by the Insti-tutional Review Ethics Committee (IRB) of the INOVA Health System (Asif et al., 2023).The dataset consists of 1015 2D axial images and LAM traces of 512x512 size from bothhealthy subjects (n=14) as well as patients with different degrees of pelvic floor deficiency(n=13). Several pre-processing steps were performed on the images to prepare the data forDL-based segmentation as shown in Figure 1. The healthy and unhealthy image data wereindependently split by 85% for training and 15% for testing, which resulted in combined862 training images and 153 test images.Figure 1: Flowchart of the 2D axial US image pre-processing steps2.2. 
DL Model ConfigurationOur paper explores several DL-segmentation models: U-Net, FD-U-Net, and Attention U-Net. The U-Net is a convolutional neural network (CNN) which uses a contracting path tocapture context of the data and symmetric expanding path to obtain precise localization(Ronneberger et al., 2015). FD-U-Net is an extension to the U-Net in which dense connec-tivity into the contracting and expanding paths of network is applied (Guan et al., 2020).Attention U-Net uses attention gates to the encoding path to focus on target features ofimage (Oktay et al., 2018). All models were implemented with TensorFlow on a Lambdaworkstation with NVIDIA RTX A5000 GPU. Each model was trained in about 16min over50 epochs with a batch size of 16. Predicted masks were subjected to Intersection overUnion (IoU) and Dice accuracy metrics assessment.3. ResultsOur proposed study utilized U-Net, FD-U-Net, and Attention U-Net models for segmentingLAM from EVUS images. The visual results (Figure 2) show that all three models werevisually similar to the ground truth (Fig. 2B). Our U-Net model (Table 1) achieved a Dice2DL-Based Segmentation of Levator Ani Muscle from 3D UVUSscore of 0.86 on the test data, with FD-U-Net and Attention U-Net achieving generallysimilar results. Furthermore, when compared to other published studies on LAM segmen-tation, the U-Net variants that our study implemented, especially the standard U-Net hadpromising results. Although comparisons were based on different datasets, such compara-tive analysis is important, as the proposed procedure is one of the few that dealt with LAMsegmentation, and one of the only, to our knowledge, performed on EVUS.Table 1: Comparison of DL-based LAM segmentationImaging Segmentation Number of Segmentation Mean MeanStudy Modality Region Images Method(s) Dice IoUU-Net 0.86 0.76Proposed EVUS LAM 1015 FD-U-Net 0.84 0.74Attention U-Net 0.85 0.75Noort,2021 TPUS LAM 100 Recurrent U-Net 0.65 -Feng,2020 MRI LAM 528 CNN + MRFP -0.61 -(van den Noort et al., 2021) (Feng et al., 2020)Figure 2: Example LAM segmentation results. (A) Raw EVUS image; (B) Expert tracedLAM; (C) U-Net; (D) FD-U-Net; (E) Attention U-Net4. ConclusionWe investigated the feasibility of using DL to segment LAM from clinical EVUS images.We found that the U-Net-based segmentation models outperform the models used in theliterature to accurately segment the LAM. This study has highlighted the potential of usingU-Net and its variants for the automatic segmentation of pelvic floor structures in EVUSand potentially other imaging modalities. It also has the potential of being implemented inAI-based diagnostic tools for improved management of pelvic floor disorders, especially inlow socioeconomic regions, where these conditions may be underdiagnosed or misdiagnosed.AcknowledgmentsThis project is funded by the Inova-GMU Research Fund and National Institute of Health(NIH) Grant: NIH EY029715.3Qureshi Rabbat Hsu Asif Chitnis Shobeiri WeiReferencesZara Asif, Roni Tomashev, Veronica Peterkin, Qi Wei, Jonia Alshiek, Baumfeld Yael, andS Abbas Shobeiri. Levator ani muscle volume and architecture in normal vs. muscle dam-age patients using 3d endovaginal ultrasound: a pilot study. International UrogynecologyJournal , 34(2):581–587, 2023.Fei Feng, James A. Ashton-Miller, John O. L. DeLancey, and Jiajia Luo. Convolu-tional neural network-based pelvic floor structure segmentation using magnetic resonanceimaging in pelvic organ prolapse. Medical Physics , 47(9):4281–4293, July 2020. doi:10.1002/mp.14377. 
URL https://doi.org/10.1002/mp.14377 .Supreeth N Gowda and Bruno Bordoni. Anatomy, abdomen and pelvis, levator ani muscle.InStatPearls [Internet] . StatPearls Publishing, 2021.Steven Guan, Amir A. Khan, Siddhartha Sikdar, and Parag V. Chitnis. Fully dense UNetfor 2-d sparse photoacoustic tomography artifact removal. IEEE Journal of Biomedicaland Health Informatics , 24(2):568–576, February 2020. doi: 10.1109/jbhi.2019.2912935.URL https://doi.org/10.1109/jbhi.2019.2912935 .Ingrid Nygaard, Matthew D Barber, Kathryn L Burgio, Kimberly Kenton, Susan Meikle,Joseph Schaffer, Cathie Spino, William E Whitehead, Jennifer Wu, Debra J Brody, et al.Prevalence of symptomatic pelvic floor disorders in us women. Jama , 300(11):1311–1316,2008.Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, KazunariMisawa, Kensaku Mori, Steven McDonagh, Nils Y Hammerla, Bernhard Kainz, et al. At-tention u-net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999 ,2018.Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks forbiomedical image segmentation. In Medical Image Computing and Computer-AssistedIntervention–MICCAI 2015: 18th International Conference, Munich, Germany, October5-9, 2015, Proceedings, Part III 18 , pages 234–241. Springer, 2015.Frieda van den Noort, Beril Sirmacek, and Cornelis H Slump. Recurrent u-net for automaticpelvic floor muscle segmentation on 3d ultrasound. arXiv e-prints , pages arXiv–2107,2021.Courtney A Woodfield, Saravanan Krishnamoorthy, Brittany S Hampton, and Jeffrey MBrody. Imaging pelvic floor disorders: trend toward comprehensive mri. American Jour-nal of Roentgenology , 194(6):1640–1649, 2010.4 |
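As a supplement to the metric assessment described in Section 2.2 of the study above, the following is a minimal sketch (assumed binary masks, not the authors' code) of the Dice and IoU scores used to compare predicted LAM masks with the expert traces:

# Minimal sketch of Dice and IoU for a pair of binary segmentation masks.
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """pred and target are binary masks of identical shape (e.g., 512 x 512)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (union + eps)
    return float(dice), float(iou)

# Hypothetical usage on one predicted / ground-truth mask pair.
pred_mask = np.zeros((512, 512), dtype=np.uint8)
pred_mask[100:200, 100:220] = 1
gt_mask = np.zeros((512, 512), dtype=np.uint8)
gt_mask[110:210, 100:200] = 1
print(dice_and_iou(pred_mask, gt_mask))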
bVC9bi_-t7Y | Medical Imaging with Deep Learning 2023Dilation-Erosion Methods for Radiograph Annotation inTotal Knee ReplacementYehyun Suh1,2yehyun.suh@vanderbilt.eduAleksander Mika3aleksander.mika@vumc.orgJ. Ryan Martin3john.martin@vumc.orgDaniel Moyer∗1,2daniel.moyer@vanderbilt.edu1Department of Computer Science, Vanderbilt University, Nashville, TN, USA2Vanderbilt Institue for Surgery and Engineering, Nashville, TN, USA3Department of Orthopaedic Surgery, Vanderbilt University Medical Center, Nashville, TN, USAAbstractIn the present work we describe a novel training scheme for automated radiograph anno-tation, as used in post-surgical assessment of Total Knee Replacement. As we show exper-imentally, standard off-the-shelf methods fail to provide high accuracy image annotationsfor Total Knee Replacement annotation. We instead adopt a U-Net based segmentationstyle annotator, relax the task by dilating annotations into larger label regions, then pro-gressively erode these label regions back to the base task on a schedule based on trainingepoch. We demonstrate the advantages of this scheme on a dataset of radiographs withgold-standard expert annotations, comparing against four baseline cases.Keywords: Label Augmentation, X-Ray, Landmark Annotation1. IntroductionTotal Knee Replacement (TKR) is a standard therapy for advanced knee arthritis (Kimet al., 2021; Evans et al., 2019). Post-surgical evaluation of TKR relies partially on radio-graphs of the patient’s knee and implant, and the alignment of that implant to the femurand tibia. These assessments are made by the manual placement of markers by orthopedicclinicians, and may be made at regular outpatient follow-up visits in the months after theprocedure. In particular, measurements of implant alignment during recovery and laterregular use are decision criteria for possible additional correction procedures (in the eventof adverse implant positioning/alignment). Automation of this marker placement is a cleartarget for learned medical vision systems. The benefits of automated marker placementfor TKR are also clear: assessment of radiographs without expert intervention, possiblyfor in-the-field point-of-care assessment, or for reducing assessment loads when assessingretrospective studies of large databases.However, application of off-the-shelf models results in sub-standard performance. Weshow that neither direct regression using convolutional architectures(Krizhevsky et al., 2012;He et al., 2016), nor pre-trained convolutional networks (with fine-tuning), nor conventionalU-Net(Ronneberger et al., 2015) methods produce acceptable accuracy when applied toTKR annotation tasks. We instead propose a Dilation/Erosion label augmentation methodand corresponding training schedule which improves performance of a U-Net based method,taking high baseline errors (100+ pixels) to single-digit pixel errors. We include a shortdiscussion with hypotheses for why standard methods fail, and current directions for furtherexpansion to other Total Knee Replacement radiograph domains.©2023 CC-BY 4.0, Y. Suh, A. Mika, J.R. Martin & D. Moyer.Suh Mika Martin MoyerFigure 1: At Left we show the RMSE between predicted pixel and the ground truth pixelfor labels 0 thru 5 for our test cohort. At right we show the training loss (lefty-axis) and RMSE (right y-axis) across epochs with exemplar predicted outputs(red) and ground truth labels (blue) for Epoch 0, 99, and 249. 
Drastic changes in the training loss (e.g., at epochs 150 and 200) are due to the training schedule for erosion and re-weighting.

2. Method

Initial attempts at TKR annotation show poor performance (e.g., the baselines shown in Section 3). We here construct a dilation-erosion label augmentation method that improves the U-Net methods' performance, using the same general architecture and gradient-based learning. Image labels are first dilated by a set number of image dilation iterations. These dilated labels are allowed to overlap. The prediction network (a U-Net) is trained using the dilated labels. Labels are then eroded over a schedule based on the training steps taken.

Because we adjust the size of each label as training progresses, the label imbalance also changes. A common solution for static imbalance is to re-weight the error function, biasing predictions away from degenerate solutions. For the dynamic imbalance induced by our scheduled erosion, we construct a dynamic reweighting scheme, where w0 is the label weight that would have been used had we not performed dilation-erosion:

w̃ = w0 × [input image size − (number of dilated pixels + number of label pixels)] / (number of dilated pixels + number of label pixels)    (1)

3. Experiments & Results

Our dataset consists of 180 post-operative knee radiographs, which we split into training/validation sets of 162/18 images. Each image was annotated by a clinician for six anatomic landmarks. Images and the corresponding label masks were padded to a standard size (512×512) and histogram-normalized to [0,1].

We compared the performance of several baseline methods: a Convolutional Neural Network, optionally with Positional Encoding concatenated as input; ResNet-101 pretrained with ImageNet (Deng et al., 2009); and a baseline U-Net architecture. The U-Net architecture was also trained using our proposed Dilation/Erosion method and reweighting scheme. We also train ablations of these proposed methods in order to test the efficacy of each sub-component, and variations of the component parameters.

Table 1: Mean RMSE per label and overall, across validation images, for each method. D## is the number of dilation iterations applied at initialization, AW indicates Adaptive Weighting, and ProE ## is the number of erosion steps applied every 50 epochs. Boldface values indicate best performance by category.

Experiment       Label0  Label1  Label2  Label3  Label4  Label5  Mean
CNN              227     73      40      115     59      236     149
CNN w/PE         233     72      31      113     55      245     151
ResNet-101       233     66      27      109     50      240     148
Baseline U-Net   348     267     234     239     264     376     295
D40              185     18      18      17      15      22      77
D40 AW           21      15      18      20      17      21      19
D40 AW ProE 5    16      8       9       10      2       9       12
D60              20      17      18      29      20      23      23
D60 AW           26      21      22      18      20      26      24
D60 AW ProE 5    15      17      16      17      15      19      17
D60 AW ProE 10   9       7       10      9       9       9       10

For training the U-Net architectures we use a pixel-wise cross-entropy loss. As a validation metric we use the RMSE of the pixel-wise distance from the most likely pixel (the maximal output logit value) to the ground-truth label position. For training the other baseline methods we use the validation metric (RMSE of pixel-wise distance) directly.

As shown in Table 1, the proposed training method performs well in comparison to all baselines, both architecturally similar and dissimilar: the mean RMSE across labels decreased from 149 to 10. We plot the per-subject mean error in Figure 1, left. In Figure 1, right, we show the training dynamics of the proposed method.

4.
Conclusion & DiscussionIn this paper we introduced a method to improve automated radiograph annotations usingthe Dilation/Erosion method, as well as a custom weighting scheme. We have also shownthat direct regression or conventional U-Net methods surprisingly does not perform wellin TKR assessment. We hypothesize that this may be due to the relatively small dataset,and the fact that many common augmentations (rotations, some flips, etc.) are impossiblesince these transformations cannot be applied in a similar way to the label volumes withoutbreaking the knee radiograph context (all knees are imaged from known/chosen angles andviews). In future work we plan on expanding our dataset and evaluations to lateral viewsand annotations, and to assess inter-rater reliability in order to determine the noise ceilingof prediction accuracy.3Suh Mika Martin MoyerReferencesJia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision andPattern Recognition , pages 248–255, 2009. doi: 10.1109/CVPR.2009.5206848.Jonathan T Evans, Robert W Walker, Jonathan P Evans, Ashley W Blom, Adrian Sayers,and Michael R Whitehouse. How long does a knee replacement last? a systematic reviewand meta-analysis of case series and national registry reports with more than 15 yearsof follow-up. The Lancet , 393(10172):655–663, 2019. ISSN 0140-6736. doi: https://doi.org/10.1016/S0140-6736(18)32531-5. URL https://www.sciencedirect.com/science/article/pii/S0140673618325315 .Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning forimage recognition. In Proceedings of the IEEE conference on computer vision and patternrecognition , pages 770–778, 2016.Tae Woo Kim, Seung-Baik Kang, Chong Bum Chang, Sun-Young Moon, Young-Kyun Lee,and Kyung-Hoi Koo. Current trends and projected burden of primary and revision totalknee arthroplasty in korea between 2010 and 2030. The Journal of Arthroplasty , 36(1):93–101, 2021. ISSN 0883-5403. doi: https://doi.org/10.1016/j.arth.2020.06.064. URLhttps://www.sciencedirect.com/science/article/pii/S0883540320307294 .Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deepconvolutional neural networks. In F. Pereira, C.J. Burges, L. Bottou, and K.Q. Wein-berger, editors, Advances in Neural Information Processing Systems , volume 25. CurranAssociates, Inc., 2012. URL https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf .Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks forbiomedical image segmentation. CoRR , abs/1505.04597, 2015. URL http://arxiv.org/abs/1505.04597 .4 |
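To make the dilation/erosion schedule and the adaptive re-weighting of Eq. (1) above concrete, here is a minimal sketch. It is my own paraphrase rather than the released training code, and it assumes that eroding a previously dilated single-pixel landmark label is equivalent to dilating the original label by the currently scheduled number of iterations; function and argument names are placeholders.

# Minimal sketch of the dilation/erosion label schedule (e.g., D40 with ProE 5)
# and the adaptive weight of Eq. (1).
import numpy as np
from scipy.ndimage import binary_dilation

def scheduled_label(label, start_dilation, erode_per_step, epoch, epochs_per_step=50):
    """Return the label mask dilated by the number of iterations scheduled for this epoch."""
    iters = max(start_dilation - erode_per_step * (epoch // epochs_per_step), 0)
    mask = np.asarray(label).astype(bool)
    return binary_dilation(mask, iterations=iters) if iters > 0 else mask

def adaptive_weight(base_weight, dilated_mask):
    """Eq. (1): rescale the label weight by the background / foreground pixel ratio."""
    foreground = int(dilated_mask.sum())   # dilated pixels + original label pixels
    total = dilated_mask.size              # input image size (e.g., 512 * 512)
    return base_weight * (total - foreground) / max(foreground, 1)

# Hypothetical usage for one landmark channel at epoch 120 of a D40 AW ProE 5 run.
label = np.zeros((512, 512), dtype=np.uint8)
label[256, 256] = 1
mask = scheduled_label(label, start_dilation=40, erode_per_step=5, epoch=120)
print(mask.sum(), adaptive_weight(1.0, mask))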
rnmab4CQN_ | Progressive Learning for Physics-informed NeuralMotion PlanningRuiqi Ni and Ahmed H. QureshiDepartment of Computer Science, Purdue University{ni117,ahqureshi }@purdue.eduStart Intermediate GoalFig. 1: Physics-informed neural motion planning of a 6-DOF robot manipulator in a real-world narrow passage environment.The images from left to right show the robot’s motion sequence from its start to the desired goal configuration. In this case,the proposed approach took 0.05 seconds, whereas LazyPRM* took 2.79 seconds to find a path, making our method at least50×faster than a traditional approach.Abstract —Neural motion planners (NMPs) demonstrate fastcomputational speed in finding path solutions but require ahuge amount of expert trajectories for learning, thus addinga significant training computational load. In contrast, recentadvancements have also led to a physics-informed NMP approachthat directly solves the Eikonal equation for motion planning anddoes not require expert demonstrations for learning. However,experiments show that the physics-informed NMP approachperforms poorly in complex environments and lacks scalabilityin high-dimensional real robot settings. To overcome these limita-tions, this paper presents a novel and tractable Eikonal equationformulation and introduces a new progressive learning strategy totrain neural networks without expert data in complex, cluttered,high-dimensional robot motion planning scenarios. We show thatour approach scales to the real robot set up in a narrow passageenvironment. The proposed method’s videos and code implemen-tations are available at https://github.com/ruiqini/P-NTFields.I. I NTRODUCTIONRobots moving in their surrounding environment must findtheir feasible motion trajectory coordinating their actuatorsto move from their start configuration to goal configura-tion while satisfying all the constraints, such as collisionavoidance. Various approaches exist, from classical methods[16, 19, 7, 9, 3, 6] to learning-based neural motion planners(NMPs) [12, 13, 5, 11, 8, 1], that solve motion planningproblems.Inspired by physics-informed deep learning models [15, 17]and Fast Marching Method (FMM) [16, 19] for motionplanning, recent development has led to a physics-informedNMP called Neural Time Fields (NTFields) [10] that requireno expert training trajectories and instead directly learn tosolve the Eikonal equation for motion planning. Once trained,NTFields output the speed and time fields in the given en-vironment for the desired start and goal configuration. Timefields’ gradients are then followed to retrieve the feasible pathsolution for the underlying MP problem. Although NTFieldsfind path solutions extremely fast and require no expert data,they struggle in complex environments and do not scale wellto high-dimensional planning problems. These limitations aremainly due to the following two reasons. First, the Eikonalequation formulation has an extremely sharp feature solutionaround low-speed obstacles, making it difficult for the under-lying deep-learning model to converge and perform well incomplex scenarios. 
Second, training deep neural models tosolve PDEs is inherently challenging and requires advancedlearning strategies and an expressive PDE formulation with asmooth loss landscape.Therefore, this paper addresses the limitations of NTFieldsand proposes a new progressive learning method, which alsorequires no training trajectories and scales very well to com-plex scenarios, including high-dimensional, real-world robotmanipulator planning problems. The main contributions of thepaper are summarized as follows:•We highlight that the Eikonal equation formulation formotion planning in NTFields can converge to incorrectlocal minimums during training, resulting in relativelylow performance and incapability to scale to complexenvironments.•We introduce a novel progressive speed scheduling strat-egy that iteratively guides neural model training from aconstant high speed to a very low speed around obstaclesin the environment, preventing incorrect local minimumswhen training physics-informed NMPs in complex, clut-tered environments.•We propose using the viscosity term [2] based on theLaplacian operator in the Eikonal equation formulation totransform its ill-posed, non-linear behavior into a semi-linear elliptic representation with a unique smooth solu-tion around low-speed obstacles. Our novel formulationleads to physics-informed NMPs that are scalable tocomplex scenarios.•We also demonstrate our framework performance usinga 6 degree-of-freedom (DOF) UR5e robot in solvingreal-world narrow passage motion planning problems, asshown in Fig. 1.II. B ACKGROUNDThis section formally presents the background to robotmotion planning problems and their solutions through physics-informed NMPs.A. Robot Motion PlanningLet the robot’s configuration and environment space bedenoted as Q ⊂ RdandX ⊂ Rm, where {m, d} ∈Nrepresents their dimensionality. The obstacles in the environ-ment, denoted as Xobs⊂ X , form a formidable robot con-figuration space (c-space) defined as Qobs⊂ Q . Finally, thefeasible space in the environment and c-space is representedasXfree =X\X obsandQfree =Q\Q obs, respectively.The objective of robot motion planning algorithms is to finda trajectory τ⊂ Q free that connects the given robot startqs∈Qfree and goal qg∈Qfree configurations. Furthermore,additional constraints are sometimes imposed on the trajectoryconnecting the start and goal, such as having the shortestEuclidean distance or minimum travel time. The latter is oftenpreferred as it allows imposing speed constraints near obstaclesfor robot and environment safety. However, planning underspeed constraints is computationally expensive, and existingmethods rely on path-smoothing techniques when safety isdesired.B. Physics-informed Motion Planning FrameworkRecent development led to a physics-informed motion plan-ning framework called Neural Time Fields (NTFields) [10],which provide a computationally-efficient and demonstration-free deep learning method for motion planning problems. Itviews motion planning problems as the solution to a PDE,specifically focusing on solving the Eikonal equation. TheEikonal equation, a first-order non-linear PDE, allows findingthe shortest trajectory between start ( qs) and goal ( qg) underspeed constraints by relating a predefined speed model S(q)at configuration qgto the arrival time T(qs, qg)from qstoqgas follows:1/S(qg) =∥∇qgT(qs, qg)∥ (1)The∇qgT(qs, qg)is the partial derivative of the arrival timeT(qs, qg)function with respect to qg. 
Therefore, finding atrajectory connecting the given start and goal requires solvingthe PDE under a predefined speed model and arrival timefunction. The arrival time function in NTFields is factorizedas follows:T(qs, qg) =∥qs−qg∥/τ(qs, qg) (2)Theτ(qs, qg)is the factorized time field which is the output ofNTFields’ deep neural network for the given qsandqg. Sincethe neural network in NTfields outputs the factorized timefieldτ, the corresponding predicted speed is computed usingthe above equation. Furthermore, the NTField frameworkdetermines the ground truth speed using a predefined speedfunction:S∗(q) =sconstdmax×clip(d(p(q),Xobs), dmin, dmax) (3)where d(·,·)is the minimal distance between robot surfacepoints p(q)at configuration qand the environment obstaclesXobs. The dmin, and dmax are minimum and maximum dis-tance thresholds, and the sconst is a predefined speed constant;we normalize sconst = 1 to represent the maximum speed inthe free space, and smin=sconst×dmin/dmax represents theminimum speed in the obstacle space. Finally, the NTFieldsneural framework is trained end-to-end using a isotropic lossfunction between predicted Sand ground truth S∗speeds.III. P ROPOSED METHODAlthough NTFields demonstrate the ability for efficient mo-tion planning without expert training data, it exhibits relativelylow success rates in complex, cluttered environments, includ-ing high-dimensional problems. We observed that these limita-tions are mainly because of the ill-posed nature of the Eikonalequation and that the physics-informed loss landscapes arehard to optimize in general. To overcome these limitations,we introduce a new progressive learning algorithm comprisinga novel viscosity-based Eikonal equation formulation and aprogressive speed update strategy to train physics-informedNMPs in complex, high-dimensional scenarios.A. Viscosity-based Eikonal EquationThe Eikonal equation’s exact solution has several problemsthat lead to neural network fitting issues. First, the solutionis not differentiable at every point in space, which means aneural network cannot approximate the solution very well,especially for the sharp feature in low-speed environments.Second, the gradient ∇qgT(qs, qg)is not unique at these non-smooth points, which will also cause the neural network fittingissue because training is based on the supervision of thegradient ∇qgT(qs, qg).To fix these problems, we propose to use a viscosity termthat can provide a differentiable and unique approximationof the Eikonal equation’s solution. The viscosity term comesfrom the vanishing viscosity method [2]. It adds the Laplacian∆qgT(qs, qg)to the Eikonal equation, i.e.,1/S(qg) =∥∇qgT(qs, qg)∥+ε∆qgT(qs, qg), (4)where ε∈Ris a scaling coefficient. The resulting system inEq. 4 is a semi-linear elliptic PDE with a smooth and unique0.0 0.2 0.4 0.6 0.8 1.0FMM ε= 0.001 ε= 0.01 ε= 0.1Fig. 2: Effect of viscosity coefficient, ε, on the correctness oftime field results. It can be seen a large value of εdeviatesfrom the solution given by the expert. The expert is FMMwhich finds a solution to the Eikonal equation. The colorbarshows the speed fields range from 0 to 1.solution. Furthermore, the value of εaffects the smoothnessof the predicted time fields. In Fig 2, we compare fields withdifferent values of εto the ground truth field generated withthe FMM approach. It can be seen that by varying the ε, thecorrectness of results varies compared to the ground truth.In practice, when the coefficient ε→0, the smooth andunique solution of Eq. 
4 will approach the exact solution ofthe Eikonal equation Eq. 1.B. Progressive speed schedulingThis section introduces our progressive speed scheduling ap-proach to train physics-informed motion planners in complexenvironments. The physics-based loss functions are generallychallenging to optimize as they depend on the gradient ofthe underlying neural network. In physics-informed motionplanners, the optimization becomes more difficult due to low-speed conditions near obstacles, often leading to an incorrectlocal minimum, i.e., despite small training loss, the neuralmodel behaves as if low-speed obstacles do not exist in theenvironment. To circumvent the incorrect local minimums,we observe and leverage the following two properties ofthe Eikonal equation to progressively guide the NN trainingprocess and capture the low-speed obstacle space for collisionavoidance.First, we notice the solution of the Eikonal equation (Eq.1),T(qs, qg), in a constant max speed scene ( S(q) = 1 )will become the distance between the given start and goal,which leads to trivial solution τ(qs, qg) = 1 . Second, we findthat the interpolation from the constant max-speed to the lowspeed around obstacles is continuous, and the solutions of theEikonal equation along those interpolations are also continu-ous. Based on these observations, we propose a progressivespeed alteration strategy that gradually scales down the speedfrom a constant max value to a low value around obstaclesusing a parameter α(t)∈[0,1], i.e.,S∗α(t)(q) = (1 −α(t)) +α(t)S∗(q), (5)where t∈Nrepresent the training epochs. Therefore, whenα(t) = 0 , the scene will have a constant max speed, and theEikonal equation solution will be trivial. Furthermore, whenα(t) = 1 , the scene will have low speed around obstacles. Fig3 shows the gradual progression of speed and time fields asαlinearly scales from 0 to 1. It can be seen that the speed0.0 0.2 0.4 0.6 0.8 1.0α= 0.0 α= 1/3 α= 2/3 α= 1.0 −→ −→ −→Fig. 3: Progressively decreasing the speed around obstaclesusing parameter αleads to continuous interpolation of speedand time fields in the given environment. The colorbar showsthe speed fields range from 0 to 1.and time fields are changing continuously with αchanginglinearly.To train the physics-informed motion planner, we start witha low value of α(t)and let NN fit a constant speed trivialsolution. Next, we progressively interpolate the field fromconstant max speed to low speed by gradually increasingtheα(t)over the training epochs. The NN can easily fit thetrivial solution. Then progressively decreasing obstacle speedS∗(q)guides the network to learn the interpolating lower-speed fields. Furthermore, we also observe that the speed fieldschange linearly with α(t), but the resulting time fields changemore aggressively. Thus, we also reduce the rate of change ofα(t)as the training epochs increase.C. Neural ArchitectureThis section describes our neural framework, as shown inFig. 4, for generating the speed and time fields for solving therobot motion planning problems. Our framework comprisesthe following modules. Given the robot’s initial ( qs) andtarget (qg)configurations, we use random Fourier featuresγ[18, 14] for obtaining high-frequency robot configurationembeddings. These features are further processed into a latentembedding by a C-space encoder f(·), which is a ResNet-style multi-layer perception [4]. To combine features f(γ(qs))andf(γ(qg)), we use the non-linear symmetric operatorNfrom NTFields method [10], i.e. 
f(γ(qs))Nf(γ(qg)) =[max( f(γ(qs)), f(γ(qg))),min(f(γ(qs)), f(γ(qg)))].Our time field generator network gis a ResNet-style multi-layer perceptron which takes the encodingf(γ(qs))Nf(γ(qg))and outputs the factorized time fieldτ(qs, qg) =g(f(γ(qs))Nf(γ(qg))). Given the τ(qs, qg), wecompute its gradient and Laplacian to determine the S(qs)andS(qg). Finally, we propose a smooth isotropic objectivefunction 6 to train our framework.L(S∗α(q), S(q)) =S∗α(qs)S(qs)+S(qs)S∗α(qs)+S∗α(qg)S(qg)+S(qg)S∗α(qg)−4(6)D. Planning pipelineOnce trained, we use the execution pipeline similar to theNTFields method. First, we predict τ(qs, qg)for the givenstartqs, goal qg. Next, the factorized time, τ, parameterizesEq. 2 and 1 for computing time T(qs, qg)and speed fieldsS(qs), S(qg), respectively. Finally, the path solution is deter-mined in a bidirectional manner by iteratively updating the1/S=∥∇T∥+ε∆Tα= 0.4−→ α= 0.7−→ α= 1.0∇TSymmetricOperatorTime FieldSpeed Fieldqsqg γ(·)C-SpaceEncoderf(·)Time FieldGeneratorg(·)Fig. 4: The neural architecture comprises the Fourier-based C-space Encoder, symmetric operator, and time-field generator.Three images on the top left show we progressively decreasethe speed around a bunny-shaped obstacle to guide the neuralnetwork training. The image on the top right shows the finaltime field from start to goal generated by the trained model.start and goal configurations as follows,qs←qs−βS2(qs)∇qsT(qs, qg)qg←qg−βS2(qg)∇qgT(qs, qg)(7)The parameter β∈Ris a predefined step size. Furthermore,at each planning iteration, the start and goal configurationsare updated using gradients to march toward each other until∥qs−qg∥< d g, where dg∈R.IV. EVALUATIONIn this section, we evaluate our method through the 6-DOFUR5e robot manipulator planning in two complex cabinetenvironments with narrow passages. For these scenarios, wepresent evaluations in both simulation and real-world.In the simulation, we directly load a cabinet mesh, whereas,for real setup, we use Dot3D with RealSense camera to scanand create a point cloud of an actual cabinet. To form our testset, we randomly sampled 2 ×100 start and goal configurationpairs for simulated and real-world environments.The table in Fig. 5 compares our method, NTField, RRT*,Lazy-PRM*, and RRT-Connect in both scenarios. We excludeIEF3D due to large data generation and training times. In thetable, it can be seen that our method achieves the highestsuccess rate with the shortest execution time, demonstratingthe effectiveness of our progressive learning approach incomplex, narrow passage environments.Fig. 5 shows the execution of our method (left) and RRT-Connect (right) in a challenging case in the simulated environ-ment and the table underneath presents the overall statisticalcomparison of the indicated methods on the testing dataset.In the presented scenario, the UR5e robot’s end effector startsfrom the middle shelf of the cabinet and crosses two relativelythin obstacles to the bottom shelf of the cabinet withoutcollision. In this particular situation, NTField could not find asolution whereas our method took 0.07 seconds to get a 0.83length path with a safe margin of 0.03, and RRT-Connect took20.13 seconds to get a 0.90 length path with a safe marginof 0.02. For real-world experiments, in Fig. 
1, we show a challenging path in which the robot moves from its initial pose so that its end-effector reaches deep into the cabinet.

Manipulator    time (sec)   length       safe margin   sr (%)
Ours           0.03±0.00    0.43±0.10    0.04±0.00     92.0
NTFields       0.05±0.00    0.38±0.06    0.04±0.00     84.5
RRT*           5.16±0.01    0.52±0.36    0.04±0.00     67.0
LazyPRM*       2.79±0.48    0.76±0.80    0.04±0.00     86.0
RRT-Connect    1.08±0.69    1.14±0.23    0.02±0.00     87.5

Fig. 5: Our method (left) and RRT-Connect (right) in a challenging case in the simulated environment: the manipulator crosses two relatively thin obstacles to move from the middle (start) to the bottom (goal) shelf. The table shows statistical results on 2×100 different starts and goals for two environments.

V. DISCUSSIONS, CONCLUSIONS, AND FUTURE WORK

We propose a novel progressive learning framework to train physics-informed NMPs by solving the Eikonal equation without expert demonstrations. Our method addresses the PDE-solving challenges of physics-informed NMPs such as NTFields [10]. First, we propose a progressive speed scheduling strategy that begins by finding a simple PDE solution at constant high speed and then gradually decreases the speed near obstacles to find a new solution. Second, we propose to use the viscosity term for the Eikonal equation, converting a nonlinear PDE into a semi-linear PDE that is easier for a neural network to solve. Thus our method solves the Eikonal equation more precisely and efficiently and achieves better overall performance on motion planning problems than prior methods. Additionally, thanks to our progressive learning strategy, our method requires fewer neural network parameters than NTFields, leading to computationally efficient training and planning for physics-informed NMPs. Furthermore, we demonstrate that our method scales to complex scenarios, such as real-world narrow-passage planning with a 6-DOF UR5e manipulator.

Although our method scales to complex real-world setups and outperforms prior methods that rely on expert demonstration data, a few limitations, highlighted in the following, will remain the focus of our future research directions. First, our method cannot generalize to unseen environments. Therefore, one of our future directions will be to explore novel environment encoding strategies to make physics-informed NMPs generalize to novel, never-before-seen environments. Lastly, aside from addressing these limitations, we also aim to explore novel PDE formulations to train physics-informed NMPs to solve motion planning under dynamic and manifold constraints.

REFERENCES

[1] Devendra Singh Chaplot, Deepak Pathak, and Jitendra Malik. Differentiable spatial planning using transformers. In International Conference on Machine Learning, pages 1484–1495. PMLR, 2021.
[2] Michael G Crandall and Pierre-Louis Lions. Viscosity solutions of Hamilton-Jacobi equations. Transactions of the American Mathematical Society, 277(1):1–42, 1983.
[3] Jonathan D Gammell, Siddhartha S Srinivasa, and Timothy D Barfoot. Informed RRT*: Optimal sampling-based path planning focused via direct sampling of an admissible ellipsoidal heuristic. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2997–3004. IEEE, 2014.
[4] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[5] Brian Ichter, James Harrison, and Marco Pavone. Learning sampling distributions for robot motion planning.
In2018 IEEE International Conference on Robotics andAutomation (ICRA) , pages 7087–7094. IEEE, 2018.[6] Lucas Janson, Edward Schmerling, Ashley Clark, andMarco Pavone. Fast marching tree: A fast marchingsampling-based method for optimal motion planning inmany dimensions. The International journal of roboticsresearch , 34(7):883–921, 2015.[7] Sertac Karaman and Emilio Frazzoli. Sampling-based al-gorithms for optimal motion planning. The internationaljournal of robotics research , 30(7):846–894, 2011.[8] Rahul Kumar, Aditya Mandalika, Sanjiban Choudhury,and Siddhartha Srinivasa. Lego: Leveraging experiencein roadmap generation for sampling-based planning. In2019 IEEE/RSJ International Conference on IntelligentRobots and Systems (IROS) , pages 1488–1495. IEEE,2019.[9] Steven M LaValle, James J Kuffner, BR Donald, et al.Rapidly-exploring random trees: Progress and prospects.Algorithmic and computational robotics: new directions ,5:293–308, 2001.[10] Ruiqi Ni and Ahmed H Qureshi. NTFields: Neural timefields for physics-informed robot motion planning. InInternational Conference on Learning Representations ,2023. URL https://openreview.net/forum?id=ApF0dmi19K.[11] Ahmed H Qureshi and Michael C Yip. Deeply informedneural sampling for robot motion planning. In 2018IEEE/RSJ International Conference on Intelligent Robotsand Systems (IROS) , pages 6582–6588. IEEE, 2018.[12] Ahmed H Qureshi, Anthony Simeonov, Mayur J Bency,and Michael C Yip. Motion planning networks. In 2019International Conference on Robotics and Automation(ICRA) , pages 2118–2124. IEEE, 2019.[13] Ahmed Hussain Qureshi, Yinglong Miao, Anthony Sime-onov, and Michael C Yip. Motion planning networks:Bridging the gap between learning-based and classicalmotion planners. IEEE Transactions on Robotics , 37(1):48–66, 2020.[14] Ali Rahimi and Benjamin Recht. Random featuresfor large-scale kernel machines. Advances in neuralinformation processing systems , 20, 2007.[15] Maziar Raissi, Paris Perdikaris, and George E Karni-adakis. Physics-informed neural networks: A deep learn-ing framework for solving forward and inverse problemsinvolving nonlinear partial differential equations. Journalof Computational physics , 378:686–707, 2019.[16] James A Sethian. A fast marching level set methodfor monotonically advancing fronts. Proceedings of theNational Academy of Sciences , 93(4):1591–1595, 1996.[17] Jonathan D Smith, Kamyar Azizzadenesheli, andZachary E Ross. Eikonet: Solving the eikonal equationwith deep neural networks. IEEE Transactions on Geo-science and Remote Sensing , 59(12):10685–10696, 2020.[18] Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, SaraFridovich-Keil, Nithin Raghavan, Utkarsh Singhal, RaviRamamoorthi, Jonathan Barron, and Ren Ng. Fourier fea-tures let networks learn high frequency functions in lowdimensional domains. Advances in Neural InformationProcessing Systems , 33:7537–7547, 2020.[19] Alberto Valero-Gomez, Javier V Gomez, Santiago Gar-rido, and Luis Moreno. The path to efficiency: Fastmarching method for safer, more efficient mobile robottrajectories. IEEE Robotics & Automation Magazine , 20(4):111–120, 2013. |
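A small illustration of the speed model in Eq. (3) and the progressive schedule of Eq. (5) from the paper above may help. This is a sketch under my own assumptions (placeholder distance thresholds and a simple linear ramp for α(t), whereas the authors additionally slow the ramp late in training), not the released implementation:

# Minimal sketch of the clipped distance-based speed (Eq. 3) and its progressive
# interpolation from constant maximum speed to the obstacle-aware speed (Eq. 5).
import numpy as np

def ground_truth_speed(dist_to_obstacle, d_min=0.05, d_max=0.5, s_const=1.0):
    """Eq. (3): speed proportional to the clipped robot-to-obstacle distance.
    d_min, d_max, and s_const are placeholder values."""
    return (s_const / d_max) * np.clip(dist_to_obstacle, d_min, d_max)

def scheduled_speed(dist_to_obstacle, alpha):
    """Eq. (5): (1 - alpha) * constant max speed + alpha * obstacle-aware speed."""
    return (1.0 - alpha) + alpha * ground_truth_speed(dist_to_obstacle)

def alpha_at(epoch, total_epochs):
    """Hypothetical schedule: ramp alpha linearly to 1 over the first half of training."""
    return min(2.0 * epoch / total_epochs, 1.0)

distances = np.linspace(0.0, 1.0, 5)  # toy distances from robot surface points to obstacles
print(scheduled_speed(distances, alpha_at(epoch=100, total_epochs=400)))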
heTTJfkTQC | Robotic Manipulation Learning with EquivariantDescriptor Fields: Generative Modeling,Bi-equivariance, Steerability, and LocalityJiwoo Kim∗†, Hyunwoo Ryu∗‡, Jongeun Choi§‡¶Joohwan Seo¶, Nikhil Prakash¶, Ruolin Li¶, R. Horowitz¶†School of Electrical and Electronic Engineering,‡Department of Artificial Intelligence,§School of Mechanical Engineering, Yonsei University, Seoul, Republic of KoreaEmails: {nfsshift9801, tomato1mule, jongeunchoi }@yonsei.ac.kr¶Department of Mechanical Engineering, University of California, Berkeley, CA, USAEmails: {joohwan seo, nikhilps, ruolin li, horowitz }@berkeley.edu∗Equal ContributionAbstract —Conventional end-to-end visual robotic manipula-tion learning methods often face challenges related to data ineffi-ciency and limited generalizability. To mitigate these challenges,recent works have proposed incorporating equivariance intotheir designs. This paper presents a fresh perspective on thedesign principles of SE(3)-equivariant methods for end-to-endvisual robotic manipulation learning. Specifically, we examinethe recently introduced concept of Equivariant Descriptor Fields(EDFs), focusing on four key underlying principles: generativemodeling, bi-equivariance, steerable representation, and locality.These principles enable EDFs in achieving impressive dataefficiency and out-of-distribution generalizability, even in theabsence of prior knowledge. By comparing EDFs with othercontemporary equivariant methods based on the four criteria,this paper underscores the importance of these design principlesand aims to establish a guiding framework for future researchonSE(3)-equivariant robotic manipulation.I. I NTRODUCTIONRecently, equivariant methods have gained notable attentiondue to their data efficiency, robustness and generalizability.Incorporating equivariance has shown promising results invarious fields, including protein [15, 11], molecule [12, 4],3D object segmentation [17, 6], shape reconstruction [1, 2],and reinforcement learning [27, 18, 31].For learning manipulation tasks, the prerequisite for numer-ous demonstrations and rollouts [8, 14, 7, 39, 16] is a criticalweakness. Recent works reveal that incorporating equivariancecan improve data efficiency and generalizability. The SE(2)-equivariance (planar roto-translation equivariance) has beenused to improve the efficiency of behavior cloning [40, 13, 21]and reinforcement learning methods [30, 28, 29, 41] for planartasks. For highly spatial tasks, the SE(3)-equivariance (spatialroto-translation equivariance) is required. Neural DescriptorFields (NDFs) [23] and their variants [24, 3] leverage thisproperty to achieve remarkable data efficiency and generaliz-ability. However, they cannot be end-to-end trained; instead,they require pre-training and object segmentations.To overcome this challenge, Equivariant Descriptor Fields(EDFs) [20] has been proposed. EDFs are end-to-end trainablemodels for SE(3)-equivariant visual manipulation learning.Different from previous SE(3)-equivariant methods, EDFsare capable of learning manipulation tasks from only a fewdemonstrations without requiring any prior knowledge, suchas pre-training and object segmentation.In this paper, we examine the four key design principles ofEDFs and compare them with other recent works. By doingso, we seek to offer a novel perspective that can pave theway for subsequent studies on equivariant methods for roboticmanipulation learning.II. 
P RELIMINARIES : REPRESENTATION THEORYA representation Dis a map from a group Gto an invertiblematrix GL(N)∈RN×Nthat satisfies D(g)D(h) =D(gh)for every g, h∈ G. In particular, any representation of SO(3)can be expressed as a block-diagonal matrix composed of realWigner D-matrices by a change of basis. A real Wigner D-matrix Dl(R)∈R(2l+1)×(2l+1)of degree l∈ {0,1,2, ...}areorthogonal matrices that are irreducible , meaning that theycannot be block-diagonalized anymore. Therefore, Wigner D-matrices constitute the building blocks of any representationsofSO(3). A type-lvector is a ( 2l+ 1)-dimensional vectorthat is transformed by Dl(R)under rotation R∈SO(3).Type- 0vectors are invariant to rotations (i.e. scalars) such thatD0(R) =I. On the other hand, type- 1vectors are rotatedaccording to the 3D rotation matrices, that is, D1(R) =R.LetObe the set of all possible colored point clouds. Apoint cloud is given by O={(xi, ci) :i∈ I} , where xi∈R3andci∈R3are point i’s position and color. A type- lvectorfield f:R3× O → R2l+1generated by O∈ O isSE(3)-equivariant if Dl(R)f(x|O) =f(gx|g·O),∀g= (p, R)∈SE(3), x, p∈R3, O∈ O andg·O={(gxi, ci) :i∈ I} .III. E QUIVARIANT DESCRIPTOR FIELDS :THEFOUR KEYMODEL PROPERTIESIn what follows, we will delve into EDFs and comparethem with other equivariant models, focusing on the fourkey principles, viz., generative modeling ,bi-equivariance ,steerable representation andlocality (see Table I).TABLE I: Comparison of recently proposed equivariant methods for robotic manipulation learning.Method Bi-Equivariance Locality Steerable Generative End-to-endLeft Equiv. Right Equiv. Representations Modeling TrainingTransporter Networks [40] SE(2) Translation ○␣ Invariant × ○␣Equivariant Transporter Networks [13] SE(2) SE(2) ○␣ Equivariant × ○␣Equivariant RL (SAC/DQN) [28, 29, 30] SE(2) Z2 ○␣ Equivariant × ○␣NDFs [23] SE(3) × × Invariant × ×L-NDFs [3] SE(3) × ○␣ Invariant × ×R-NDFs [24] SE(3) SE(3) × Invariant × ×EDFs [20] SE(3) SE(3) ○␣ Equivariant ○␣ ○␣A. Generative ModelingIn practice, expert demonstration policies for robotic manip-ulation tasks are rarely unimodal. To illustrate this, consider amug-picking task. The human expert may occasionally chooseto grasp the mug by the rim and at other times by the handle.To properly learn such multimodalities, generative modeling isrequired for the policy distributions [19] (see Fig. 1). As shownin Fig. 1, naively regressing or discretizing the policy results insuboptimal policy distributions. On the other hand, generativemodels such as energy-based models (EBMs) and diffusionmodels capture the behavior more accurately. EDFs utilizeEBMs’ approach to model the policy distribution, enablingboth end-to-end training and sampling. This is in contrastto the energy minimization method used by NDFs variants[23, 24, 3], which requires frozen pre-trained networks.The EDFs’ energy-based policy conditioned by the pointcloud observations of the scene Osceneand the grasped objectOgraspis defined on the SE(3)manifold asP(g|Oscene, Ograsp) =exp[−E(g|Oscene, Ograsp)]Zwhere Z=ZSE(3)dgexp[−E(g|Oscene, Ograsp)],(1)where Eis an energy function which will be defined later.B. Bi-equivarianceTo successfully perform object picking tasks, it is crucialfor the end-effector pose to be equivariant to changes in theinitial pose of the target object within the scene . To illustratethis scene equivariance , consider a task in which the end-effector pose gWE∈SE(3)in the world frame Wshouldbe inferred from the observation of the scene Oscene. 
Here,gWE:= (pWE, RWE)∈SE(3)denotes the specification of theconfiguration of the end-effector frame Erelative to W. Now,consider a new world frame W′. The reference frame change∆gW=gW′W∈SE(3)induces the following transformationsin the scene observation and end-effector pose.OsceneW′= ∆gW·OsceneW =gW′W·OsceneWgW′E= ∆gWgWE=gW′WgWEThe corresponding equivariant probabilistic policy1Pagainst∆gthen must satisfyP(∆gWgWE|∆gW·OsceneW) =P(gW′E|OsceneW′)1The equivariant probabilistic policy implies invariance of the conditionalprobabilities when the state and action are equivariantly transformedObs. op(a|o) MSE DiscretizedEBM.DiffusionFig. 1: Comparison of behavior cloning methods: Generative models (EBM and Diffu-sion) accurately capture multimodal behaviors of the oracle policy p(a|o)compared toregression (MSE) or discretized methods. Reproduced with authors’ permission [19].Since the perturbation ∆gWappears on the left side of g,we refer to this scene equivariance as left equivariance . Weillustrate left equivariance in Fig. 2.However, as it turns out, left equivariance alone is insuf-ficient to successfully perform object placing tasks. Unlikepicking tasks, which only require observing the scene, placingtasks also requires the observation of the grasp, which addsanother layer of complexity to the problem. Furthermore,the grasp pose inferred by a pick policy learned from afew expert demonstrations may not be optimal. As a result,the grasped object may be in a pose that has never beenshown by the expert demonstrations. Hence, object placingtasks require another type of equivariance, namely the graspequivariance . Consider the same object pose Bbeing graspedin two different manners, respectively EandE′. LetOgraspE bethe observation of the object grasped by an end-effector withframe E. We assume that frame Bis attached to the graspedobject such that gEBis the pose of Brelative to frame E. Atransformation of the grasped object pose due to a change ∆gbetween end-effector frames EandE′, as shown in Fig. 3,induces the transformed observation relative to frame E′:OgraspE′= ∆gE·OgraspE =gE′E·OgraspE.To keep the relative pose between the scene and the graspedobject invariant for equivariance of the probabilistic policy, theend-effector pose must be transformed by ∆gEsuch thatgWB=gWEgEB=gWE ′gE′B=gWE ′gE′EgEB=gWE ′∆gEgEB⇒gWE ′=gWE∆g−1EA probabilistic policy Punder such an equivariance requiresP(gWE∆g−1E|OsceneE,∆gE·OgraspE)=P(gWE ′|OsceneE′, OgraspE′)Left equivariance (Scene Equivariance)ExyzWEWxyzW’Δgg=ggWW′WWggWWWWggWW′WW=ggWW′WWggWWWW=ΔggggWWWWggWWWWFig. 2: The left equivariance illustrates that the target pose is equivariant to thetransformation of the scene , as such the perturbation ∆gis on the left of g.Notice that such a grasp equivariance is a right equivariancesince the inverse of the perturbation ∆g−1Eappears on theright side of g. We illustrate the right equivariance in Fig. 3.Combining both the left and right equivariances, we finallydefine bi-equivariance [20] as follows.P(g|Oscene, Ograsp)=P(∆gWg|∆gW·Oscene, Ograsp)=P(g∆g−1E|Oscene,∆gE·Ograsp)(2)Among SE(2)-equivariant methods, Transporter Networks[40] and recently proposed equivariant reinforcement learn-ing methods [28, 29, 30] are left equivariant, but not fullyright equivariant (only translation equivariant). On the otherhand, Equivariant Transporter Networks [13] incorporate fullSE(2)bi-equivariance, thereby achieving significant increasein data efficiency over Transporter Networks. 
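Before turning to the SE(3) methods discussed next, the bi-equivariance condition of Eq. (2) can be checked numerically on a toy example. The sketch below is my own illustrative construction, not the EDFs energy of the original paper: it uses a simple nearest-neighbour squared-distance energy between the posed grasp point cloud and the scene point cloud, which is bi-equivariant by construction, and verifies that a left perturbation of the scene and a right perturbation of the grasp leave the energy unchanged.

# Toy numerical check of the bi-equivariance property in Eq. (2).
import numpy as np
from scipy.spatial.transform import Rotation

def apply(g, pts):            # g = (R, p) acting on an N x 3 point cloud
    R, p = g
    return pts @ R.T + p

def compose(g1, g2):          # group product g1 * g2
    R1, p1 = g1
    R2, p2 = g2
    return (R1 @ R2, R1 @ p2 + p1)

def inverse(g):
    R, p = g
    return (R.T, -R.T @ p)

def energy(g, scene, grasp):  # sum of squared nearest-neighbour distances
    moved = apply(g, grasp)
    d2 = ((moved[:, None, :] - scene[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

rng = np.random.default_rng(0)
random_pose = lambda: (Rotation.from_rotvec(rng.normal(size=3)).as_matrix(), rng.normal(size=3))
scene, grasp = rng.normal(size=(50, 3)), rng.normal(size=(10, 3))
g, dg_w, dg_e = random_pose(), random_pose(), random_pose()

e0 = energy(g, scene, grasp)
e_left = energy(compose(dg_w, g), apply(dg_w, scene), grasp)             # scene perturbed on the left
e_right = energy(compose(g, inverse(dg_e)), scene, apply(dg_e, grasp))   # grasp perturbed on the right
print(np.isclose(e0, e_left), np.isclose(e0, e_right))

Both checks print True; an analogous unit test is a convenient sanity check when implementing a learned bi-equivariant energy such as Eq. (3).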
Among SE(3)-equivariant methods, Neural Descriptor Fields (NDFs) [23]and Local Neural Descriptor Fields (L-NDFs) [3] are uni-equivariant methods. Since NDFs and L-NDFs assume afixed placement target pose, bi-equivariance is not required.However, to solve more general tasks such as object rear-rangement tasks, bi-equivariance becomes essential. RelationalNeural Descriptor Fields (R-NDFs) [24] are a bi-equivariantmethod for object rearrangement tasks. However, pre-trainedNDFs and a human annotated object keypoint are required toequivariantly align query points for the training.On the other hand, EDFs [20] directly infer query points us-ing an SE(3)-equivariant query density model that can be end-to-end trained. EDFs achieve bi-equivariance for the policy in(1) with a bi-equivariant energy function E(g|Oscene, Ograsp).The specific design of this energy function will be introducedsubsequently.C. Steerable RepresentationTo achieve robust equivariant manipulation, a model mustutilize symmetric feature representations from the observa-tions. Steerable representations are proficient in representingthese features due to their orientation sensitivity [33] (seeFig. 4). Moreover, due to continuous expressions, steerablerepresentations acquire rigorous information compared to thediscretization methods and demonstrate better precision asevidenced by [1].Importantly, compared to rotation invariant features, steer-able features are superior in encoding the orientations of localgeometries. To encode orientation information using rotationinvariant features, they must be spatially distributed, breakingRight equivariance (Gripper EquivariancEE’MAINΔgg=ggEEEEEB BxyzWxyzWFig. 3: The right equivariance implies that the target pose is equivariant to the graspstate, in which the perturbation ∆gis located on the right of the g.locality. For example, the color vector (red, green, blue) issuch a rotation invariant feature. To determine the rigid-bodyorientation, at least three non-collinear points of differentcolors are required. Conversely, one can represent orientationwith only a single point, using rotation equivariant, or steerablefeatures. Thus, orientation information can be localized into asingle point, better capturing the local geometry. This makesthe learned features more generalizable and less sensitive todisturbances.Transporter Networks [40] and Neural Descriptor Fieldsvariants [23, 24, 3] utilize rotation invariant feature fields toobtain equivariance (e.g., Feature map of CNNs can be thoughtof as 2-dimensional feature fields). Alternatively, Huang et al.[13], Wang et al. [30, 28, 29] utilize the steerable features oftheCngroup (discretized SO(2)group), thereby significantlyimproving data efficiency.An EDF φ(x|O)is defined as the concatenation of NSO(3)-steerable vector fields that are SE(3)-equivariantφ(x|O) =NMn=1φ(n)(x|O)where φ(n)(x|O) :R3×O → R2ln+1is anSE(3)-equivarianttype-lnvector field generated by O. Therefore, φ(x|O)istransformed according to g= (p, R)∈SE(3)asφ(gx|g·O) =D(R)φ(x|O)=Dl1(R)··· ∅.........∅ ··· Dln(R)φ(x|O)where D(R)is a block diagonal of Wigner D-matrices.TheSE(3)bi-equivariant energy function for the EBM inEq. (1) can be constructed with EDFs asE(g|Oscene, Ograsp) =ZR3d3x ρ(x|Ograsp)∥φ(gx|Oscene)−D(R)ψ(x|Ograsp)∥2(3)where φ(x|Oscene)is the key EDF ,ψ(x|Ograsp)is the queryEDF , and the ρ(x|Ograsp)is the query density , which are allSE(3)-equivariant and learnable neural fields.D. 
LocalityFor a robotic manipulation model to be robust, it must beable to pick and place objects in previously unseen poses.Fig. 4: Visualization of type-lfeatures (l= 0,1,2, ...) for a sphere (top), airplane(middle), and table (bottom). Higher-type features are sensitive to the orientations of localgeometries such as planes and corners. Reproduced with the authors’ permission [1].If the model can learn local geometric structures that areshared across different objects, it would greatly increase itsgeneralizability. For example, if a model was trained to pick amug by holding the rim, the similarities in the local geometricfeatures can be utilized to grasp other objects by the rim.Consequently, locality is critical for generalizability and dataefficiency. Recent studies in various fields such as robotics[3], point cloud segmentation [6], and shape reconstruction [1]highlight the importance of incorporating locality in equivari-ant methods.Another benefit of imposing locality to equivariant methodsis that the target object does not require to be segmented fromthe backgrounds. For unsegmented observations, only equiv-ariance to the target object is desired, and the equivarianceto backgrounds must be suppressed. We name this propertyaslocal equivariance , in contrast to global equivariance (seeFig. 5). However, naively applying Eq. (2) can only guaranteeglobal equivariance. Therefore, special care must be taken indesigning methods to respect the locality of the tasks so as toobtain local equivariance.For example, Transporter networks and their variants [40,21, 13] naturally exploit the locality of convolutional neuralnetworks. Therefore, Transporter Networks and their variantscan be used without object segmentation pipelines or any otherobject centric assumptions. On the other hand, NDFs [23] andR-NDFs [24] rely on centroid subtraction methods to achievetranslational equivariance. Due to the highly non-local natureof centroid subtraction, these methods require the target objectto be segmented from the background.EDFs utilize a Tensor Field Network (TFN) [25] model forthe final layer and SE(3)-transformers [10] in other layers.These methods rely on spatial convolutions, enabling the easyacquisition of locality by using convolution kernels with finitesupport. This is in contrast to the Vector Neurons [5] methodthat were used for NDFs and R-NDFs.We provide more details on the training, sampling, and theimplementation details in Appendix A. Mathematical proofscan be found in the original paper of EDFs [20].IV. E XPERIMENTAL RESULTSTo evaluate the EDFs’ generalization performance withother methods, Ryu et al. [20] conducted experiments with amug-hanging task and a bowl/bottle pick-and-place task. Theobjective is to pick a mug or bowl/bottle and place it on arandomly posed hanger or plate. For the evaluation, multipleLocal equivarianceGlobal equivariancexyzxyzΔggxyzΔggFig. 5: The difference of global equivariance and local equivariance. The globalequivariance represents the translation of the whole scene, while the local equivariancedenotes the translation of the target object.scenarios including unseen poses, unseen distracting objects,and unseen instances in randomized poses were used.First, EDFs were compared with SE(3)Transporter Net-works [40], which are the extensions to the original Trans-porter Networks that regress additional degrees of freedom(height, roll, pitch). 
Table II of Appendix B shows thatEDFs out-preform Transporter Networks in all three tasks.By comparing the results, EDFs turn out to be more robustthan Transformer Networks, illustrating the significance of theSE(3)-equivariance when it comes to highly spatial tasks.In comparing EDFs to NDFs [23], it was necessary toaccount for some of NDFs’ limitations such as the factthat NDFs require segmentations and a fixed pose of theplacement target. Thus, EDFs were compared against an NDF-like constructed baseline model, which uses only the type-0descriptor features. From Table III of Appendix B, EDFs,which use higher type descriptors, surpass the performance ofthe NDF-like model. Additional experimental descriptions andresults can be found in Appendix B and the original paper [20].V. C ONCLUSIONWe introduce EDFs and emphasize the importance of thefollowing four properties: 1) generative modeling, 2) bi-equivariance, 3) steerable representations, and 4) locality; inorder to synthesize noteworthy equivariant robotic manipula-tion learning models. We demonstrate the effectiveness andthe generalization of EDFs in inferring the target pose in spiteof previously unseen instances, unseen poses, and distractingobjects using only a few demonstrations.For future research, it could be beneficial to integrateSE(3)-equivariant shape reconstruction and SLAM methods[38, 1, 2, 9] with EDFs to overcome incomplete and noisypoint cloud observations. Expanding EDFs to trajectory-levelproblem is also an important issue. For kinematic and dy-namic trajectory planning, one might consider incorporat-ing guided diffusion methods [26] and geometric impedancecontrol framework [22] respectively. Lastly, to improve thespeed of the MCMC sampling required for EDFs, techniquessuch as amortized sampling [36, 32] and cooperative learning[34, 35, 37] could be explored.ACKNOWLEDGMENTSThis work was supported by the National ResearchFoundation of Korea (NRF) grants funded by the Ko-rea government (MSIT) (No.RS-2023-00221762 and No.2021R1A2B5B01002620). This work was also partially sup-ported by the Korea Institute of Science and Technology(KIST) intramural grants (2E31570).REFERENCES[1] Evangelos Chatzipantazis, Stefanos Pertigkiozoglou,Edgar Dobriban, and Kostas Daniilidis. SE(3)-equivariant attention networks for shape reconstructionin function space. In The Eleventh International Con-ference on Learning Representations , 2023. URL https://openreview.net/forum?id=RDy3IbvjMqT.[2] Yunlu Chen, Basura Fernando, Hakan Bilen, MatthiasNießner, and Efstratios Gavves. 3d equivariant graphimplicit functions. In Computer Vision–ECCV 2022: 17thEuropean Conference, Tel Aviv, Israel, October 23–27,2022, Proceedings, Part III , pages 485–502. Springer,2022.[3] Ethan Chun, Yilun Du, Anthony Simeonov, TomasLozano-Perez, and Leslie Kaelbling. Local neural de-scriptor fields: Locally conditioned object representationsfor manipulation. arXiv preprint arXiv:2302.03573 ,2023.[4] Gabriele Corso, Hannes St ̈ark, Bowen Jing, ReginaBarzilay, and Tommi Jaakkola. Diffdock: Diffusion steps,twists, and turns for molecular docking, 2023.[5] Congyue Deng, Or Litany, Yueqi Duan, Adrien Poule-nard, Andrea Tagliasacchi, and Leonidas Guibas. Vectorneurons: a general framework for SO(3)-equivariant net-works. arXiv preprint arXiv:2104.12229 , 2021.[6] Congyue Deng, Jiahui Lei, Bokui Shen, Kostas Dani-ilidis, and Leonidas Guibas. Banana: Banach fixed-point network for pointcloud segmentation with inter-partequivariance. 
arXiv preprint arXiv:2305.16314 , 2023.[7] Coline Devin, Payam Rowghanian, Chris Vigorito, WillRichards, and Khashayar Rohanimanesh. Self-supervisedgoal-conditioned pick and place, 2020.[8] Yuqing Du, Daniel Ho, Alexander A. Alemi, Eric Jang,and Mohi Khansari. Bayesian imitation learning for end-to-end mobile manipulation, 2022.[9] Jiahui Fu, Yilun Du, Kurran Singh, Joshua B Tenenbaum,and John J Leonard. Neuse: Neural se (3)-equivariant em-bedding for consistent spatial understanding with objects.arXiv preprint arXiv:2303.07308 , 2023.[10] Fabian B. Fuchs, Daniel E. Worrall, V olker Fischer, andMax Welling. SE(3)-transformers: 3d roto-translationequivariant attention networks. In Advances in NeuralInformation Processing Systems 34 (NeurIPS) , 2020.[11] Octavian-Eugen Ganea, Xinyuan Huang, CharlotteBunne, Yatao Bian, Regina Barzilay, Tommi S. Jaakkola,and Andreas Krause. Independent SE(3)-equivariantmodels for end-to-end rigid protein docking. In Inter-national Conference on Learning Representations , 2022.URL https://openreview.net/forum?id=GQjaI9mLet.[12] Jiaqi Guan, Wesley Wei Qian, Xingang Peng, Yufeng Su,Jian Peng, and Jianzhu Ma. 3d equivariant diffusion fortarget-aware molecule generation and affinity prediction,2023.[13] Haojie Huang, Dian Wang, Robin Walters, and RobertPlatt. Equivariant transporter network. arXiv preprintarXiv:2202.09400 , 2022.[14] Mohi Khansari, Daniel Kappler, Jianlan Luo, Jeff Bing-ham, and Mrinal Kalakrishnan. Action image representa-tion: Learning scalable deep grasping policies with zeroreal world data, 2020.[15] Jae Hyeon Lee, Payman Yadollahpour, Andrew M.Watkins, Nathan C Frey, Andrew Leaver-Fay, StephenRa, Kyunghyun Cho, Vladimir Gligorijevi ́c, Aviv Regev,and Richard Bonneau. Equifold: Protein structure predic-tion with a novel coarse-grained structure representation.bioRxiv , 2023.[16] Jeong-Hoon Lee and Jongeun Choi. Hierarchical primi-tive composition: Simultaneous activation of skills withinconsistent action dimensions in multiple hierarchies.IEEE Robotics and Automation Letters , 7(3):7581–7588,2022.[17] Jiahui Lei, Congyue Deng, Karl Schmeckpeper, LeonidasGuibas, and Kostas Daniilidis. Efem: Equivariant neuralfield expectation maximization for 3d object segmenta-tion without scene supervision. In Proceedings of theIEEE/CVF Conference on Computer Vision and PatternRecognition , pages 4902–4912, 2023.[18] Arnab Kumar Mondal, Pratheeksha Nair, and KaleemSiddiqi. Group equivariant deep reinforcement learning,2020.[19] Tim Pearce, Tabish Rashid, Anssi Kanervisto, DaveBignell, Mingfei Sun, Raluca Georgescu, Sergio Valcar-cel Macua, Shan Zheng Tan, Ida Momennejad, KatjaHofmann, and Sam Devlin. Imitating human behaviourwith diffusion models, 2023.[20] Hyunwoo Ryu, Hong in Lee, Jeong-Hoon Lee, andJongeun Choi. Equivariant descriptor fields: SE(3)-equivariant energy-based models for end-to-end visualrobotic manipulation learning. International Conferenceon Learning Representations (ICLR) , 2023.[21] Daniel Seita, Pete Florence, Jonathan Tompson, ErwinCoumans, Vikas Sindhwani, Ken Goldberg, and AndyZeng. Learning to rearrange deformable cables, fabrics,and bags with goal-conditioned transporter networks. In2021 IEEE International Conference on Robotics andAutomation (ICRA) , pages 4568–4575. IEEE, 2021.[22] Joohwan Seo, Nikhil Potu Surya Prakash, AlexanderRose, and Roberto Horowitz. Geometric impedancecontrol on SE(3) for robotic manipulators. 
InternationalFederation of Automatic Control (IFAC) World Congress ,2023.[23] Anthony Simeonov, Yilun Du, Andrea Tagliasac-chi, Joshua B. Tenenbaum, Alberto Rodriguez, PulkitAgrawal, and Vincent Sitzmann. Neural descriptor fields:SE(3)-equivariant object representations for manipula-tion. arXiv preprint arXiv:2112.05124 , 2021.[24] Anthony Simeonov, Yilun Du, Lin Yen-Chen, AlbertoRodriguez, Leslie P. Kaelbling, Tomas L. Perez, andPulkit Agrawal. SE(3)-equivariant relational rearrange-ment with neural descriptor fields. In Conference onRobot Learning (CoRL) . PMLR, 2022.[25] Nathaniel Thomas, Tess Smidt, Steven Kearnes, LusannYang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor fieldnetworks: Rotation- and translation-equivariant neuralnetworks for 3d point clouds, 2018.[26] Julen Urain, Niklas Funk, Jan Peters, and Georgia Chal-vatzaki. SE(3)-diffusionfields: Learning smooth costfunctions for joint grasp and motion optimization throughdiffusion. IEEE International Conference on Roboticsand Automation (ICRA) , 2023.[27] Elise van der Pol, Daniel Worrall, Herke van Hoof,Frans Oliehoek, and Max Welling. Mdp homomorphicnetworks: Group symmetries in reinforcement learning.In H. Larochelle, M. Ranzato, R. Hadsell, M.F.Balcan, and H. Lin, editors, Advances in NeuralInformation Processing Systems , volume 33, pages4199–4210. Curran Associates, Inc., 2020. URLhttps://proceedings.neurips.cc/paper files/paper/2020/file/2be5f9c2e3620eb73c2972d7552b6cb5-Paper.pdf.[28] Dian Wang, Mingxi Jia, Xupeng Zhu, Robin Walters, andRobert Platt. On-robot learning with equivariant models.In6th Annual Conference on Robot Learning , 2022. URLhttps://openreview.net/forum?id=K8W6ObPZQyh.[29] Dian Wang, Robin Walters, and Robert Platt. SO(2)-equivariant reinforcement learning. In InternationalConference on Learning Representations , 2022. URLhttps://openreview.net/forum?id=7F9cOhdvfk .[30] Dian Wang, Robin Walters, Xupeng Zhu, and RobertPlatt. Equivariant qlearning in spatial action spaces.InConference on Robot Learning , pages 1713–1723.PMLR, 2022.[31] Dian Wang, Jung Yeon Park, Neel Sortur, Lawson L.S.Wong, Robin Walters, and Robert Platt. The sur-prising effectiveness of equivariant models in domainswith latent symmetry. In The Eleventh InternationalConference on Learning Representations , 2023. URLhttps://openreview.net/forum?id=P4MUGRM4Acu.[32] Dilin Wang and Qiang Liu. Learning to draw samples:With application to amortized mle for generative adver-sarial learning. arXiv preprint arXiv:1611.01722 , 2016.[33] Maurice Weiler, Mario Geiger, Max Welling, WouterBoomsma, and Taco S Cohen. 3d steerable cnns:Learning rotationally equivariant features in volumetricdata. In S. Bengio, H. Wallach, H. Larochelle,K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors,Advances in Neural Information Processing Systems ,volume 31. Curran Associates, Inc., 2018. URLhttps://proceedings.neurips.cc/paper files/paper/2018/file/488e4104520c6aab692863cc1dba45af-Paper.pdf.[34] Jianwen Xie, Yang Lu, Ruiqi Gao, Song-Chun Zhu,and Ying Nian Wu. Cooperative training of descriptorand generator networks. IEEE transactions on patternanalysis and machine intelligence , 42(1):27–45, 2018.[35] Jianwen Xie, Zilong Zheng, Xiaolin Fang, Song-ChunZhu, and Ying Nian Wu. Cooperative training of fastthinking initializer and slow thinking solver for condi-tional learning. IEEE Transactions on Pattern Analysisand Machine Intelligence , 44(8):3957–3973, 2021.[36] Jianwen Xie, Zilong Zheng, and Ping Li. 
Learningenergy-based model with variational auto-encoder asamortized sampler. In Proceedings of the AAAI Confer-ence on Artificial Intelligence , volume 35, pages 10441–10451, 2021.[37] Jianwen Xie, Yaxuan Zhu, Jun Li, and Ping Li. A taleof two flows: Cooperative learning of langevin flow andnormalizing flow toward energy-based model. In Inter-national Conference on Learning Representations , 2022.URL https://openreview.net/forum?id=31d5RLCUuXC.[38] Yinshuang Xu, Jiahui Lei, and Kostas Daniilidis. Se (3)-equivariant reconstruction from light field. arXiv preprintarXiv:2212.14871 , 2022.[39] Kevin Zakka, Andy Zeng, Johnny Lee, and ShuranSong. Form2fit: Learning shape priors for generalizableassembly from disassembly, 2020.[40] Andy Zeng, Pete Florence, Jonathan Tompson, StefanWelker, Jonathan Chien, Maria Attarian, Travis Arm-strong, Ivan Krasin, Dan Duong, Vikas Sindhwani, et al.Transporter networks: Rearranging the visual world forrobotic manipulation. In Conference on Robot Learning ,pages 726–747. PMLR, 2021.[41] Xupeng Zhu, Dian Wang, Ondrej Biza, Guanang Su,Robin Walters, and Robert Platt. Sample efficientgrasp learning using equivariant models. Proceedingsof Robotics: Science and Systems (RSS) , 2022.APPENDIXA. Equivariant Descriptor FieldsFor a thorough understanding of the EDFs [20], we re-produce the training, sampling, and implementation details inthis section. We denote the learnable parameters as θ. Furtherdetails and proofs can be found in the original paper [20]. Theoverview of the methodology is illustrated in Fig. 6.a) Training: For the training of the energy-basedmodel Eq.1, the gradient of the log-likelihood at the demon-strated target end-effector pose gtarget is be approximated as∇θlogPθ(gtarget|Oscene, Ograsp)≈− ∇ θEθ(gtarget|Oscene, Ograsp)+1NNXn=1[∇θEθ(gn|Oscene, Ograsp)]where gn∼Pθ(gn|Oscene, Ograsp)is the n-th negativesample, which is sampled from the model.Key Descriptorsφφ(x|OOssssssssss)Input OOssssssssssB)A)Input OOggggggssggQuery EDFρρ(xx|OOggggggssgg)ψψ(xx|OOggggggssgg)Query Points Query DescriptorsType -0DescriptorsType -1DescriptorsC)High EnergyQuery -Key AlignmentLow EnergyQuery -Key MisalignmentFig. 6: A) The query points and the query EDF are generated from the grasp point cloud Ograsp. Each query point is assigned with the corresponding query descriptor, which isthe field values of the query EDF at the query points. The type-0 descriptors are visualized as colors and type-1 descriptors as arrows. The higher descriptors are not visualized.B) Similarly, the key EDFs are generated from Oscene. C) The query descriptors are transformed and matched to the key descriptors to produce the energy value. As shown inthe visualization, the lower energy case has a better alignment of the query and the key descriptors, while the high energy case fails to do so. MCMC methods are used to sampleend-effector configurations according to their energy (lower energy means exponentially higher probability). Reproduced and modified with the authors’ permission [20].b) Sampling: Energy-based models typically do not al-low direct sampling. Therefore, Ryu et al. [20] utilize Monte-Carlo Markov Chain (MCMC) methods to sample end-effectorposes from Eq.1. In particular, two-stage sampling strategy isused. First, the Metropolis-Hastings algorithm(MH) is used torapid explore the workspace. Next, the Langevin dynamics onthe SE(3) manifold is employed. 
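To make the two-stage sampling concrete, the following is a generic sketch of the Metropolis-Hastings stage over SE(3) poses in the quaternion-translation parametrization. It is our own illustration, not the sampler of Ryu et al. [20]: the learned energy E_theta is treated as a black box, and the proposal scales and step count are arbitrary.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def mh_poses(energy, n_steps=1000, sigma_rot=0.1, sigma_trans=0.02, rng=None):
    """Random-walk MH over (quaternion h, translation v) for a pose energy E(h, v)."""
    rng = rng or np.random.default_rng()
    h = np.array([1.0, 0.0, 0.0, 0.0])        # identity rotation
    v = np.zeros(3)                            # zero translation
    e = energy(h, v)
    for _ in range(n_steps):
        dq = np.concatenate([[1.0], sigma_rot * rng.normal(size=3)])
        h_new = quat_mul(h, dq / np.linalg.norm(dq))   # small random rotation
        h_new /= np.linalg.norm(h_new)
        v_new = v + sigma_trans * rng.normal(size=3)
        e_new = energy(h_new, v_new)
        if rng.random() < np.exp(min(0.0, e - e_new)):  # accept prob min(1, e^{E-E'})
            h, v, e = h_new, v_new, e_new
    return h, v
```

The accepted poses from such a chain would then serve as the initial seeds for the Langevin refinement, described next.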
The samples gained from MH are used as the initial seeds for the Langevin dynamics. In the quaternion-translation parametrization, the differential equation for the Langevin dynamics on the SE(3) manifold is

$$dz = \begin{pmatrix} d\hat h \\ dv \end{pmatrix} = -LL^{T}\,\nabla_z E(z)\,dt + \sqrt{2}\,L\,dw, \qquad L = \begin{pmatrix} L_{SO(3)} & 0_{4\times3} \\ 0_{3\times3} & I_{3\times3} \end{pmatrix}, \qquad L_{SO(3)} = \frac{1}{2}\begin{pmatrix} -h_2 & -h_3 & -h_4 \\ h_1 & -h_4 & h_3 \\ h_4 & h_1 & -h_2 \\ -h_3 & h_2 & h_1 \end{pmatrix}$$

where z = (ĥ, v) ≅ S³ × R³ ⊂ R⁷ is the quaternion-translation parameterization of SE(3), with ĥ = h₁ + h₂î + h₃ĵ + h₄k̂.

c) Implementation: As mentioned in Section III-D, Ryu et al. [20] employed SE(3)-Transformers [10] and Tensor Field Networks (TFNs) [25] for the implementation of the two EDFs φ(x|O_scene) and ψ(x|O_grasp) in Eq. 3. For the tractability of the integral in Eq. 3, Ryu et al. [20] modeled the equivariant query density field ρ(x|O_grasp) as a weighted sum of query points Q ∈ R^{N_Q×3} such that

$$\rho_\theta(x\,|\,O_{\mathrm{grasp}}) = \sum_{i=1}^{N_Q} \Big[\, w_\theta(x\,|\,O_{\mathrm{grasp}})\, \delta^{(3)}\big(x - Q_{i;\theta}(O_{\mathrm{grasp}})\big) \Big] \qquad (4)$$

where Q_{i;θ}(O_grasp) is the query point model that infers the position of the i-th query point from O_grasp, and w_θ(x|O_grasp) is an equivariant scalar field that assigns a weight to each query point. Here, δ^{(3)} = ∏_{i=1}^{3} δ(x_i) denotes the Dirac delta function on R³. Instead of using a separate model for Q_{i;θ}(O_grasp), Ryu et al. [20] used the Stein Variational Gradient Descent (SVGD) method to equivariantly draw query points from w_θ(·|O). Note that w_θ(·|O) can be considered a special case of an EDF with only a single type-0 descriptor. Therefore, SE(3)-Transformers and TFNs can be utilized for the implementation.

With Eq. 4, the integral in the energy function Eq. 3 can be written in a tractable summation form as follows:

$$E_\theta(g\,|\,O_{\mathrm{scene}}, O_{\mathrm{grasp}}) = \sum_{i=1}^{N_Q} w_{i;\theta}\, \big\| \phi_\theta(g\,Q_{i;\theta}\,|\,O_{\mathrm{scene}}) - D(R)\, \psi_\theta(Q_{i;\theta}\,|\,O_{\mathrm{grasp}}) \big\|^2$$

where Q_{i;θ} = Q_{i;θ}(O_grasp) and w_{i;θ} = w_{i;θ}(Q_{i;θ}|O_grasp).

B. Experimental Results

This section reproduces the details of the experiments from Ryu et al. [20] that were conducted to compare EDFs with prior methods. The mug-hanging and bowl/bottle pick-and-place tasks were employed for comparison. The models were trained with ten demonstrations for each task, where the cup, bowl, and bottle were positioned upright as shown in Fig. 7. For evaluation, the models were given various scenes with an unseen instance, in a random posture, with various distracting objects nearby, as shown in Fig. 8.

First, Table II compares EDFs with the state-of-the-art end-to-end visual manipulation method, Transporter Networks [40]. Specifically, the SE(3)-extended version of the original Transporter Networks (SE(3)-TNs) proposed in [40] is used. SE(3)-TNs directly regress the additional three degrees of freedom (height, roll, pitch) of the planar Transporter Networks. Therefore, despite its name, SE(3)-TNs are SE(2)-equivariant methods. For each of the three tasks, four different scenarios were tested: 1) the target object is an unseen target instance, 2) the target instance is positioned in a random

Fig. 7: The scenes that are used to train the methods. For each demonstration, there is either a cup, bowl, or bottle posed only upright in random locations. Reproduced with the authors' permission [20].

Fig. 8: The scenarios that are given to evaluate the models. New instances are given that were not seen during training, and they are positioned in random postures. In addition, there are several distracting objects around the target instance.
Reproduced with theauthors’ permission [20].orientation, 3) the target instance is surrounded by variousunseen distracting objects, and finally 4) all of the three unseenconditions are combined.As can be seen in Table II, EDFs significantly outperformTransporter Networks in all of the four unseen scenarios.Especially, Transporter Networks completely fail when thetarget object is provided in previously unseen poses (Scenario1), due to the lack of the spatial SE(3)-equivariance. Forexample, as shown in Fig. 9-A, Transporter Networks fail topick the target instance when positioned in an unseen poseand anticipate to grab the instance as if it were positionedupright as it was during training. On the other hand, EDFssuccessfully infer appropriate end-effector poses in all of thecases, evidencing the importance of the SE(3)bi-equivariantmodeling.Next, Ryu et al. [20] conducted another experiment tovalidate the importance of steerable representations. For thecomparison, an ablated model without steerable representa-tions that is analogous to NDFs variants [23, 24, 3] was used.Notably, unlike these previous works [23, 24, 3], this ex-periment did not use category-level pre-training, necessitatinggreater generalization capabilities for the model to successfullypick-and-place unseen object instances. The results are sum-marized in Table III. The ablated model utilizes only the type-0descriptors, which are invariant to rotations. Therefore, asillustrated in Fig.9-B, the ablated method struggles to correctlyinfer the orientations of the target poses for previously unseeninstances. In contrast, EDFs utilize higher descriptors, henceare capable of accurately inferring the target poses. The exper-imental results show that steerable representations are crucialfor improving the orientational accuracy and generalizabilityof inferred pick-and-place poses.Lastly, Ryu et al. [20] conducted an experiment to assessthe robustness of EDFs under significant multimodality inthe demonstrations. In this experiment, EDFs were trainedwith three different demonstration sets for mug-hanging task:A)B)Fig. 9: A) Transformer Networks exhibit the inability to pick the target instance thatis posed in an unseen posture due to their lack of SE(3)-equivarince. B) NDF-likemodels, which only use the type-0 descriptors, fail to place the cup on the hanger due tothe lack of orientational sensitivity of the target instance. Reproduced with the authors’permission [20].1) unimodal, low-variance demonstrations (only picking aspecific point on the mug), 2) diverse but consistent demon-strations (multimodal, but always picks by the rim of the mug),and 3) diverse and inconsistent demonstrations (multimodal,picking the mug by either the rim or the handle). The re-sults are summarized in Table IV. Comparing the results oftraining demonstration set 1 (unimodal) and 2 (multimodaland consistent), we observe that EDFs are robust to themultimodality in the demonstrations. Furthermore, the exper-imental results suggest that EDFs actually benefit from thediversity of multimodal demonstrations. This can be attributedto the nature of generative models, that are flexible enoughto leverage diverse pick-and-place strategies. Moreover, thisgenerative nature of EDFs allows them to be tolerable to highlyinconsistent demonstrations. 
As can be seen in the results fordemonstration set 3 (multimodal and inconsistent), EDFs areshown to be robust to inconsistency in the demonstrations.These comprehensive experiments reveal the importance ofthe four criteria in designing equivariant methods for end-to-end visual robotic manipulation. Further experimental resultsand explanations can be found in the original paper [20].TABLE II: Pick-and-place success rates in various out-of-distribution settings. Reproduced with authors’ permission [20].Mug Bowl BottlePick Place Total Pick Place Total Pick Place TotalUnseen InstancesSE(3)-TNs [40] 1.00 0.36 0.36 0.76 1.00 0.76 0.20 1.00 0.20EDFs (Ours) 1.00 0.97 0.97 0.98 1.00 0.98 1.00 1.00 1.00Unseen PosesSE(3)-TNs [40] 0.00 N/A 0.00 0.00 N/A 0.00 0.00 N/A 0.00EDFs (Ours) 1.00 1.00 1.00 1.00 1.00 1.00 0.95 1.00 0.95Unseen Distracting ObjectsSE(3)-TNs [40] 1.00 0.63 0.63 1.00 1.00 1.00 0.96 0.92 0.88EDFs (Ours) 1.00 0.98 0.98 1.00 1.00 1.00 0.99 1.00 0.99Unseen Instances, ArbitraryPoses & Distracting ObjectsSE(3)-TNs [40] 0.25 0.04 0.01 0.09 1.00 0.09 0.26 0.88 0.23EDFs (Ours) 1.00 0.95 0.95 0.95 1.00 0.95 0.95 1.00 0.95TABLE III: Success rate and inference time of the ablated model and EDFs. All the evaluations are done in the unseen instances, poses & distracting objects setting. Reproducedwith authors’ permission [20].Mug Bowl BottleDescriptor Type Pick Place Total Pick Place Total Pick Place TotalNDF-like (Type- 0Only)Inference Time 5.7s 8.6s 14.3s 6.1s 9.9s 16.0s 5.8s 17.3s 23.0sSuccess Rate 0.84 0.77 0.65 0.60 0.95 0.57 0.66 0.95 0.63EDFs (Type- 0∼3)Inference Time 5.1s 8.3s 13.4s 5.2s 10.4s 15.6s 5.2s 11.5s 16.7sSuccess Rate 1.00 0.95 0.95 0.95 1.00 0.95 0.95 1.00 0.95TABLE IV: Success rate of EDFs for mug-hanging task with different demonstrations. Reproduced with authors’ permission [20].Low Var. & Unimodal Grasps Diverse and Consistent Grasps Diverse and Inconsistent Grasps(Rim Only) (Handle & Rim)Setup Pick Place Total Pick Place Total Pick Place TotalUnseen Poses (P) 1.00 0.96 0.96 1.00 1.00 1.00 1.00 0.99 0.99Unseen Instances (I) 0.99 0.90 0.89 1.00 0.97 0.97 1.00 0.92 0.92Unseen Distractors (D) 1.00 1.00 1.00 1.00 0.98 0.98 0.96 0.99 0.95Unseen P+I+D 0.99 0.83 0.82 1.00 0.95 0.95 0.90 0.89 0.80 |
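For readers re-implementing the pipeline, note that once the descriptor fields and query points are available, the energy of Appendix A reduces to a short loop. The sketch below uses our own notation, not the authors' code: `phi` and `psi` stand for the trained key and query EDFs, `D` for the rotation representation acting on the stacked descriptor features, and `Q`, `w` for the query points and weights.

```python
import numpy as np

def edf_energy(R, t, phi, psi, D, Q, w):
    """E(g) = sum_i w_i * || phi(R @ Q_i + t) - D(R) @ psi(Q_i) ||^2 for pose g = (R, t)."""
    energy = 0.0
    for Qi, wi in zip(Q, w):
        key = phi(R @ Qi + t)        # key descriptor at the transformed query point
        query = D(R) @ psi(Qi)       # query descriptor rotated into the scene frame
        energy += wi * np.sum((key - query) ** 2)
    return energy
```

Evaluating this energy for candidate end-effector poses is what the MCMC sampler described in Appendix A does.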
YeOtYX-WB1 | Geometric Regularity with Robot IntrinsicSymmetry in Reinforcement LearningShengchao Yan, Yuan Zhang, Baohe Zhang, Joschka Boedecker, Wolfram Burgard†Department of Computer Science, University of Freiburg, Germany†Department of Engineering, University of Technology Nuremberg, GermanyAbstract —Geometric regularity, which leverages data sym-metry, has been successfully incorporated into deep learningarchitectures such as CNNs, RNNs, GNNs, and Transformers.While this concept has been widely applied in robotics to addressthe curse of dimensionality when learning from high-dimensionaldata, the inherent reflectional and rotational symmetry of robotstructures has not been adequately explored. Drawing inspira-tion from cooperative multi-agent reinforcement learning, weintroduce novel network structures for deep learning algorithmsthat explicitly capture this geometric regularity. Moreover, weinvestigate the relationship between the geometric prior andthe concept of Parameter Sharing in multi-agent reinforcementlearning. Through experiments conducted on various challengingcontinuous control tasks, we demonstrate the significant potentialof the proposed geometric regularity in enhancing robot learningcapabilities.I. I NTRODUCTIONRobots have the ability to undertake tasks that are dangerousor difficult for humans. With more degrees of freedom, theycan perform increasingly complex tasks. For example, hu-manoid robots and quadrupedal robots can walk over challeng-ing terrain, while robot arms and hands can achieve dexterousmanipulation. However, controlling robots with a large numberof degrees of freedom becomes increasingly difficult as theobservation and action space grows exponentially. Althoughdeep reinforcement learning has been employed to solvevarious robot control problems [8, 11, 20, 3], learning effectivecontrol strategies for these robots remains a challenging task.Training neural networks on high-dimensional data isknown to be challenging due to the curse of dimensionality [4].To overcome this challenge, researchers have developed net-work architectures and incorporated various inductive biasesthat respect the structure and symmetries of the correspondingdomains. For example, convolutional neural networks (CNNs)leverage the strong geometric prior of images by incorporatingtranslation equivariance into the design of convolutional layers.This ensures that the extracted features move along with theoriginal image, regardless of the direction it is shifted in.Similarly, graph neural networks (GNNs) take advantage of thegeometric prior of permutation invariance in other domains tocapture the relationships among objects. Overall, incorporatingdomain-specific inductive biases and symmetries can greatlyimprove the ability of neural networks to learn from high-dimensional data.However, in the realm of deep reinforcement learning re-search, the potential benefits of utilizing symmetry structuresFig. 1: We design tasks (except TriFinger [3]) challenging forcurrent deep reinforcement learning baseline algorithms.present in environments, such as reflectional and rotationalsymmetry, have not attracted much attention and thus, howto combine these prior knowledge to effectively improvethe existing approaches still is worth to be investigated.To bridge the research gap, we propose to reformulate thecontrol problems under Multi-Agent Reinforcement Learning(MARL) framework to better leverage the symmetry struc-tures. 
We demonstrate the surprising effectiveness of ourapproach by combining the new architectures with model-free deep reinforcement learning methods. Additionally, weestablish a connection between our proposed geometric priorand the important concept of ”Parameter Sharing” in multi-agent reinforcement learning, which excessively reduces theoptimization space and speeds up the learning process. Wealso design a set of challenging robot control tasks (see Fig. 1)and evaluate our method on them. Our experimental resultsshow that our proposed method significantly improves theperformance of robot control learning tasks.II. B ACKGROUND AND RELATED WORKA. Multi-Agent Reinforcement Learning (MARL)MARL is an extended reinforcement learning method fordecision-making problems, where multiple agents can interactand learn in one environment. The most popular mathematicalframework for MARL problems is Markov games. A Markovgame is a tuple ⟨N,S,O,A, P, R i, γ⟩.Nis the set of allagents and Sis the set of states. OiandAiare observationspace and action space for agent i, while O=×i∈NOiandA=×i∈NAirepresent joint observation space andjoint action space. Define ∆|S|and∆|A|be the probabilitymeasure on SandArespectively. Then Pis the transitionprobability P(s′|s, a) :S × A → ∆S. Each agent imaintainsa specific reward function Ri(s, a) :S × A → R, and thefuture rewards are discounted by the discount factor γ∈[0,1].LetΠi={πi(ai|oi) :Oi→∆Ai}be the policy space foragent i, then the objective for agent iis represented asmax πiEπ,PhP+∞t=0γtRi(st, at)i. In practice, the state spaceand the observation space can be identical if the observationhas already fully described the system. Our paper also followsthis assumption and hence uses observation alone.Multi-Agent Mujoco [13] is a popular benchmark forMARL algorithms which divides a single robot into severaldistinct parts with separate action space. However, the state-of-the-art MARL algorithms still couldn’t match the performanceof the single-agent algorithms on this benchmark. Differentfrom their work, in which they arbitrarily divide robots intoparts and ignore the geometric structures of the robots, weleverage ideas from geometric regularity during the MARLtraining and our results show that MARL can outperformsingle-agent algorithms by a substantial margin.B. Symmetry in Robot LearningIn robot learning domain, two groups of symmetric struc-tures have been used to improve performance and learningefficiency. 1) Extrinsic Symmetry : By extrinsic symmetry werefer to the symmetries existing in the Exteroceptive sensors ofthe robot such as camera input. Some work [18, 24, 17, 19]have been proposed to integrate these symmetries into sys-tem identification via the neural network, especially CNN-structured network. These methods can largely improve theperformance for manipulation tasks, but they are mostlyaround manipulation tasks with image input and gripper with-out roll-pitch movement. Van der Pol et al. [16] introduceMDP homomorphic networks to numerically construct equiv-ariant network layers.However, the proposed network onlyconsiders a pole balancing task with discrete action. Moreover,additional calculation is required to design the network evenif the domain specific transformation is given. Mondal et al.[12] propose to learn symmetry directly from data in thelatent space but is still limited to representation learningfrom images. 
2) Intrinsic Symmetry : Different from extrinsicsymmetries, intrinsic symmetries mostly naturally come fromthe physical constraints in the control system. For example, ahumanoid robot control task exhibits reflectional symmetry. Asymmetric control policy on such robot is usually more naturaland effective. Mavalankar [10] proposes a data-augmentationmethod to improve reinforcement learning method for rotationinvariant locomotion. Abdolhosseini et al. [2] investigate fourdifferent methods to encourage symmetric motion of bipedalsimulated robots. They are implemented via specific policynetwork, data augmentation or auxiliary loss function. Eventhough the robots’ motions become more natural-looking, theydo not show a major improvement on different tasks. Thepolicy network method in [2] is similar to ours in this work.But instead of a specific network merely for locomotion taskswith reflectional symmetry, we propose a generic equivariantpolicy network for both reflectional and rotational symmetries,(a) Reflectional symmetry (b) Rotational symmetryFig. 2: Agent partitioning considering symmetry struc-tures : Humanoid and Cheetah robots split into left and rightparts by reflectional symmetry; TriFinger and Ant robots splitinto 3 and 4 parts by rotational symmetry, where each part iscontrolled individually by a dedicated agent. The central part(grey) is controlled by all agents.which are predominant symmetry features in robotic systemsand animal biology. Moreover, we approach the control taskin the field of multi-agent systems. Finally, we get substantialperformance improvement in experiments by reducing thepolicy search space.III. S INGLE ROBOT CONTROL AS MARLInstead of learning a single-agent policy to control thewhole robot, which will lead to a large observation-actionspace that is difficult to optimize, we introduce multiple agentsthat are responsible for each individual component of therobot inspired by MARL. We further propose a frameworkdriven by the presence of symmetry structures in many robotsand exploit such inductive biases to facilitate the training byapplying parameter sharing techniques.The overview structure of our method is to (1) identifythe geometric structures of different robots and divide singlerobots into multiple parts accordingly; (2) reformulate thecontrol problem as a MARL framework; (3) optimize policieswith parameter sharing technique.A. Dividing Single Robots into Multiple PartsPrevious research [13] also divides a single robot intomultiple parts to evaluate the performance of MARL meth-ods. However, its irregular partitioning makes the multi-agentmethods hard to compete with the single-agent methods. Inthis paper, we reconsider partitioning in a more reasonableway, which is achieved by taking into account the symmetrystructures of robots when dividing them into multiple agents.As shown in Fig. 2a, robots with reflectional symmetrycan be partitioned into left (blue), right (green) and central(grey) parts. The robots with rotational symmetry in Fig. 2bare partitioned into parts with the same number of symmetriclimbs (colour) and a central part (grey). For a robot with any ofthese symmetric structures, we split the whole robot’s originalobservation-action space O × A byO=Oc×Qi∈NOs,iandA=Ac×Qi∈NAs,i.Oc× A crepresents the centralobservation-action pair, which consists of measurements andactuators that do not have symmetric counterparts, such asthe position, orientation, velocity and joints of the torso,target direction, or states of the manipulated objects. 
Raw(a) Symmetric states of TriFinger. (b) Policy network (c) Value networkFig. 3: a) TriFinger robot moves a sphere towards a target position. From left to right are the original state, rotated by 120◦,and rotated by 240◦. Note that the actions of different body parts should be equivariant with regard to the transformation. Thered arrow represents the desired moving direction of the manipulated object. b) Equivariant policy network with parameter Φ.candsstand for central and symmetric actions. c) Invariant value network with parameter Ψ,Θ.sensor data such as images and point clouds also belongsto central observation. Os,i× A s,icorresponds to symmetricobservation-action spaces, whose measurements may includejoint positions and velocities from the limbs, contact sensormeasurements of the feet or fingers, and so on. The symmetricobservation-action spaces are exactly the same for any i∈ Ndue to the robots’ symmetric property.B. Multi-agent Reinforcement Learning FormulationAssume the original observation and action of the wholerobot be o∈ O anda∈ A respectively and the numberof agents |N|, equal to the number of symmetry parts of therobots. For each agent i∈ N , there is a unique transformationfunction Tito obtain its own observation oi=Ti(o). Detailedexplanation of Tican be found in Appendix A1. Each agentgenerates the local action ai, consisting of ac,i∈ A candas,i∈ As,ifor central and symmetric actions, by its own policynetwork. Finally, the whole robot’s action ais recovered bygathering all symmetric actions as,iand merging all centralactions ac,iintoac.Regarding the reward function, our formulation follows thecooperative MARL setup, where Rifor all i∈ N are identicalat every time step. This shared reward is calculated by atask-related reward function R(o, a)which depends on thewhole robot’s observation and action. To optimize the policiesπi, we adopt the multi-agent version of Proximal PolicyOptimization (PPO) [14] methods. PPO is a popular model-free actor-critic reinforcement learning algorithm in differentdomains [22, 3, 11] for its stability, good performance and easeof implementation. Its multi-agent version also achieves com-petitive performance on different MARL benchmarks [23, 6].C. Geometric RegularizationParameter Sharing has been recognized as a crucial elementin MARL for efficient training [7]. By enabling agents toshare parameters in their policy networks, parameter sharingnot only facilitates scalability to a large number of agentsbut also enables agents to leverage shared learned repre-sentations, leading to reduced training time and improvedoverall performance. However, it is shown by Christianos et al.[5] that indiscriminately applying parameter sharing couldhurt the learning process. Successful utilization of parametersharing relies on the presence of homogeneous agents as avital requirement. In other words, agents should execute thesame action once they are given the same observation. Thisassumption ensures the transformation equivariance of theoverall policy regarding the symmetry structures.Take the simplified TriFinger Move task as an example,where the TriFinger robot has to move the sphere towards atarget position. As shown in Fig. 3a, if the whole system isrotated by 120◦or240◦around the zaxis of the robot base,the actions should also shift circularly among the three fingersfor the optimal policy. 
Given the whole robot’s observation o,this relationship can be denoted by:As,j(Ti(o)) =As,i(Tj(o)), A c(Ti(o)) =Ti(Ac(o)) (1)where As,jis the symmetric action of the jth agent, Acisthe central action, Tiis the symmetry transformation betweenagents iand0(see definition in Appendix A1). The transfor-mation for observation and action are so similar that we won’tdistinguish between them in this work for simplicity. Note thatthe the corresponding robot parts of agents can be definedarbitrarily. It does not influence the equivariance/invariance.Based on the equivariance represented by Eq. 1, we designthe multi-agent actor-critic network structure in Fig. 3b, 3c.Agent igets a transformed observation Ti(o)as the input ofthe policy network, the output action value consists of ac,iandas,i. The central joints are controlled by the mean action overall agents’ output ac,i, while as,iwill be used as the action totake for the robot part i. The policy network parameters areshared among agents. The value network gets the observationsfrom all agents as input. The observations first go throughthe shared feature learning layers in the value network. Thenthe latent features are merged by a set operator ( mean in thiswork). The value is finally calculated with the merged feature.The proposed policy network is equivariant with respect tosymmetric transformations we consider in this work, whilethe value network is an invariant function (see proof inAppendix A2). By sharing the same policy network among all0.0 0.5 1.0Timesteps 1e9050010001500 Episode ReturnsSASASAMAMASA(a) Humanoid Dribbling0.0 0.5 1.0Timesteps 1e9020004000 (b) Humanoid Tightrope0.0 2.5 5.0 7.5Timesteps 1e8010002000 (c) A1 Beam0 1 2Timesteps 1e9200040006000 (d) Trifinger Move0 1 2 3Timesteps 1e80200400600 (e) Ant AcrobaticFig. 4: Learning curves on robot control tasks. The x-axis is environment time steps and the y-axis is episodic returns duringtraining. All graphs are plotted with median and 25%-75% percentile shading across 5 random seeds.agents, we are able to incorporate the geometric regularizationand reduce the dimension of the observation-action space.IV. E XPERIMENTS AND DISCUSSIONA. Experimental Setup1) Challenging Tasks: Previous robotic control bench-marks [15] evaluate algorithms on fundamental tasks, e.g.controlling agents to walk. The movements in these tasks arelimited and it’s relatively easy to learn an optimal policy. Inthis work, we design several more challenging robotic controltasks, where current state-of-the-art methods fail to achievegood performance. The tasks are shown in Fig. 1: HumanoidTightrope, Humanoid Dribbling, A1 Beam, Trifinger Moveand Ant Acrobatic. The detailed introduction of the tasks canbe found in Appendix B2. All experiments are carried outbased on the NVIDIA Isaac Gym [9] robotics simulator.2) Baselines: For each task, we compare our method,named as Multi-agent with Symmetry Augmentation ( MASA ),with a set of baselines including:•Single-agent ( SA): We first compare the single-agent re-inforcement learning algorithm, which optimize all of therobot parts jointly. This baseline can provide an intuitivecomparison of our proposed framework to previous classicreinforcement learning works. The state space is kept thesame as the multi-agent one for a fair comparison.•Single-agent with Symmetry Augmentation ( SASA ): Thisbaseline follows the SA’s setup and is augmented with asymmetry loss [2]. 
Specifically, for any received obser-vation o, we calculate its symmetric representation Ti(o).We regulate the policy function πand the value functionVin PPO with extra symmetry losses by minimizing∥Ti(A(o))−A(Ti(o))∥2and|V(o)−V(Ti(o))|, where AandVare the gathered action and critic value of the robot.•Multi-agent without Symmetry Augmentation ( MA): Thisbaseline uses the same architecture as MASA . However, itdoes not involve the transformations in Fig. 3b 3c. Thus thegeometric regularity of symmetry is ignored, which followsthe previous research [13]. We concatenate a one-hot idencoding to each agent’s observation as a common operationfor non-homogeneous agents.We conclude the hyperparameters in Appendix B1.B. Main ResultsFigure 4 presents the average return of all methods ondifferent tasks during training. The proposed method MASAsignificantly outperforms other baselines across all 5 tasks.Further, the advantages over other baselines rise with theincreasing difficulties of the task, which can be indicated bythe increased number of joints, the extended state dimensionand the enlarged state space in the task. Humanoid Tightropeand Humanoid Football control the same robot. However, inthe tightrope task, the robot only needs to walk forward,while the football task involves random turns and manipulatingan external object, so that other baselines can hardly learnmeaningful behaviours in this task.By comparing the results of MASA ,MA and SASA , wecould observe that both of the two factors in MASA , multi-agent framework and symmetry structure, play an importantrole. Utilizing symmetry data structure alone ( SASA ) cangradually learn to solve a few tasks but with aparently lowerdata efficiency. Because the optimization space is not reducedand thus larger than that of MASA method. The multi-agentstructure itself ( MA) cannot guarantee meaningful results atall, which follows the criticism of naively sharing parametersamong non-homogeneous agents [5].In the Humanoid Dribbling task, MASA initially under-performs compared to other baselines. This is because thebaseline methods prioritize self-preservation and struggle tofind a policy that balances dribbling and staying alive. Byfocusing on avoiding falling down and kicking the ball too faraway, they learn to stand still near the ball while disregardingthe rewards associated with ball movement. Consequently, thebaseline agents are able to survive longer at the beginning,resulting in higher returns compared to MASA .C. DiscussionOur proposed multi-agent method exhibits impressive per-formance in challenging control tasks. The network structureswe introduce are not limited to on-policy reinforcement learn-ing algorithms and can be adapted for off-policy learning,imitation learning, and model-based learning methods. Whileour approach is straightforward to implement with observationtransformations, it still requires domain knowledge. We believeour method can enhance robot learning in more demandingtasks, serving as a guide for designing robots with increaseddegrees of freedom while managing the observation-actionspace growth linearly. Future research directions include ex-ploring additional symmetric structures and automating theprocess of identifying robots’ intrinsic symmetries.REFERENCES[1] Unitree A1. Unitree. a1: More dexterity, more posibility,2018. https://www.unitree.com/a1/, January 2018.[2] Farzad Abdolhosseini, Hung Yu Ling, Zhaoming Xie,Xue Bin Peng, and Michiel Van De Panne. On learn-ing symmetric locomotion. 
In Motion, Interaction andGames , pages 1–10, Newcastle upon Tyne United King-dom, October 2019. ACM.[3] Arthur Allshire, Mayank MittaI, Varun Lodaya, Vik-tor Makoviychuk, Denys Makoviichuk, Felix Widmaier,Manuel W ̈uthrich, Stefan Bauer, Ankur Handa, and Ani-mesh Garg. Transferring dexterous manipulation fromgpu simulation to a remote real-world trifinger. In 2022IEEE/RSJ International Conference on Intelligent Robotsand Systems (IROS) , pages 11802–11809. IEEE, 2022.[4] Michael M Bronstein, Joan Bruna, Taco Cohen, andPetar Veli ˇckovi ́c. Geometric deep learning: Grids,groups, graphs, geodesics, and gauges. arXiv preprintarXiv:2104.13478 , 2021.[5] Filippos Christianos, Georgios Papoudakis, Muham-mad A Rahman, and Stefano V Albrecht. Scaling multi-agent reinforcement learning with selective parametersharing. In International Conference on Machine Learn-ing, pages 1989–1998. PMLR, 2021.[6] Christian Schroeder de Witt, Tarun Gupta, DenysMakoviichuk, Viktor Makoviychuk, Philip HS Torr,Mingfei Sun, and Shimon Whiteson. Is independentlearning all you need in the starcraft multi-agent chal-lenge? arXiv preprint arXiv:2011.09533 , 2020.[7] Jayesh K Gupta, Maxim Egorov, and Mykel Kochender-fer. Cooperative multi-agent control using deep reinforce-ment learning. In Autonomous Agents and MultiagentSystems: AAMAS 2017 Workshops, Best Papers, S ̃aoPaulo, Brazil, May 8-12, 2017, Revised Selected Papers16, pages 66–83. Springer, 2017.[8] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel,Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, andDaan Wierstra. Continuous control with deep reinforce-ment learning. arXiv preprint arXiv:1509.02971 , 2015.[9] Viktor Makoviychuk, Lukasz Wawrzyniak, YunrongGuo, Michelle Lu, Kier Storey, Miles Macklin, DavidHoeller, Nikita Rudin, Arthur Allshire, Ankur Handa, andGavriel State. Isaac gym: High performance GPU basedphysics simulation for robot learning. In Proceedingsof the Neural Information Processing Systems Trackon Datasets and Benchmarks 1, NeurIPS Datasets andBenchmarks , 2021.[10] Aditi Mavalankar. Goal-conditioned batch reinforcementlearning for rotation invariant locomotion. arXiv preprintarXiv:2004.08356 , 2020.[11] Takahiro Miki, Joonho Lee, Jemin Hwangbo, LorenzWellhausen, Vladlen Koltun, and Marco Hutter. Learningrobust perceptive locomotion for quadrupedal robots inthe wild. Science Robotics , 7(62):eabk2822, 2022.[12] Arnab Kumar Mondal, Vineet Jain, Kaleem Siddiqi, andSiamak Ravanbakhsh. Eqr: Equivariant representationsfor data-efficient reinforcement learning. In InternationalConference on Machine Learning , pages 15908–15926.PMLR, 2022.[13] Bei Peng, Tabish Rashid, Christian Schroeder deWitt, Pierre-Alexandre Kamienny, Philip Torr, WendelinB ̈ohmer, and Shimon Whiteson. Facmac: Factored multi-agent centralised policy gradients. Advances in NeuralInformation Processing Systems , 34:12208–12221, 2021.[14] John Schulman, Filip Wolski, Prafulla Dhariwal, AlecRadford, and Oleg Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.[15] Saran Tunyasuvunakool, Alistair Muldal, Yotam Doron,Siqi Liu, Steven Bohez, Josh Merel, Tom Erez, TimothyLillicrap, Nicolas Heess, and Yuval Tassa. dm control:Software and tasks for continuous control. SoftwareImpacts , 6:100022, 2020.[16] Elise Van der Pol, Daniel Worrall, Herke van Hoof,Frans Oliehoek, and Max Welling. 
Mdp homomorphicnetworks: Group symmetries in reinforcement learning.Advances in Neural Information Processing Systems , 33:4199–4210, 2020.[17] Dian Wang and Robin Walters. So (2) equivariantreinforcement learning. In International Conference onLearning Representations , 2022.[18] Dian Wang, Mingxi Jia, Xupeng Zhu, Robin Walters, andRobert Platt. On-robot learning with equivariant models.InConference on robot learning , 2022.[19] Dian Wang, Robin Walters, Xupeng Zhu, and RobertPlatt. Equivariant qlearning in spatial action spaces.InConference on Robot Learning , pages 1713–1723.PMLR, 2022.[20] Philipp Wu, Alejandro Escontrela, Danijar Hafner, PieterAbbeel, and Ken Goldberg. Daydreamer: World modelsfor physical robot learning. In Conference on RobotLearning , pages 2226–2240. PMLR, 2023.[21] Manuel W ̈uthrich, Felix Widmaier, Felix Grimminger,Joel Akpo, Shruti Joshi, Vaibhav Agrawal, Bilal Ham-moud, Majid Khadiv, Miroslav Bogdanovic, VincentBerenz, Julian Viereck, Maximilien Naveau, LudovicRighetti, Bernhard Sch ̈olkopf, and Stefan Bauer. Trifin-ger: An open-source robot for learning dexterity, January2021.[22] Shengchao Yan, Tim Welschehold, Daniel B ̈uscher, andWolfram Burgard. Courteous behavior of automatedvehicles at unsignalized intersections via reinforcementlearning. IEEE Robotics and Automation Letters , 7(1):191–198, 2021.[23] Chao Yu, Akash Velu, Eugene Vinitsky, Jiaxuan Gao,Yu Wang, Alexandre Bayen, and Yi Wu. The surprisingeffectiveness of ppo in cooperative multi-agent games.Advances in Neural Information Processing Systems , 35:24611–24624, 2022.[24] Xupeng Zhu, Dian Wang, Ondrej Biza, Guanang Su,Robin Walters, and Robert Platt. Sample efficientgrasp learning using equivariant models. Proceedingsof Robotics: Science and Systems (RSS) , 2022.APPENDIXA. Extra Method Details1) Transformation Functions: As mentioned in Sec. III-C,T0of the base agent is identity transformation. In this sectionwe describe in detail the transformation function of otheragents. For convenience, we only explain observation transfor-mation in detail, the transformations are actually the same foractions, for which we only need to change the observation tothe corresponding action components. By default, we assumethe original observation o= [oc, os,0, os,1, . . . , o s,|N|− 1]is inthe local coordinate system of the robot base for convenience.a) Reflectional Symmetry: For robots with reflectionalsymmetry, two robot parts in Fig. 2a are controlled by agents{0,1}. We define T1(o) = [Tc,1(oc), os,1, os,0], where Tc,1(oc)is a reflectional function, which reflects the central observationthrough the plane of symmetry. As a result of T1(o)differentobservation components are transformed as follows:•symmetric observations directly switch their values;•some of the central observation values are negated;–humanoid robot: ytorso,vtorso,y,ωtorso,x,ωtorso,z,αtorso,γtorso,θlower waist ,x,θpelvis,x,ωlower waist ,x,ωpelvis,x,alower waist ,x,apelvis,x–A1 robot: ytorso,vtorso,y,ωtorso,x,ωtorso,z,αtorso,γtorso–external objects: yball,vball,y•other central observation values stay the same.b) Rotational Symmetry: For robots with rotationalsymmetry, the robot parts in Fig. 2b are controlledby agents {0,1, . . . ,|N| − 1}. We define Ti(o) =[Tc,i(oc), os,i, os,i+1, . . . , o s,|N|− 1, os,0, os,1, . . . , o s,i−1], whereTc,i(oc)is a rotational transformation for central observationsaround the axis of symmetry. The degree of rotation is theangular distance from the robot part of agent ito that of agent0along the axis of symmetry. 
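A minimal sketch of this rotational transformation is given below; it is our own illustration, and the grouping of central entries into 3D vectors as well as the sign of the rotation angle are conventions that depend on the concrete robot. The symmetric per-part blocks are shifted circularly, and the 3D vector components of the central observation are rotated about the symmetry axis. The TriFinger example that follows instantiates this with |N| = 3.

```python
import numpy as np

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def transform_obs(o_c, o_s, i, vector_slices):
    """T_i(o): rotate 3D vectors inside the central block by agent i's angular
    offset and circularly shift the per-part symmetric blocks by i."""
    n = len(o_s)                                  # number of symmetric parts |N|
    angle = 2.0 * np.pi * i / n                   # angular distance of part i to part 0
    R = rot_z(angle)
    o_c_new = o_c.copy()
    for sl in vector_slices:                      # e.g. torso velocity, object position
        o_c_new[sl] = R @ o_c[sl]
    o_s_new = o_s[i:] + o_s[:i]                   # circular shift of limb blocks
    return o_c_new, o_s_new
```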
Taking the TriFinger robot inFig. 3 as an example, the rotation angles are 0,120 and240degrees for the three agents. As a result of Ti(o)differentobservation components are transformed as follows,•symmetric observations circularly shift their values;•the central observation components are rotated.2) Proof of Transformation Equivariance/Invariance: Atthe beginning we summarize the properties of the symmetrytransformations in this work. They are:•commutative: Tj(Ti(o)) =Ti+j(o) =Ti(Tj(o))•distributive: Tj(Ti(o) +Tk(o)) =Tj(Ti(o)) +Tj(Tk(o))•cyclic: Ti(o) =Ti+|N|(o)The equivariance of the policy for symmetric actions in Eq. 1is proved as follows:As,j(Ti(o)) =Φ s(Tj(Ti(o))) = Φ s(Tj(Ti(o)))=As,i(Tj(o))The equivariance for the central action is proved as follows:Ac(Ti(o)) =1|N||N|− 1Xj=0T|N|− 1−j(Φc(Tj(Ti(o))))=1|N|2|N|− i−1Xj=|N|− iT|N|− 1−j(Φc(Ti+j(o)))=1|N|2|N|− 1Xk=|N|T|N|+i−1−k(Φc(Tk(o)))=1|N||N|− 1Xk=0Ti(T|N|− 1−k(Φc(Tk(o))))=Ti(1|N||N|− 1Xk=0T|N|− 1−k(Φc(Tk(o))))=Ti(Ac(o))The invariance of the value network is proved as follows:V(Ti(o)) =Θ(1|N||N|− 1Xj=0Ψ(Tj(Ti(o))))=Θ(1|N|2|N|− i−1Xj=|N|− iΨ(Ti+j(o)))=Θ(1|N||N|− 1Xk=0Ψ(Tk(o)))=V(o) =V(Tj(o))B. Extra Experimental Setups1) Hyperparameters: Each baseline is run with 5 randomseeds. All experiments are carried out on GPU card NVIDIAA100 and rtx3080 GPU. The hyperparameters of all baselinesare consistent for a fair comparison. The detailed values canbe accessed in Table I.2) Tasks Details:a) Humanoid Tightrope: In this task, the agent learnsto control a humanoid robot to walk on a tightrope. Thehumanoid robot has 21 controllable motors. The tightropeis extremely narrow with a diameter of only 10 cm , whichchallenges the efficiency of learning algorithms. The agentis rewarded with a forward speed on the tightrope and aproper posture. At each non-terminating step, the rewardr=wv×rv+walive×ralive+wup×rup+wheading×rheading +waction×raction+wenergy×renergy +wlateral×rlateral, where•rvis the robot’s forward velocity, wv= 1.0;•ralive= 1,walive= 2.0;•rup= 1 ifeup,z>0.93, where eupis the basis vector oftorso’s zaxis in the global coordinate system, otherwisethe value is 0,wup= 0.1;•rheading =eforward ,x, where eforward is the basis vector oftorso’s xaxis in global coordinate system, wforward = 0.1;•raction=∥a∥22, where ais joints action, waction=−0.01•renergy is the joints power consumption, wenergy =−0.05TABLE I: Hyperparameters of all experiments.HYPERPARAMETERS HUMANOID TIGHTROPE HUMANOID FOOTBALL TRIFINGER MOVE A1 B EAM ANTACROBATICBATCH SIZE 4096×32 4096 ×32 16384 ×16 4096 ×24 4096 ×16MIXED PRECISION TRUE TRUE FALSE TRUE TRUENORMALIZE INPUT TRUE TRUE TRUE TRUE TRUENORMALIZE VALUE TRUE TRUE TRUE TRUE TRUEVALUE BOOTSTRAP TRUE TRUE TRUE TRUE TRUENUM ACTORS 4096 4096 16384 4096 4096NORMALIZE ADVANTAGE TRUE TRUE TRUE TRUE TRUEGAMMA 0.99 0.99 0.99 0.99 0.99GAMMA 0.95 0.95 0.95 0.95 0.95E-CLIP 0.2 0.2 0.2 0.2 0.2ENTROPY COEFFICIENT 0.0 0.0 0.0 0.0 0.0LEARNING RATE 5.E-4 5. E-4 3. E-4 3. E-4 3. 
E-4KL THRESHOLD 0.0008 0.0008 0.0008 0.0008 0.0008TRUNCATED GRAD NORM 1.0 1.0 1.0 1.0 1.0HORIZON LENGTH 32 32 16 24 16MINIBATCH SIZE 32768 32768 16384 32768 32768MINI EPOCHS 5 5 4 5 4CRITIC COEFFICIENT 4.0 4.0 4.0 2.0 2.0MAX EPOCH 10K 10K 10K 10K 5KPOLICY NETWORK [400,200,100] [400,200,100] [256,256,128,128] [256, 128, 64] [256, 128, 64]CRITIC NETWORK [400,200,100] [400,200,100] [256,256,128,128] [256, 128, 64] [256, 128, 64]ACTIVATION FUNCTION ELU ELU ELU ELU ELU•rlateral =vtorso,yis the penalty for lateral velocity,wlateral=−1.0The reward is −1for termination step. The action is the forceapplied to all joints.b) Humanoid Dribbling: In this task, the robot learnsto dribble along routes with random turns. The observationspace is augmented with features of the ball compared withthe tightrope task. For observation calculation, the globalcoordinate system changes with the new target route at theturning position. At each non-terminating step, the rewardr=wv×rv+walive×ralive+wdist×rdist+wheading×rheading +waction×raction+wenergy×renergy +wlateral×rlateral, where•rvis the ball’s forward velocity, wv= 2.0;•ralive= 1,walive= 0.2;•rdist=e−dwhere dis the 2d distance from torso to theball,wdist= 0.2;•rheading =eforward ,x, where eforward is the basis vector oftorso’s xaxis in the global system, wforward = 1.0;•raction, renergy are the same with Humanoid Tightrope•rlateral=vball,yis the penalty for the ball’s lateral velocity,wlateral=−0.5The reward is −1for termination step. The action is the forceapplied to all joints.c) A1 Beam: In this task, the agent controls thequadruped robot Unitree A1 [1] to walk on a balance beamwith width of 10 cm following a predefined speed. Consideringthe width of A1 and the balance beam, it is much harder thanwalking on the ground. There are overall 12 motors for UnitreeA1, 3 for each leg. At each non-terminating step, the rewardr=wv×rv+walive×ralive+wheading×rheading +waction×raction+wlateral×rlateral, where•rv=e−|vtorso,x−vtarget|is speed tracking reward, wv= 1.0;•ralive= 1,walive= 1.0;•rheading =eforward ,x, where eforward is the basis vector oftorso’s xaxis in global coordinate system, wforward = 1.0;•raction=∥a∥22, where ais the joints action, waction=−0.5•rlateral =vtorso,yis penalty for lateral velocity, wlateral =−1.0The reward is −1for termination step. The robot has a low-level joint controller. The action is the target angular positionof all joints.d) Trifinger Move: Trifinger [21] is a 3-finger manipu-lator for learning dexterity. The goal of the task is to move acube from a random initial pose to an arbitrary 6-DoF targetposition and orientation. The environment is the same as thatof [3], except that we remove the auxiliary penalty for fingermovement, which increases the difficulty of the task. The robothas a low-level joint controller. The action is the target angularposition of all joints.e) Ant Acrobatic: In this task, an ant learns to do com-plex acrobatics (e.g. heading a pole) on a ball, which extremelychallenges the ability of agents to maintain balance. The actionspace is 8 dimensions. At each non-terminating step, thereward r=walive×ralive+waction×raction+wenergy×renergy ,where•ralive= 1,walive= 0.5;•raction=∥a∥22, where ais joints action, waction=−0.005•renergy is joints power consumption, wenergy =−0.05The reward is −1for termination step. 
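As an illustration of how the weighted terms above combine into the step reward, here is a minimal sketch with the quoted Ant Acrobatic weights; the variable names are ours, not the benchmark's API, and the action and controller details of this task continue right after the sketch.

```python
import numpy as np

def ant_acrobatic_reward(action, joint_power, terminated,
                         w_alive=0.5, w_action=-0.005, w_energy=-0.05):
    """Step reward r = w_alive*r_alive + w_action*||a||^2 + w_energy*power, -1 on termination."""
    if terminated:
        return -1.0
    r_alive = 1.0
    r_action = float(np.sum(np.asarray(action) ** 2))   # ||a||^2
    r_energy = joint_power                                # joints' power consumption
    return w_alive * r_alive + w_action * r_action + w_energy * r_energy
```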
The action is the forceapplied to all joints.We conclude the observation space for each task in Table IIfor easier reading.TABLE II: Tasks InformationHUMANOID TIGHTROPE HUMANOID FOOTBALL TRIFINGER MOVE A1 B EAM ANTACROBATICOBSERVATION DIMENSION 74 80 41 47 57oCTORSOyTORSO yTORSO yTORSO xTORSOzTORSO zTORSO zTORSO yTORSOvTORSO ,x vTORSO ,x vTORSO ,x zTORSOvTORSO ,y vTORSO ,y vTORSO ,y vTORSO ,xvTORSO ,z vTORSO ,z vTORSO ,z vTORSO ,yωTORSO ,x ωTORSO ,x ωTORSO ,x vTORSO ,zωTORSO ,y ωTORSO ,y ωTORSO ,y ωTORSO ,xωTORSO ,z ωTORSO ,z ωTORSO ,z ωTORSO ,yαTORSO αTORSO αTORSO ωTORSO ,zβTORSO βTORSO βTORSO αTORSOγTORSO γTORSO γTORSO βTORSOγTORSOTORSO JOINTSθLOWER WAIST ,x θLOWER WAIST ,xθLOWER WAIST ,y θLOWER WAIST ,yθPELVIS ,x θPELVIS ,xωLOWER WAIST ,x ωLOWER WAIST ,xωLOWER WAIST ,y ωLOWER WAIST ,yωPELVIS ,x ωPELVIS ,xaLOWER WAIST ,x aLOWER WAIST ,xaLOWER WAIST ,y aLOWER WAIST ,yaPELVIS ,x aPELVIS ,xEXTERNAL OBJECTSxBALL xCUBE xPOLEyBALL yCUBE yPOLEzBALL zCUBE zPOLEvBALL,x HCUBE,x vPOLE,xvBALL,y HCUBE,y vPOLE,yvBALL,z HCUBE,z vPOLE,zHCUBE,w ωPOLE,xxCUBE TARGET ωPOLE,yyCUBE TARGET ωPOLE,zzCUBE TARGET UP POLE,xHCUBE TARGET ,x UP POLE,yHCUBE TARGET ,y UP POLE,zHCUBE TARGET ,z xBALLHCUBE TARGET ,w yBALLzBALLvBALL,xvBALL,yvBALL,zωBALL,xωBALL,yωBALL,zoS,i LIMB JOINTSθUPPER ARM ,x θUPPER ARM ,x θFINGER UPPER θFRONT HIPθUPPER ARM ,z θUPPER ARM ,z θFINGER MIDDLE θFRONT THIGHθLOWER ARM ,x θLOWER ARM ,x θFINGER LOWER θFRONT CALFθTHIGH,x θTHIGH,x ωFINGER UPPER θREAR HIPθTHIGH,y θTHIGH,y ωFINGER MIDDLE θREAR THIGHθTHIGH,z θTHIGH,z ωFINGER LOWER θREAR CALFθKNEE,x θKNEE,x aFINGER UPPER ωFRONT HIPθFOOT,x θFOOT,x aFINGER MIDDLE ωFRONT THIGHθFOOT,y θFOOT,y aFINGER LOWER ωFRONT CALFωUPPER ARM ,x ωUPPER ARM ,x ωREAR HIPωUPPER ARM ,z ωUPPER ARM ,z ωREAR THIGHωLOWER ARM ,x ωLOWER ARM ,x ωREAR CALFωTHIGH,x ωTHIGH,x aFRONT HIPωTHIGH,y ωTHIGH,y aFRONT THIGHωTHIGH,z ωTHIGH,z aFRONT CALFωKNEE,x ωKNEE,x aREAR HIPωFOOT,x ωFOOT,x aREAR THIGHωFOOT,y ωFOOT,y aREAR CALFaUPPER ARM ,x aUPPER ARM ,xaUPPER ARM ,z aUPPER ARM ,zaLOWER ARM ,x aLOWER ARM ,xaTHIGH,x aTHIGH,xaTHIGH,y aTHIGH,yaTHIGH,z aTHIGH,zaKNEE,x aKNEE,xaFOOT,x aFOOT,xaFOOT,y aFOOT,y|N| 2 2 3 2 4ACTION DIMENSION 21 21 9 12 8 |
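Closing this appendix, the invariance of the mean-pooled value network argued in Appendix A2 is also easy to verify numerically. The toy check below uses stand-in random maps for Ψ and Θ and a simple cyclic roll as the transformation T_i; it is our own sanity check, not part of the released code.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3                                             # number of symmetric parts
W_psi = rng.normal(size=(8, 15))                  # stand-in shared feature map Psi
W_theta = rng.normal(size=(1, 8))                 # stand-in value head Theta
Psi = lambda x: np.tanh(W_psi @ x)
Theta = lambda z: W_theta @ z

def T(o, i):                                      # toy cyclic transformation T_i
    return np.roll(o, i * (len(o) // N))

def value(o):                                     # V(o) = Theta(mean_j Psi(T_j(o)))
    return Theta(np.mean([Psi(T(o, j)) for j in range(N)], axis=0))

o = rng.normal(size=15)
assert all(np.allclose(value(o), value(T(o, i))) for i in range(N))
```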
BbFl6GOleK | Geometric Algebra TransformersJohann Brehmer Pim de Haan Sönke Behrends Taco CohenQualcomm AI Research1{jbrehmer, pim, sbehrend, tacos}@qti.qualcomm.comAbstract —Problems involving geometric data arise in a varietyof fields, including computer vision, robotics, chemistry, andphysics. Such data can take numerous forms, such as points,direction vectors, planes, or transformations, but to date there isno single architecture that can be applied to such a wide variety ofgeometric types while respecting their symmetries. In this paperwe introduce the Geometric Algebra Transformer (GATr), ageneral-purpose architecture for geometric data. GATr representsinputs, outputs, and hidden states in the projective geometricalgebra, which offers an efficient 16-dimensional vector spacerepresentation of common geometric objects as well as operatorsacting on them. GATr is equivariant with respect to E(3) , thesymmetry group of 3D Euclidean space. As a transformer, GATris scalable, expressive, and versatile. In experiments with n-bodymodeling and robotic planning, GATr shows strong improvementsover non-geometric baselines.I. I NTRODUCTIONFrom molecular dynamics to astrophysics, from materialdesign to robotics, fields across science and engineering dealwith geometric data: points, directions, surfaces, orientations,and so on. The geometric nature of data provides a richstructure: a notion of common operations between geometrictypes (computing distances between points, applying rotationsto orientations, etc.), a well-defined behaviour of data undertransformations of a system, and the independence of certainproperties of coordinate system choices.When learning relations from geometric data, incorporatingthis rich structure into the architecture has the potential toimprove the performance, especially in the low-data regime. Toimplement such an inductive bias, it is useful to first categorizeinputs, outputs, and internal data into certain object types, forinstance group representations. Next, the functions mappingbetween these types have certain regularity constraints imposed,for instance based on equivariance [6].In this spirit, we introduce the Geometric Algebra Trans-former (GATr), a general-purpose network architecture forgeometric data. GATr brings together three key ideas.Geometric algebra: To naturally describe both geometricobjects as well as their transformations in three-dimensionalspace, GATr represents data as multivectors of the projectivegeometric algebra G3,0,1. Geometric algebra is an elegant,versatile and practical mathematical framework for geomet-rical computations. The particular algebra G3,0,1extendsthe vector space R3to 16-dimensional multivectors, whichcan natively represent various geometric types and E(3)poses. In this framework, common interactions between1Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.geometric data types can be computed with few operations,in particular the geometric product.Equivariance: To behave consistently under transformations,GATr is equivariant with respect to E(3) , the symmetrygroup of three-dimensional space. To this end, we developseveral new E(3) -equivariant primitives mapping betweenmultivectors, including equivariant linear maps, an attentionmechanism, nonlinearities, and normalization layers.Transformer: Due to its favorable scaling properties, ex-pressiveness, trainability, and versatility, the transformerarchitecture [ 23] has become the de-facto standard for awide range of problems. 
GATr is based on the transformer architecture, and hence inherits these benefits.
GATr hence combines two lines of research: the representation of geometric objects with geometric algebra [9, 10, 18], popular in computer graphics and physics and recently gaining traction in deep learning [3, 19, 21], and the encoding of symmetries through equivariant deep learning [7]. The result, to the best of our knowledge the first E(3)-equivariant architecture with internal geometric algebra representations, is a versatile network for problems involving geometric data. We demonstrate GATr in a robotic planning problem, where it significantly outperforms non-geometric baselines.

II. GEOMETRIC ALGEBRA IN A NUTSHELL
We begin with the briefest of introductions to geometric algebra. For an in-depth introduction, we point the interested reader to Refs. [9, 10, 18, 19].
Whereas a plain vector space like R^3 allows us to take linear combinations of elements x and y (vectors), a geometric algebra additionally has a bilinear associative operation: the geometric product, denoted simply by xy. By multiplying vectors, one obtains so-called multivectors, which can represent both geometrical objects and operators. Multivectors can be expanded on a multivector basis, characterized by their dimensionality or grade, such as scalars (grade 0), vectors ei (grade 1), bivectors ei ej (grade 2), all the way up to the pseudoscalar e1···ed (grade d). The symmetric and antisymmetric parts of the geometric product are called the interior and exterior (wedge) product. Finally, we will require the dualization operator x ↦ x*. It acts on basis elements by swapping "empty" and "full" dimensions, e.g. sending e1 ↦ e23.
In order to represent three-dimensional objects as well as arbitrary rotations and translations acting on them, we work with the projective geometric algebra G3,0,1 [9, 18, 19]. Here one adds a fourth homogeneous coordinate x0 e0 to the 3D vector space, yielding a 2^4 = 16-dimensional geometric algebra. The metric of G3,0,1 is such that e0^2 = 0 and ei^2 = 1 for i = 1, 2, 3.

Object / operator                                             | 1  | e0 | ei | e0i | eij | e0ij | e123 | e0123
Scalar λ ∈ R                                                  | λ  | 0  | 0  | 0   | 0   | 0    | 0    | 0
Plane w/ normal n ∈ R^3, origin shift d ∈ R                   | 0  | d  | n  | 0   | 0   | 0    | 0    | 0
Line w/ direction n ∈ R^3, orthogonal shift s ∈ R^3           | 0  | 0  | 0  | s   | n   | 0    | 0    | 0
Point p ∈ R^3                                                 | 0  | 0  | 0  | 0   | 0   | p    | 1    | 0
Pseudoscalar μ ∈ R                                            | 0  | 0  | 0  | 0   | 0   | 0    | 0    | μ
Reflection through plane w/ normal n ∈ R^3, origin shift d ∈ R| 0  | d  | n  | 0   | 0   | 0    | 0    | 0
Translation t ∈ R^3                                           | 1  | 0  | 0  | t/2 | 0   | 0    | 0    | 0
Rotation expressed as quaternion q ∈ R^4                      | q0 | 0  | 0  | 0   | qi  | 0    | 0    | 0
Point reflection through p ∈ R^3                              | 0  | 0  | 0  | 0   | 0   | p    | 1    | 0
(The columns group into the scalar (1), vector (e0, ei), bivector (e0i, eij), trivector (e0ij, e123), and pseudoscalar (e0123) grades.)

TABLE I: Embeddings of common geometric objects and transformations into the projective geometric algebra G3,0,1. The columns show different components of the multivectors with the corresponding basis elements, with i, j ∈ {1, 2, 3}, j ≠ i, i.e. ij ∈ {12, 13, 23}. For simplicity, we fix gauge ambiguities (the weight of the multivectors) and leave out signs (which depend on the ordering of indices in the basis elements).

We can use G3,0,1 to represent transformations: a vector u represents the reflection of other elements in the hyperplane orthogonal to u. Since any orthogonal transformation is equal to a sequence of reflections, this allows us to express any such transformation as a geometric product of (unit) vectors, u = u1···uk. These form the Pin group, which turns out to be the double cover of E(3).
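Before moving on to how transformations act on multivectors, the Table I dictionary can be illustrated with a short sketch. The basis ordering and the coordinate-to-blade assignment below are assumptions made for this example (Table I itself leaves signs and ordering conventions open), so the official GATr implementation may differ.

```python
# Hedged sketch of a few Table I embeddings into 16-dimensional multivectors
# of G(3,0,1). The basis ordering is an illustrative assumption.
import numpy as np

BASIS = ["1", "e0", "e1", "e2", "e3", "e01", "e02", "e03",
         "e12", "e13", "e23", "e012", "e013", "e023", "e123", "e0123"]
IDX = {b: i for i, b in enumerate(BASIS)}

def embed_plane(n, d):
    """Plane with normal n in R^3 and origin shift d: d on e0, n on e1..e3."""
    mv = np.zeros(16)
    mv[IDX["e0"]] = d
    mv[IDX["e1"]:IDX["e1"] + 3] = n
    return mv

def embed_point(p):
    """Point p in R^3: weight 1 on e123, coordinates on the e0ij trivectors
    (the coordinate-to-blade assignment and signs are convention-dependent)."""
    mv = np.zeros(16)
    mv[IDX["e123"]] = 1.0
    mv[IDX["e012"]:IDX["e012"] + 3] = p
    return mv

def embed_translation(t):
    """Translation by t in R^3: scalar 1 plus t/2 on the e0i bivectors."""
    mv = np.zeros(16)
    mv[IDX["1"]] = 1.0
    mv[IDX["e01"]:IDX["e01"] + 3] = 0.5 * np.asarray(t)
    return mv

point = embed_point([0.0, 1.0, 2.0])
shift = embed_translation([1.0, 0.0, 0.0])
print(point.shape, shift.shape)  # two 16-dimensional multivectors
```

Actually applying the translation to the point would use the sandwich product of Eq. (1) below, which requires the full geometric-product table of G3,0,1 and is therefore omitted from this sketch.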
In order to apply elements of thePin group to an arbitrary multivector x, one uses the sandwichproduct :ρu(x) =(uxu−1ifuis evenuˆxu−1ifuis odd(1)Here ˆxis the grade involution , which flips the sign of odd-grade elements such as vectors and trivectors, while leavingeven-grade elements unchanged.Following Refs. [ 9,18,19], we represent planes with vectors,and require that the intersection of two geometric objects isgiven by the wedge product of their representations. Lines (theintersection of two planes) are thus represented as bivectors,points (the intersection of three planes) as trivectors. Thisleads to a duality between objects and operators, where objectsare represented like transformations that leave them invariant.Table I provides a dictionary of these embeddings. It is easyto check that this representation is consistent with using thesandwich product for transformations.We construct network layers that are equivariant with respecttoE(3) , or equivalently its double cover Pin(3 ,0,1). Afunction f:G3,0,1→G3,0,1isPin(3 ,0,1)-equivariant withrespect to the representation ρ(orPin(3 ,0,1)-equivariant forshort) if f(ρu(x)) = ρu(f(x))for any u∈Pin(3 ,0,1)andx∈G3,0,1.III. T HEGEOMETRIC ALGEBRA TRANSFORMERA. Architecture overviewThe Geometric Algebra Transformer (GATr) is designedbased on three principles outlines in the introduction: a stronginductive bias for geometric data through a representationbased on geometric algebra, symmetry awareness throughE(3) equivariance, and scalability and versatility through atransformer architecture.We sketch GATr in Fig. 1. In the top row, we show theoverall workflow. If necessary, raw inputs are first preprocessedinto geometric types. The geometric objects are then embeddedinto multivectors of the geometric algebra G3,0,1, followingthe recipe described in Tbl. I.The multivector-valued data are processed with a GATrnetwork. We show this architecture in more detail in thebottom row of Fig. 1. GATr consists of Ntransformer blocks,each consisting of an equivariant multivector LayerNorm, anequivariant multivector self-attention mechanism, a residualconnection, another equivariant LayerNorm, an equivariantmultivector MLP with geometric bilinear interactions, andanother residual connection. The architecture is thus similar toa typical transformer [ 23] with pre-layer normalization [ 1,24],but adapted to correctly handle multivector data and be E(3)equivariant. We describe the individual layers below.Finally, from the outputs of the GATr network we extract thetarget variables, again following the mapping given in Tbl. I.B. GATr primitivesa) Linear layers: We begin with linear layers betweenmultivectors. In Appendix A, we show that the equivariancecondition severely constrains them:Proposition 1. Any linear map φ:Gd,0,1→Gd,0,1that isequivariant to Pin(d,0,1)is of the formφ(x) =d+1Xk=0wk⟨x⟩k+dXk=0vke0⟨x⟩k (2)for parameters w∈Rd+2, v∈Rd+1. Here ⟨x⟩kis the bladeprojection of a multivector, which sets all non-grade- kelementsto zero.Thus, E(3) -equivariant linear maps between G3,0,1multivec-tors can be parameterized with nine coefficients, five of whichare the grade projections and four include a multiplication withthe homogeneous basis vector e0. We thus parameterize affinelayers between multivector-valued arrays with Eq. 
(2), withlearnable coefficients wkandvkfor each combination of input=Equilinear×N🐊GATrEquilayer normGeo attn.Equilinear+EquilinearEquilinearRaw inputsGeometric typesMultivector& scalar inputsMultivector& scalar outputsRaw outputsPreprocessEmbed in geometric algebra🐊GATrExtract from geometric algebra11Geo bilinearEquilayer normEquilinearGated GELUEquilinear+Fig. 1: Overview over the GATr architecture. Boxes with solid lines are learnable components, those with dashed lines are fixed.channel and output channel. In addition, there is a learnablebias term for the scalar components of the outputs (biases forthe other components are not equivariant).b) Geometric bilinears: Equivariant linear maps are notsufficient to build expressive networks. The reason is that theseoperations allow for only very limited grade mixing. For thenetwork to be able to construct new geometric features fromexisting ones, such as the translation vector between two points,two additional primitives are essential.The first is the geometric product x, y7→xy, the fundamentalbilinear operation of geometric algebra. It allows for substantialmixing between grades: for instance, the geometric productof vectors consists of scalars and bivector components. Thegeometric product is equivariant (Appendix A).The second geometric primitive we use derived from theso-called join2x, y7→(x∗∧y∗)∗. This map may appearcomplicated, but it plays a simple role in our architecture:an equivariant map that involves the dual x7→x∗. Includingthe dual in an architecture is essential for expressivity: inG3,0,1, without any dualization it is impossible to representeven simple functions such as the Euclidean distance betweentwo points [ 9]; we show this in Appendix A. While thedual itself is not Pin(3 ,0,1)-equivariant (w. r. t. ρ), the joinoperation is equivariant to even (non-mirror) transformations.To make the join equivariant to mirrorings as well, we multiplyits output with a pseudoscalar derived from the networkinputs: x, y, z 7→EquiJoin( x, y, z ) =z0123(x∗∧y∗)∗, wherez0123∈Ris the pseudoscalar component of a referencemultivector z.We define a geometric bilinear layer thatcombines the geometric product and the joinof the two inputs as Geometric( x, y;z) =Concatenate channels (xy,EquiJoin( x, y;z)). In GATr,this layer is included in the MLP.c) Nonlinearities and normalization: We use scalar-gatedGELU nonlinearities [ 12]GatedGELU( x) = GELU( x1)x,where x1is the scalar component of the multivector x.Moreover, we define an E(3) -equivariant LayerNorm operation2Technically, the join has an anti-dual, not the dual, in the output. We leavethis detail out for notational simplicity.for multivectors as LayerNorm( x) =x/pEc⟨x, x⟩, where theexpectation goes over channels and we use the invariant innerproduct ⟨·,·⟩ofG3,0,1.d) Attention: Given multivector-valued query, key, andvalue tensors, each consisting of niitems (or tokens) andncchannels (key length), we define the E(3) -equivariantmultivector attention asAttention( q, k, v )i′c′=XiSoftmax i Pc⟨qi′c, kic⟩√8nc!vic′.(3)Here the indices i, i′label items, c, c′label channels, and ⟨·,·⟩is the invariant inner product of the geometric algebra. Justas in the original transformer [ 23], we thus compute scalarattention weights with a scaled dot product; the difference isthat we use the inner product of G3,0,1.We extend this attentionmechanism to multi-head self-attention in the usual way.C. 
Extensionsa) Auxiliary scalar representations: While multivectorsare well-suited to model geometric data, many problems containnon-geometric information as well. Such scalar informationmay be high-dimensional, for instance in sinosoidal positionalencoding schemes. Rather than embedding into the scalarcomponents of the multivectors, we add an auxiliary scalarrepresentation to the hidden states of GATr. Each layer thushas both scalar and multivector inputs and outputs. They havethe same batch dimension and item dimension, but may havedifferent number of channels.This additional scalar information interacts with the multi-vector data in two ways. In linear layers, we allow the auxiliaryscalars to mix with the scalar component of the multivectors.In the attention layer, we compute attention weights both fromthe multivectors, as given in Eq. (3), and from the auxiliaryscalars, using a regular scaled dot-product attention. The twoattention maps are summed before computing the softmax,and the normalizing factor is adapted. In all other layers, thescalar information is processed separately from the multivectorinformation, using the unrestricted form of the multivectormap. For instance, nonlinearities transform multivectors withMethod RewardGATr-Diffuser (ours) 74.8±1.7Transformer-Diffuser 69.8±1.9Diffuser [15] (reproduced) 57.7±1.8Diffuser [15] 58.7±2.5EDGI [5] 62.0±2.1CQL [17] 24.4BCQ [11] 0.0TABLE II: Diffusion-based robotic planning. We show the normalizedcumulative rewards achieved on a robotic block stacking task [ 15],where 100 is optimal and means that each block stacking task iscompleted successfully, while 0 corresponds to a failure to stackany blocks. We show the mean and standard error over at least 100evaluation episodes. The top three results were computed in the GATrcode base, the bottom four taken from the literature [5, 15].equivariant gated GELUs and auxiliary scalars with regularGELU functions.b) Rotary positional embeddings: GATr assumes the datacan be described as a set of items (or tokens). If these items aredistinguishable and form a sequence, we encode their positionusing rotary position embeddings [ 22] in the auxiliary scalarvariables.c) Axial attention over objects and time: The architectureis flexible about the structure of the data. In some use cases,there will be a single dimension along which objects areorganized, for instance when describing a static scene or thetime evolution of a single object. But GATr also supports theorganization of a problem along multiple axes, for examplewith one dimension describing objects and another time steps.In this case, we follow an axial transformer layout [ 13],alternating between transformer blocks that attend over differentdimensions. (The not-attended dimensions in each block aretreated like a batch dimension.)IV. R OBOTIC PLANNING THROUGH INVARIANT DIFFUSIONIn Appendix C, we demonstrate Kuka on a synthetic n-bodyregression problem. We find that it outperforms non-geometricbaselines and the E(3) -equivariant SEGNN in terms of sampleefficiency and generalization.In this section of the main paper, we restrict ourselves toa robotics experiment. We show how GATr defines an E(3) -invariant diffusion model, that it can be used for model-basedreinforcement learning and planning, and that this combinationis well-suited to solve robotics problems.We follow Janner et al. [15], who propose to treat learning aworld model and planning within that model as a unified genera-tive modeling problem. 
After training a diffusion model [ 20] onoffline trajectories, one can use it in a planning loop, samplingfrom it conditional on the current state, desired future states,or to maximize a given reward, as needed.We embed a GATr model in this algorithm and call thiscombination GATr-Diffuser . GATr is equivariant with respecttoE(3) and the object permutation group Sn. When usedtogether with a base density that is E(3)×Sn-invariant, thediffusion model is also E(3)×Sn-invariant [ 2,16]. Often,a particular task requires breaking this symmetry: imagine,for instance, that a particular object needs to be moved to101102103104Training trajectories020406080RewardRobotic block stackingGATr-Diffuser (ours)Transformer-DiffuserDiffuser (reproduced) EDGI [Brehmer '23]Diffuser [Janner '22]CQL [Kumar '20]BCQ [Fujimoto '18]Fig. 2: Diffusion-based robotic planning. We show normalized rewards(higher is better) as in Tbl. II as a function of training dataset size.GATr ( ) is more successful at block stacking and more sample-efficient than the baselines, including the original Diffuser model [ 15]( ) and our modification of it based on a Transformer ( ). Ingrey, we show results reported in the literature [5, 15].a particular location. The Diffuser approach is an excellentmatch for such situations, as conditioning on the current state,future state, or a reward model as proposed by Janner et al.[15] can softly break the symmetry group as desired [5].GATr-Diffuser is demonstrated on the problem of a Kukarobotic gripper stacking blocks using the “unconditional”environment introduced by Janner et al. [15]. We train aGATr-Diffuser model on the offline trajectory dataset publishedwith that paper. To facilitate a geometric interpretation, weparameterize the data in terms of geometric quantities likeobject positions and orientations. In particular, we use theposition and pose of the robotic endeffector as features andmap to joint angles with an inverse kinematics model. We thentest GATr-Diffuser on its ability to stack four blocks on eachother. We compare our GATr-Diffuser model to a reproductionof the original Diffuser model (based on the published code,but using our data parameterization) and a new transformerbackbone for the Diffuser model. In addition, we show thepublished results of Diffuser [ 15], the equivariant EDGI [ 5],and the offline RL algorithms CQL [ 17] and BCQ [ 11] aspublished in Ref. [ 15]. The problem and hyperparameters aredescribed in detail in Appendix D.As shown in Tbl. II and Fig. 2, GATr-Diffuser is able tosolve the block-stacking problem better than all baselines. It isalso clearly more sample-efficient, matching the performance ofa Diffuser model trained on the full dataset even when trainingonly on 1% of the trajectories. The fact that GATr-Diffuser alsooutperforms the E(3) -equivariant EDGI model [ 5] is evidencethat equivariance alone is not the key to its success, hintingthat the geometric algebra provides a useful inductive bias.REFERENCES[1]Alexei Baevski and Michael Auli. Adaptive inputrepresentations for neural language modeling. arXivpreprint arXiv:1809.10853 , 2018.[2]Avishek Joey Bose and Ivan Kobyzev. Equivariant finitenormalizing flows. arXiv preprint arXiv:2110.08649 ,2021.[3]Johannes Brandstetter, Rianne van den Berg, Max Welling,and Jayesh K Gupta. Clifford neural layers for PDEmodeling. arXiv preprint arXiv:2209.04934 , 2022.[4]Johannes Brandstetter, Rob Hesselink, Elise van der Pol,Erik J Bekkers, and Max Welling. 
Geometric and physicalquantities improve E(3) equivariant message passing. InInternational Conference on Learning Representations ,2022.[5]Johann Brehmer, Joey Bose, Pim De Haan, and TacoCohen. EDGI: Equivariant Diffusion for Planning withEmbodied Agents. ICLR workshop on ReincarnatingReinforcement Learning , 2023.[6]Michael M Bronstein, Joan Bruna, Taco Cohen, andPetar Veli ˇckovi ́c. Geometric deep learning: Grids, groups,graphs, geodesics, and gauges. 2021.[7]Taco Cohen and Max Welling. Group equivariantconvolutional networks. In International conference onmachine learning , pages 2990–2999. PMLR, 2016.[8]Erwin Coumans and Yunfei Bai. PyBullet, a Pythonmodule for physics simulation for games, robotics andmachine learning. http://pybullet.org, 2016–2019.[9]Leo Dorst. A guided tour to the plane-based geometricalgebra pga. 2020. URL https://geometricalgebra.org/downloads/PGA4CS.pdf.[10] Leo Dorst, Daniel Fontijne, and Stephen Mann. Geomet-ric Algebra for Computer Science: An Object-orientedApproach to Geometry . Morgan Kaufmann Series inComputer Graphics. Morgan Kaufmann, Amsterdam,2007. ISBN 978-0-12-369465-2.[11] Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration.InInternational conference on machine learning , pages2052–2062. PMLR, 2019.[12] Dan Hendrycks and Kevin Gimpel. Gaussian error linearunits (gelus). arXiv preprint arXiv:1606.08415 , 2016.[13] Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, andTim Salimans. Axial attention in multidimensionaltransformers. arXiv:1912.12180 [cs] , December 2019.[14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoisingdiffusion probabilistic models. In Neural InformationProcessing Systems , 2020.[15] Michael Janner, Yilun Du, Joshua Tenenbaum, and SergeyLevine. Planning with diffusion for flexible behaviorsynthesis. In International Conference on MachineLearning , 2022.[16] Jonas Köhler, Leon Klein, and Frank Noé. Equivariantflows: exact likelihood generative learning for symmet-ric densities. In International conference on machinelearning , pages 5361–5370. PMLR, 2020.[17] Aviral Kumar, Aurick Zhou, George Tucker, and SergeyLevine. Conservative q-learning for offline reinforcementlearning. Advances in Neural Information ProcessingSystems , 33:1179–1191, 2020.[18] Martin Roelfs and Steven De Keninck. Graded sym-metry groups: plane and simple. arXiv preprintarXiv:2107.03771 , 2021.[19] David Ruhe, Jayesh K Gupta, Steven de Keninck, MaxWelling, and Johannes Brandstetter. Geometric cliffordalgebra networks. arXiv preprint arXiv:2302.06594 , 2023.[20] Jascha Sohl-Dickstein, Eric Weiss, NiruMaheswaranathan, and Surya Ganguli. Deep unsupervisedlearning using nonequilibrium thermodynamics. InInternational Conference on Machine Learning , pages2256–2265. PMLR, 2015.[21] Matthew Spellings. Geometric algebra attention networksfor small point clouds. arXiv preprint arXiv:2110.02393 ,2021.[22] Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and YunfengLiu. Roformer: Enhanced transformer with rotary positionembedding. arXiv preprint arXiv:2104.09864 , 2021.[23] Ashish Vaswani, Noam Shazeer, Niki Parmar, JakobUszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser,and Illia Polosukhin. Attention is all you need. InAdvances in Neural Information Processing Systems ,volume 30, 2017.[24] Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, ShuxinZheng, Chen Xing, Huishuai Zhang, Yanyan Lan, LiweiWang, and Tieyan Liu. On layer normalization in thetransformer architecture. 
In International Conference onMachine Learning , pages 10524–10533. PMLR, 2020.APPENDIXA. Theoretical resultsIn this section, we state or prove several properties ofequivariant maps between geometric algebras that we use inthe construction of GATr.The grade involution is a linear involutive bijection b·:Gn,0,r:Gn,0,r, which sends a k-blade xtobx= (−1)kx.Note that this is an algebra automorphism cxy=bxby, andalso an ∧-algebra automorphism. The reversal in a linearinvolutive bijection e·:Gn,0,r:Gn,0,rwhich sends a k-bladex=x1∧x2∧...∧xkto the reverse: ex=xk∧...∧x2∧x1=±xwith +xifk∈ {0,1,4,5, ...,8,9, ...}and−xotherwise.Note that this is an anti-automorphism (contravariant functor):fxy=eyex.Here we denote the sandwich action of u∈Pin(n,0, r)on a multivector xnot as ρu(x), but as u[x]. For odd u,u[x] =ubxu−1, while for even u,u[x] =uxu−1. The sandwichaction is linear by linearity of the b·and bilinearity of thegeometric product. Furthermore, note that for any particular u∈Pin(n,0, r), the action is a geometric algebra homomorphism:u[ab] =ubabu−1=ubau−1ubbu−1=u[a]u[b]. By linearity anda symmetrization argument [ 10, Sec 7.1], one can show that italso a ∧-algebra homomorphism (outermorphism): u[a∧b] =u[a]∧u[b].Letl≥k. Given a k-vector aandl-vector b, define the leftcontraction asa⌋b:=⟨ab⟩l−k, which is a l−k-vector. Fork= 1, and ba blade b=b1∧...∧bl. Geometrically, a⌋bis theprojection of ato the space spanned by the vectors bi. Thuswe have that a⌋b= 0⇐⇒ ∀ i,⟨a, bi⟩= 0 [10, Sec 3.2.3], inwhich case we define aandbto be orthogonal . In particular,two vectors a, bare orthogonal if their inner product is zero.Futhermore, we define a vector ato be tangential to blade bifa∧b= 0.In the projective algebra, a blade xis defined to be ideal ifit can be written as x=e0∧yfor another blade y.1) Linear maps: We begin with Pin-equivariant linear maps.After some technical lemmata, we prove the most general formof linear equivariant maps in the Euclidean geometric algebraGn,0,0, and then also in projective geometric algebra Gn,0,1.Proposition 2. The grade projection ⟨·⟩kis equivariant [ 10,Sec 13.2.3].Proof:Choose an l-blade x=a1∧a2∧...∧al. Letube a 1-versor.As the action uis an outermorphism, u[x] =u[a1]∧...∧u[al]is an l-blade. Now if l̸=k, then⟨x⟩k= 0 and thus u[⟨x⟩k] =⟨u[x]⟩k. Ifl=k, then ⟨x⟩k=xand thus u[⟨x⟩k] =⟨u[x]⟩k.As the grade projection is linear, equivariance extends to anymultivector.Proposition 3. The following map is equivariant: φ:G3,0,1→G3,0,1:x7→e0x.Proof: Let ube a 1-versor, then uacts on a multivectorasx7→u[x] =uˆxu−1, where ˆxis the grade involution.Note that e0is invariant: u[e0] =−ue0u−1=e0uu−1=e0,where ue0=−e0ubecause uande0are orthogonal: ue0=⟨u, e0⟩+u∧e0=−e0∧u=−e0∧u. Then φis equivariant, asthe action is an algebra homomorphism: u[φ(x)] =u[e0x] =ude0xu−1=uˆe0u−1uˆxu−1=u[e0]u[x] =e0u[x] =φ(u[x]).It follows that φis also equivariant to any product of vectors,i.e. any versor u.a) Euclidean geometric algebra: Before constructing themost general equivariant linear map between multivectors inprojective geometric algebra, we begin with the Euclidean caseGn,0,0.Theorem 1 (Cartan-Dieuodonné) .Every orthogonal transfor-mation of an n-dimensional space can be decomposed into atmost nreflections in hyperplanes.Proof: This theorem is proven in Roelfs and De Keninck[18].Lemma 1. In the n-dimensional Euclidean geometric algebraGn,0,0, the group Pin(n,0,0)acts transitively on the space ofk-blades of norm λ∈R>0.Proof: As the Pingroup preserves norm, choose λ= 1without loss of generality. 
Any k-blade xof unit norm can bewritten by Gram-Schmidt factorization as the wedge productofkorthogonal vectors of unit norm x=v1∧v2∧...∧vk.Consider another k-blade y=w1∧w2∧...∧wkwith wiorthonormal. We’ll construct a u∈Pin(n,0,0)such thatu[x] =y.Choose n−kadditional orthonormal vectors vk+1, ..., v nandwk+1, .., .w nto form orthonormal bases. Then, thereexists a unique orthogonal transformation Rn→Rnthatmaps viinto wifor all i∈ {1, ..., n}. By the Cartan-Dieuodonné theorem 1, this orthogonal transformation canbe expressed as the product of reflections, thus there existsau∈Pin(n,0,0)such that u[vi] =wi. As the uaction isa∧-algebra homomorphism ( u[a∧b] =u[a]∧u[b], for anymultivectors a, b), we have that u[x] =y.Lemma 2. In the Euclidean ( r= 0) or projective ( r= 1)geometric algebra Gn,0,r, letxbe a k-blade. Let ube a 1-versor. Then u[x] =x⇐⇒ u⌋x= 0 andu[x] =−x⇐⇒u∧x= 0.Proof: Let xbe a k-blade and ua vector of unit norm.We can decompose uintou=t+vwitht∧x= 0 (the parttangential to the subspace of x) and v⌋x= 0 (the normalpart). This decomposition is unique unless xis ideal in theprojective GA, in which case the e0component of uis bothnormal and tangential, and we choose tEuclidean.In either case, note the following equalities: xt=(−1)k−1tx;xv= (−1)kvx;vt=−tvand note ∄λ̸= 0 suchthatvtx=λx, which can be shown e. g. by picking a basis.Then:u[x] = (−1)k(t+v)x(t+v)= (t+v)(−t+v)x= (−∥t∥2+∥v∥2)x−2vtx .We have u[x]∝x⇐⇒ vtx= 0. Ifxis not ideal, this impliesthat either v= 0 (thus u∧x= 0 andu[x] =−x) ort= 0(thus u⌋x= 0 andu[x] =x). Ifxis ideal, this implies thateither v∝e0(thus u∧x= 0 andu[x] =−x) ort= 0 (thusu⌋x= 0 andu[x] =x).Lemma 3. Letr∈ {0,1}. Any linear Pin(n,0, r)-equivariantmapφ:Gn,0,r→Gn,0,rcan be decomposed into a sum ofequivariant maps φ=Plkmφlkm, with φlkmequivariantlymapping k-blades to l-blades. If r= 0 (Euclidean algebra) ork < n + 1, such a map φlkmis defined by the image of anyone non-ideal k-blade, like e12...k. Instead, if r= 1 (projectivealgebra) and k=n+ 1, then such a map is defined by theimage of a pseudoscalar, like e01...n.Proof: The Pin(n,0, r)group action maps k-vectors tok-vectors. Therefore, φcan be decomposed into equivariantmaps from grade kto grade l:φ(x) =Plkφlk(⟨x⟩k), withφlkhaving l-vectors as image, and all k′-vectors in the kernel,fork′̸=k. Let xbe an non-ideal k-blade (or pseudoscalarifk=n+ 1) . By lemmas 1 and 4, in both Euclidean andprojective GA, the span of the k-vectors in the orbit of xcontains any k-vector. So φlkis defined by the l-vector y=φlk(x). Any l-vector can be decomposed as a finite sum of l-blades: y=y1+...yM. We can define φlkm(x) =ym, extendedto all l-vectors by equivariance, and note that φlk=Pmφlkm.Proposition 4. For an n-dimensional Euclidean geometricalgebra Gn,0,0, any linear endomorphism φ:Gn,0,0→Gn,0,0that is equivariant to the Pin(n,0,0)group (equivalently toO(n)) is of the type φ(x) =Pnk=0wk⟨x⟩k, for parametersw∈Rn+1.Proof: By decomposition of Lemma 3, let φmap fromk-blades to l-blades. Let xbe ak-blade. Let ube a 1-versor.By Lemma 2, if uis orthogonal to x,u[φ(x)] = φ(u[x]) =φ(x)anduis also orthogonal to φ(x). Ifu∧x= 0, thenu[φ(x)] = φ(u[x]) = φ(−x) =−φ(x)andu∧φ(x) = 0 .Thus any vector in xis inφ(x)and any vector orthogonal toxis orthogonal to φ(x), this implies φ(x) =wkx, for somewk∈R. By Lemma 3, we can extend φtoφ(y) =wkyforanyk-vector y.b) Projective geometric algebra: How about equivariantlinear maps in projective geometric algebra? 
The degeneratemetric makes the derivation more involved, but in the end wewill arrive at a result that is only slightly more general.Lemma 4. The Pin group of the projective geometric algebra,Pin(n,0,1), acts transitively on the space of k-blades withpositive norm ∥x∥=λ > 0. Additionally, the group actstransitively on the space of zero-norm k-blades of the formx=e0∧y(called ideal blades), with ∥y∥=κ.Proof: Let x=x1∧...∧xkbe a k-blade with positivenorm λ. All vectors xican be written as xi=vi+δie0, for anonzero Euclidean vector vi(meaning with no e0component)andδi∈R, because if vi= 0, the norm of xwould have been0. Orthogonalize them as x′2=x2−⟨x1, x2⟩x1, etc., resultinginx=x′1∧ ··· ∧ x′kwithx′i=v′i+δ′ie0with orthogonal v′i.Define the translation t= 1 +12Piδ′ie0∧v′i, which makesx′Euclidean: t[x′] =v′1∧...∧v′k. By Lemma 1, the EuclideanPin group Pin(n,0,0), which is a subgroup of Pin(n,0,1),acts transitively on Euclidean k-blades of a given norm. Thus,in the projective geometric algebra Pin(n,0,1), any two k-blades of equal positive norm λare related by a translationto the origin and then a Pin(n,0,0)transformation.For the ideal blades, let x=e0∧y, with ∥y∥=κ. Wetakeyto be Euclidean without loss of generality. For anyg∈Pin(n,0,1),g[e0] =e0, sog[x] =e0∧g[y]. Consideranother x′=e0∧y′with∥y′∥=κand taking y′Euclidean.AsPin(n,0,0)acts transitively on Euclidean (k−1)-bladeswith norm κ, letg∈Pin(n,0,0)such that g[y] =y′. Theng[x] =x′.We can now construct the most general equivariant linearmap between projective geometric algebras, a key ingredientfor GATr:Proposition 5. For the projective geometric algebra Gn,0,1,any linear endomorphism φ:Gn,0,1→Gn,0,1that isequivariant to the group Pin(n,0, r)(equivalently to E(n))is of the type φ(x) =Pn+1k=0wk⟨x⟩k+Pnk=0vke0⟨x⟩k, forparameters w∈Rn+2, v∈Rn+1.Proof: Following Lemma 3, decompose φinto a linearequivariant map from k-blades to l-blades. For k < n + 1,letx=e12...k. Then following Lemma 2, for any 1≤i≤k,ei∧x= 0, ei[x] =−x, and ei[φ(x)] =φ(ei[x]) =φ(−x) =−φ(x)and thus ei∧φ(x) = 0 . Therefore, we can write φ(x) =x∧y1∧...∧yl−k, forl−kvectors yjorthogonal to x.Also, again using Lemma 2, for k < i≤n,ei⌋x= 0 = ⇒ei[φ(x)] = φ(x) =⇒ei⌋φ(x) = 0 = ⇒ ∀ i,⟨ei, yj⟩= 0.Thus, yjis orthogonal to all eiwith1≤i≤n. Hence, l=korl=k+ 1andy1∝e0.Fork=n+ 1, letx=e012...k. By a similar argument, allinvertible vectors utangent to xmust be tangent to φ(x), thuswe find that φ(x) =x∧yfor some blade y. For any non-zeroφ(x),y∝1, and thus φ(x)∝x. By Lemma 3, by equivarianceand linearity, this fully defines φ.2) Bilinear maps: Next, we turn towards bilinear operations.In particular, we show that the geometric product and the joinare equivariant.For the geometric product, equivariance is straightforward:Any transformation u∈Pin(n,0, r), gives a homomor-phism of the geometric algebra, as for any multivectors x, y,u[xy] =ucxyu−1=ubxbyu−1=ubxu−1ubyu−1=u[x]u[y]. Thegeometric product is thus equivariant.a) Dual and join in Euclidean algebra: For the join andthe closely related dual, we again begin with the Euclideangeometric algebra, before turning to the projective case later.The role of the dual is to have a bijection ·∗:Gn,0,0→Gn,0,0that maps k-vectors to (n−k)-vectors. For the Euclideanalgebra, with a choice of pseudoscalar I, we can define a dualas:x∗=xI−1=x ̃I (4)This dual is bijective, and involutive up to a sign: (y∗)∗=y ̃I ̃I=±y, with +y= 1 forn∈ {1,4,5,8,9, ...}and−yforn∈ {2,3,6,7, ...}. 
We choose ̃Iinstead of Iin the definitionof the dual so that given nvectors x1, ..., x n, the dual of themultivector x=x1∧...xn, is given by the scalar of the orientedvolume spanned by the vector. We denote the inverse of thedual as x−∗=xI. Expressed in a basis, the dual yields thecomplementary indices and a sign. For example, for n= 3andI=e123, we have (e1)∗=−e23,(e12)∗=e3.Via the dual, we can define the bilinear join operation, formultivectors x, y:x∨y:= (x∗∧y∗)−⋆= ((x ̃I)∧(y ̃I))I.Lemma 5. In Euclidean algebra Gn,0,0, the join isSpin( n,0,0)equivariant. Furthermore, it is Pin(n,0,0)equiv-ariant if and only if nis even.Proof: The join is equivariant to the transformations fromthe group Spin( n,0,0), which consists of the product of aneven amount of unit vectors, because such transformationsleave the pseudoscalar Iinvariant, and the operation consistsotherwise of equivariant geometric and wedge products.However, let e12...n=I ∈ Pin(n,0,0)be the pointreflection, which negates vectors of odd grades by the gradeinvolution: I[x] = ˆx. Let xbe a k-vector and yanl-vector.Then x∨yis a vector of grade n−((n−k)+(n−l)) =k+l−n(and zero if k+l < n ). Given that the join is bilinear, theinputs transform as (−1)k+lunder the point reflection, whilethe transformed output gets a sign (−1)k+l−n. Thus for oddn, the join is not Pin(n,0,0)equivariant.To address this, given a pseudoscalar z=λI, we can createan equivariant Euclidean join via:EquiJoin( x, y, z =λI) :=λ(x∨y) =λ(x∗∧y∗)−∗.(5)Proposition 6. In Euclidean algebra Gn,0,0, the equivariantjoinEquiJoin isPin(n,0,0)equivariant.Proof: The EquiJoin is a multilinear operation, so fork-vector xandl-vector y, under a point reflection, the inputgets a sign (−1)k+l+nwhile the output is still a k+l−nvector and gets sign (−1)k+l−n. These signs differ by even(−1)2n= 1 and thus EquiJoin isPin(n,0,1)-equivariant.We prove two equalities of the Euclidean join which we uselater.Lemma 6. In the algebra Gn,0,0, letvbe a vector and x, ybe multivectors. Thenv⌋(x∨y) = (v⌋x)∨y (6)andx∨(v⌋y) =−(−1)ndv⌋x∨y . (7)Proof: For the first statement, let abe ak-vector and banl-vector. Then note the following two identities:a∨b=⟨a∗b ̃I⟩2n−k−lI=⟨a∗b⟩n−(2n−k−l) ̃II=⟨a∗b⟩k+l−n=a∗⌋b ,(v⌋a)∗=⟨va⟩k−1 ̃I=⟨va ̃I⟩n−k+1=⟨va∗⟩n−k+1=v⌋(a∗).Combining these and the associativity of ⌋gives:(v⌋a)∨b= (v⌋a)∗⌋b=v⌋(a∗)⌋b=v⌋(a∨b)For the second statement, swapping k-vector aandl-vectorbincurs a∨b= (a∗∧b∗)−∗= (−1)(n−k)(n−l)(b∗∧a∗)−∗=(−1)(n−k)(n−l)(b∨a). Then we get:a∨(v⌋b) = (−1)(n−k)(n−l−1)(v⌋b)∨a= (−1)(n−k)(n−l−1)v⌋(b∨a)= (−1)(n−k)(n−l−1)+(n−k)(n−l)v⌋(a∨b)= (−1)(n−k)(n−l−1)+(n−k)(n−l)(v⌋a)∨b= (−1)(n−k)(2n−2l−1)(v⌋a)∨b= (−1)k−n(v⌋a)∨b=−(−1)k−1−n(v⌋a)∨b=−(−1)n[(v⌋a)∨b .This generalizes to multivectors x, y by linearity.b) Dual and join in projective algebra: For the projectivealgebra Gn,0,1with its degenerate inner product, the dualdefinition of Eq. 4 unfortunately does not yield a bijectivedual. For example, e0^e012...n= 0. For a bijective dual thatyields the complementary indices on basis elements, a differentdefinition is needed. Following Dorst [9], we use the rightcomplement. This involves choosing an orthogonal basis andthen for a basis k-vector xto define the dual x∗to be thebasis n+ 1−k-vector such that x∧x∗=I, for pseudoscalarI=e012...n. For example, this gives dual e∗01=e23, so thate01∧e23=e0123.This dual is still easy to compute numerically, but it can nolonger be constructed solely from operations available to us inthe geometric algebra. 
This makes it more difficult to reasonabout equivariance.Proposition 7. In the algebra Gn,0,1, the join a∨b= (a∗∧b∗)−∗is equivariant to Spin( n,0,1).Proof: Even though the dual is not a Gn,0,1operation, wecan express the join in the algebra as follows. We decomposeak-vector xasx=tx+e0pxinto a Euclidean k-vector txand a Euclidean (k−1)-vector px. Then Dorst [9, Eq (35) ]computes the following expression(tx+e0px)∨(ty+e0py)= ((tx+e0px)∗∧(ty+e0py)∗)−∗=tx∨Eucpy+ (−1)ncpx∨Eucty+e0(px∨Eucpy),(8)where the Euclidean join of vectors a, bin the projective algebrais defined to equal the join of the corresponding vectors in theEuclidean algebra:a∨Eucb:= ((a^e12...n)∧(b^e12...n))e12...nThe operation a∨EucbisSpin( n,0,0)equivariant, asdiscussed in Lemma 5. For any rotation r∈Spin( n,0,1)(which is Euclidean), we thus have r[a∨Eucb] =r[a]∨Eucr[b].This makes the PGA dual in Eq. (8)equivariant to the rotationalsubgroup Spin( n,0,0)⊂Spin( n,0,1).We also need to show equivariance to translations. Letvbe a Euclidean vector and τ= 1−e0v/2a translation.Translations act by shifting with e0times a contraction: τ[x] =x−e0(v⌋x). This acts on the decomposed xin the followingway: τ[tx+e0px] =τ[tx] +e0px=tx+e0(px−v⌋tx).We thus get:τ[x]∨τ[y]= (τ[tx] +e0px)∨(τ[ty] +e0py)= (tx+e0(px−v⌋t))∨(ty+e0(py−v⌋ty))=x∨y−tx∨Euc(v⌋ty)−(−1)ndv⌋tx∨Eucty−e0(px∨Euc(v⌋ty) + (v⌋tx)∨Eucpy)(used (8)& linearity)=x∨y−e0(px∨Euc(v⌋ty) + (v⌋tx)∨Eucpy)(used (7))=x∨y−e0−(−1)ndv⌋px∨Eucty+ (v⌋tx)∨Eucpy(used (7))=x∨y−e0((−1)n(v⌋cpx)∨Eucty+ (v⌋tx)∨Eucpy)=x∨y−e0(v⌋{(−1)ncpx∨Eucty+tx∨Eucpy})(used (6))=τ[x∨y].The join is thus equivariant3to translations and rotationsand is therefore Spin( n,0,1)equivariant.Similar to the Euclidean case, we obtain full Pin(n,0,1)equivariance via multiplication with a pseudoscalar. We thusalso use the EquiJoin from Eq. (5) in the projective case.3) Expressivity: As also noted in Ref. [ 9], in the projectivealgebra, the geometric product itself is unable to compute manyquantities. It is thus insufficient to build expressive networks.This follows from the fact that the geometric product preservesnorms.Lemma 7. For the algebra Gn,0,r, for multivectors x, y, wehave∥xy∥=∥x∥∥y∥.Proof: ∥xy∥2=xyfxy=xy ̃y ̃x=x∥y∥2 ̃x=x ̃x∥y∥2=∥x∥2∥y∥2.Hence, any null vector in the algebra can never be mappedto a non-null vector, including scalars. The projective algebracan have substantial information encoded as null vector, suchas the position of points. This information can never influencescalars or null vectors. For example, there is no way to computethe distance (a scalar) between points just using the projectivealgebra. In the GATr architecture, the input to the MLPs thatoperate on the scalars, or the attention weights, thus couldnot be affected by the null information, had we only used thegeometric product on multivectors.To address this limitation, we use besides the geometricproduct also the join. The join is able to compute suchquantities. For example, given the Euclidean vector e12...n,3The authors agree with the reader that there must be an easier way toprove this.we can map a null vector x=e012...kto a non-null vectorx∨e12...n∝e12...k.B. ArchitectureIn this section, we provide some details on the GATrarchitecture that did not fit into the main paper.a) Equivariant join: One of the primitives in GATr is theequivariant join EquiJoin( x, y;z), which we define in Eq. (5).Forxandy, we use hidden states of the neural network afterthe previous layer. 
The nature of zis different: it is a referencemultivector and only necessary to ensure that the functioncorrectly changes sign under mirrorings of the inputs. We findit beneficial to choose this reference multivector zbased on theinput data rather than the hidden representations, and chooseit as the mean of all inputs to the network.b) Auxiliary scalars: In addition to multivector repre-sentations, GATr supports auxiliary scalar representations, forinstance to describe non-geometric side information such aspositional encodings or diffusion time embeddings. In mostlayers, these scalar variables are processed like in a standardtransformer, with two exceptions. In linear layers, we allowfor the scalar components of multivectors and the auxiliaryscalars to freely mix. In the attention operation, we computeattention weights asSoftmax i Pc⟨qMVi′c, kMVic⟩+Pcqsi′cksic√8nMV+ns!, (9)where qMVandkMVare query and key multivector represen-tations, qsandksare query and key scalar representations,nMV is the number of multivector channels, and nsis thenumber of scalar channels.C.n-body dynamics predictiona) Dataset: We first demonstrate GATr on a n-body dy-namics prediction problem. Given the masses, initial positions,and velocities of a star and a few planets, the goal is to predictthe final position after the system has evolved under Newtoniangravity for some time.To be more precise, we generate data (for nobjects) asfollows:1)The masses of nobjects are sampled from log-uniformdistributions. For one object (the star), we use m0∈[1,10]; for the remaining objects (the planets), we usemi∈[0.01,0.1]. (Following common practice in theoreti-cal physics, we use dimensionless quantities such that thegravitational constant is 1.)2)The initial positions of all bodies are sampled. We first usea heliocentric reference frame. Here the initial positionsof all bodies are sampled. The star is set to the origin,while the planets are sampled uniformly on a plane withina distance ri∈[0.1,1.0]from the star.3)The initial velocities are sampled. In the heliocentricreference frame, the star is at rest. The planet velocities aredetermined by computing the velocity of a stable circularorbit corresponding to the initial positions and masses,Parameter GATr Transformer MLP SEGNNLayers 10blocks 10blocks 10layers n/aChannels 16multivectors + 128 scalars 384 384 n/aAttention heads 8 8 n/a n/aParameters [ 106] 1.9 11 .8 1 .3 0 .1TABLE III: Hyperparameters used in the n-body experiments.and then adding isotropic Gaussian noise (with standarddeviation 0.01) to it.4)We transform the positions and velocities from theheliocentric reference frame to a global reference frameby applying a random translation and rotation to it. Thetranslation is sampled from a multivariate Gaussian withstandard deviation 20and zero mean (except for thedomain generalization evaluation set, where we use amean of (200,0,0)T). The rotation is sampled from theHaar measure on SO(3). In addition, we apply a randompermutation of the bodies.5)We compute the final state of the system by evolving itunder Newton’s equations of motion, using Euler’s methodand100time steps with a time interval of 10−4each.6)Finally, samples in which any bodies have traveled morethan a distance of 2(the diamater of the solar system) arerejected. 
(Otherwise, rare gravitational slingshot effectsdominate the regression loss and all methods becomeunreliable.)We generate training datasets with n= 4 and between100 and105samples; a validation dataset with n= 4 and5000 samples; a regular evaluation set with n= 4 and5000samples; a number-generalization evaluation set with n= 6and5000 samples; and a E(3) generalization set with n= 4,an additional translation (see step 4 above), and 5000 samples.All models are tasked with predicting the final objectpositions given the initial positions, initial velocities, andmasses.b) Models: Our GATr model is explained in III. Weembed object masses as scalars, positions as trivectors, andvelocities (like translation vectors) as bivectors.GATr is compared to three baselines: the equivariantSEGNN [ 4], a vanilla transformer, and an MLP. For SEGNN,we use the code published by Brandstetter et al. [4]and thehyperparameters that publication uses for n-body experiments.We vary the number of nearest neighbours between 3 and thenumber of objects in the scene (corresponding a fully connectedgraph) and show the best result. For the Transformer baseline,we follow a pre-layer normalization [ 1,24] architecture withGELU activations [ 12] in the MLP block. For the MLP, weuse GELU activations as well.In Tbl. IV we show hyperparameter choices and parametercounts.c) Training: All models are trained by minimizing a L2loss on the final position of all objects. We train for 50 000steps with the Adam optimizer, using a batch size of 64andexponentially decaying the learning rate from 3·10−4to3·10−6.d) Results: In the left panel of Fig. 3 we show the predic-tion errors as a function of the number of training samples used.The MLP, which has the least strong inductive bias and treatsthe object positions and velocities as a single, structurelessfeature vector, performs poorly on this task. The transformerstructures the data in terms of objects and is permutation-equivariant, but not aware of the geometry; it achieves areasonable prediction accuracy when using the full training set.SEGNN, which is E(3) -equivariant, achieves a substantiallybetter performance than the non-geometric baselines. Our GATrarchitecture outperforms all three, achieving an asymptoticperformance on par with SEGNN while being clearly moresample-efficient. It is able to predict final positions with highaccuracy even from just 100 training samples.GATr also generalizes robustly out of domain, as we showin the middle and right panels of Fig. 1. When evaluated ona larger number of planets, the mean error becomes larger,as non-trivial gravitational interactions become more frequent,but GATr still outperforms the baselines. In particular, bothGATr and the baseline transformer generalizes better thanSEGNN, providing evidence that a softmax-based attentionmechanisms is more robust to object number generalizationthan the message passing algorithm of SEGNN. Finally, theperformance of the E(3) -equivariant GATr and SEGNN doesnot drop when evaluated on spatially translated data, while thenon-equivariant baselines fail in this setting.D. Robotic planning through invariant diffusiona) Environment: We use the block stacking environmentfrom Janner et al. [15]. It consists of a Kuka robotic arminteracting with four blocks on a table, simulated with Py-Bullet [ 8]. The state consists of seven robotic joint angles aswell as the positions and orientations of the four blocks. Weconsider the task of stacking four blocks on top of each otherin any order. 
The reward is the stacking success probabilityand is normalized such that 0means that no blocks are eversuccessfully stacked, while 100denotes perfect block stacking.b) Dataset and data parameterization: We train modelson the offline trajectory dataset published by Janner et al. [15].It consists of 11 000 expert demonstrations.To describe the problem in terms of geometric quantities,we re-parameterize the environment state into the positions andorientations of the robotic endeffector as well as the four blocks.The orientations of all objects are given by two direction vectors.In addition, there are attachment variables that characterizewhether the endeffector is in contact with either of the four102103104105Training samples102101100101Root mean squared errorn-body prediction, no domain shiftGATr (ours)MLPTransformerSEGNN (Brandstetter '22)102103104105Training samples102101100101Root mean squared errorn-body pred., number generalization102103104105Training samples102101100101Root mean squared errorn-body prediction, E(3) generalizationFig. 3: Results on a synthetic n-body dynamics dataset. We show the error in predicting future positions of planets as a function of thetraining dataset size. Out of five independent training runs, the mean and standard error are shown. Left: Evaluating without distributionshift. GATr ( ) is more sample efficient than SEGNN [ 4] ( ) and outperforms non-geometric baselines ( , ).Middle : Evaluatingon systems with more planets than trained on. Both GATr and the baseline transformer generalize well to different object counts. Right :Evaluating on translated data. Because GATr is E(3) equivariant, it generalizes under this domain shift.Parameter GATr-Diffuser Transformer-Diffuser DiffuserTransformer blocks { 10,20,30} { 10,20,30} n/aChannels 16multivectors + 128 scalars { 144,384} n/aAttention heads 8 8 n/aParameters [ 106] { 2.1,4.0,5.9} { 1.8, . . . , 3.5, . . . , 35.7} 65.1TABLE IV: Hyperparameters used in the robotic planning experiments. For GATr-Diffuser and the Transformer-Diffuser, we experimentedwith different depth and (for the Transformer-Diffuser) channel counts. For each model, we independently chose the best-performing setting,shown here in bold. The Diffuser model uses a substantially different architecture based on a U-net, we refer the reader to Janner et al. [15]for details.blocks. In this parameterization, the environment state is 49-dimensional.We train models in this geometric parameterization of theproblem. To map back to the original parameterization in termsof joint angles, we use a simple inverse kinematics model thatsolves for the joint angles consistent with a given endeffectorpose.c) Models: Our GATr model is explained in Sec. III. Weuse the axial version, alternating between attending over timesteps and over objects. We embed object positions as trivectors,object orientations as oriented planes, gripper attachmentvariables as scalars, and the diffusion time as scalars.For the Transformer baseline, we follow a pre-layer nor-malization [ 1,24] architecture with GELU activations [ 12]in the MLP block and rotary positional embeddings [ 22].For the Diffuser baseline, we follow the architecture andhyperparameters described by Janner et al. [15].For all models, we use the diffusion time embedding ofRef. [ 15]. In Tbl. IV we show hyperparameter choices andparameter counts.All models are embedded in a diffusion pipeline as describedby Ho et al. [14], using the hyperparameter choices of Ref. 
[ 15].In particular, we use univariate Gaussian base densities and1000 diffusion steps.d) Training: We train all models by minimizing thesimplified diffusion loss proposed by Ho et al. [14]. For ourGATr models and the Diffuser baselines we use an L2loss andtrain for 200 000 steps with the Adam optimizer, exponentiallydecaying the learning rate from 3·10−4to3·10−6. This setupdid not work well for the Diffuser model, where (followingJanner et al. [15]) we use a L1loss and a low constant learningrate instead.e) Evaluation: All models are evaluated by rolling outat least 200 episodes in a block stacking environment andreporting the mean task and the standard error. We use theplanning algorithm and parameter choices of Janner et al. [15](we do not optimize these, as our focus in this work is onarchitectural improvements). It consists of sampling trajectoriesof length 128 from the model, conditional on the current state;then executing these in the environment using PyBullet’s PIDcontroller. Each rollout consists of three such phases. |
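For reference, the simplified diffusion training objective of Ho et al. [14] mentioned under d) Training can be sketched as follows. This is a hedged, generic sketch: the variable names, the linear noise schedule, and the tensor layout are illustrative and not taken from the Diffuser or GATr code bases.

```python
# Hedged sketch of the simplified denoising-diffusion (DDPM) training loss.
import torch

def simplified_diffusion_loss(model, x0, alphas_cumprod, loss_type="l2"):
    """x0: clean trajectories, shape (batch, horizon, features).
    model(x_t, t) is trained to predict the noise eps added at step t."""
    alphas_cumprod = alphas_cumprod.to(x0.device)
    b = x0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps   # forward (noising) process
    eps_hat = model(x_t, t)
    if loss_type == "l1":                                  # the U-net Diffuser baseline is trained with an L1 loss
        return (eps_hat - eps).abs().mean()
    return ((eps_hat - eps) ** 2).mean()                   # L2 loss for GATr-Diffuser and Transformer-Diffuser

# A common choice of schedule over 1000 diffusion steps (assumed, not the exact setting used here):
betas = torch.linspace(1e-4, 2e-2, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
```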
N00uQFLlvHC | Spatial Generalization of Visual Imitation Learningwith Position-Invariant RegularizationZhao-Heng Yin1, Yang Gao2, Qifeng Chen1HKUST1Tsinghua University2zhaohengyin@berkeley.eduAbstract —How the visual imitation learning models can gener-alize to novel unseen visual observations is a highly challengingproblem. Such a generalization ability is very crucial for their real-world applications. Since this generalization problem has manydifferent aspects, we focus on one case called spatial generalization ,which refers to generalization to unseen setup of object (entity)locations in a task, such as a novel setup of object locations inthe robotic manipulation problem. In this case, previous worksobserve that the visual imitation learning models will overfitto the absolute information (e.g., coordinates) rather than therelational information between objects, which is more importantfor decision making. As a result, the models will perform poorlyin novel object location setups. Nevertheless, so far, it remainsunclear how we can solve this problem effectively. Our insightinto this problem is to explicitly remove the absolute informationfrom the features learned by imitation learning models so that themodels can use robust, relational information to make decisions.To this end, we propose a novel, position-invariant regularizercalled POINT for generalization. The proposed regularizer willpenalize the imitation learning model when its features containabsolute, positional information of objects. Various experimentsdemonstrate the effectiveness of our method.I. I NTRODUCTIONImitation learning is a class of algorithms that enable robotsto acquire behaviors from human demonstrations [ 8]. Therecent advance in deep learning has boosted the developmentof visual imitation learning and supported its applications likeautonomous driving, robotic manipulation, and human-robotinteraction [8].In spite of its success, visual imitation learning methodsstill face many practical challenges. One major challenge is itsability to generalize to novel unseen visual observations, whichis very common when we deploy the trained models [ 15,11].In the literature, this generalization problem is also known asthe robustness problem. The problem covers many differentaspects. For example, here we can identify two basic general-ization capabilities: observational generalization andspatialgeneralization (Figure 1). Observational generalization refersto the generalization to novel visual textures. The changes inbackground color, object texture, or ambient light in the roboticmanipulation task are examples of observational generalization.Such kind of visual change does not affect the underlying taskstructure (e.g., the position of object and targets) and onlyrequires the robot to reason about semantic meanings correctly.In contrast, spatial generalization refers to the generalizationto unseen setup of objects’ (entities) locations in one task,which instead requires physical common sense about space andTrain TestObservational GeneralizationTrain TestSpatial Generalization Remove! KeepAbsolute Info Relation InfoExamples of Visual Generalization What should be in the feature?Fig. 1: Left and Middle: Two kinds of visual generalization.The examples are based on the MAGICAL benchmark providedby [15], in which a robot is required to relocate a box to atarget region. The left figure shows an example of observationalgeneralization, in which the only change during the testingphase is the visual texture of objects. 
The middle figure showsan example of spatial generalization. The object setup in thetesting phase is unseen. Right: To achieve spatial generalization,we suggest that absolute information should be removed fromthe feature while the relational information should be kept. Wepropose a novel, position-invariant regularizer for this purpose.object. Consider the task of letting a warehouse robot movea box to some target region. If we set the initial position ofthe box to a place that is not covered by the demonstrationdataset, then the imitation learning methods must be able toperform spatial generalization so as to succeed. In reality, thegeneralization challenge usually emerges as a combination ofdifferent generalization capabilities. In this paper, we focus onthe study of spatial generalization .For better spatial generalization, the visual imitation learningmodels should be able to obtain knowledge about objects andtheir spatial relations with proper inductive biases. Some workfinds that vanilla deep visual imitation learning models stronglyoverfit to the absolute position of objects [ 15], which suggeststhat they do not extract relational information of objects tomake decisions like humans [ 4]. Based on this observation,our main insight into this problem is to explicitly removethe absolute, positional information from the learned featuresin the visual imitation learning models. Note that this doesnot mean that the decision-making process is not dependenton absolute information. Rather, we expect that the modelcan extract the relational information (e.g., distance, direction)from the absolute information to make robust decisions. To thisend, we propose a novel position-invariant regularizer calledPOINT. This regularizer will penalize the imitation learningmodel when it finds that the learned feature highly correlateswith absolute, positional information. As a result, the imitationlearning model has to discover more robust relational features,and can generalize better in unseen scenarios.II. P RELIMINARIESa) Notations: We model the sequential decision makingproblem as a Markov Decision Process M= (S,A,R,T).Sis the state space. Ais the action space. Ris the rewardfunction. Tis the transition dynamics. The agent’s stateat timestep tisst∈ S . The agent takes action atandreceives reward rt=R(st, at). Its state at timestep t+ 1is then st+1∼ T(st, at). The objective of the agent is tomaximize the returnPTt=0γtrt, where γ∈(0,1]is a discountfactor. For the imitation learning problem studied here, theagent has no access to RandT, but it is provided witha fixed expert demonstration dataset D={τi}. Here, eachτi= (sE0, aE0, sE1, aE1, ...sET, aET)is an expert trajectory that canachieve high performance (return) in M. Therefore, the agentshould learn the behavior by leveraging the given demonstrationdataset.b) Behavioral Cloning: One classical imitation learningalgorithm is the Behavioral Cloning (BC). BC turns theimitation learning problem into a supervised learning problem.It fits the expert’s action aigiven the observation si. Forthe visual imitation learning problem, the BC model can bedivided into two consecutive parts: a vision encoder fθ(whichis usually a convolutional neural network), and a policy headπ. The fθfirst encodes sito the feature fi=fθ(si), and theπthen uses it to predict the expert’s action. The BC algorithmminimizes the following negative log-likelihood objective:LBC=E(si,ai)∈D[−logπ(ai|fθ(si))]. (1)Due to its simplicity, BC is widely used in visual imitationlearning. 
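For illustration, the objective in Eq. (1) can be written as a short sketch. The module names below stand in for fθ and π, and a discrete action space is assumed (as in MAGICAL-style tasks); this is not code from the paper.

```python
# Minimal sketch of the behavioral-cloning objective in Eq. (1).
import torch.nn.functional as F

def bc_loss(encoder, policy_head, states, expert_actions):
    """Negative log-likelihood of expert actions under pi(a | f_theta(s)).

    states:         batch of image observations, shape (B, C, H, W)
    expert_actions: batch of discrete expert action indices, shape (B,)
    """
    features = encoder(states)              # f_i = f_theta(s_i)
    logits = policy_head(features)          # unnormalized scores over actions
    return F.cross_entropy(logits, expert_actions)  # equals E[-log pi(a_i | f_theta(s_i))]
```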
Therefore, we study the spatial generalization of BC in this paper.

III. METHOD
A. Formulation and Challenges
For the tasks that involve spatial generalization, there usually exist multiple objects in the observed states, such as the agent, the target object, and the goal. For the state si, we denote each of these objects in si as o^j_i, and their positions as (x^j_i, y^j_i). Then, our idea can be formulated as the minimization problem of each I((x^j, y^j), f), where I is the mutual information. Note that we use the notation x^j, y^j, f to indicate the corresponding random variables of x^j_i, y^j_i, fi. However, this formulation leads to many practical challenges. First, since each (x^j_i, y^j_i) is not provided directly by si and should be inferred, we have to either train some object key-point detectors to detect the underlying objects in the training set, or annotate the objects by ourselves. However, both of these approaches can be difficult and tedious in practice. Second, even if we have ideal key-point detectors, we have to deal with a hard optimization problem in the summation form Σ_j I((x^j, y^j), f). This can be intractable when there are many objects in the observed state.

Fig. 2: Overview of our method. The blue branch above is the common imitation learning (BC) pipeline. Our proposed regularizer is shown in the light pink box at the bottom. The regularizer first uses the GradCAM++ algorithm to find out the important areas based on which the latest BC model makes decisions. Then it samples the coordinates from the discovered important areas and trains a discriminator network D to calculate whether these sampled coordinates are paired with the feature fi. The BC model (encoder fθ) is then trained to fool the discriminator D. When the encoder fθ is able to fool D, the absolute positional information is removed from the feature as desired.

Fortunately, we find that the previous works on the interpretation of deep learning models like GradCAM provide useful tools to handle these challenges. It can reduce the problem to a much simpler form. We discuss our observations as follows.

B. Problem Reduction with GradCAM
GradCAM [13] is an interpretation method that can tell which part of the image is crucial in the decision process of a deep learning model. Given a BC model (fθ, π) and input s, GradCAM outputs an importance heatmap of the same resolution as the input s. The heatmap indicates the importance of each pixel when we use this BC model for prediction. One nice property of this generated heatmap is that it is smooth and usually coincides with the meaningful objects in the input s. Therefore, we can consider the GradCAM as a rough object detector here.
We propose to sample pi = (xi, yi) from the generated heatmap, and then minimize the I(p, f). We find that this new objective can act as a proxy for the original objective in practice. Concretely, if pi is always far from a specific object like o^k, then we know that o^k is irrelevant to the decision process of the current model. In this case, we conjecture that I((x^k, y^k), f) should be low enough to meet our requirement. On the contrary, if pi always coincides with a certain object like o^l, then we actually minimize I(p, f) ≈ I((x^l, y^l), f) as we want.

C. Loss Functions
Now, our remaining work is to reduce the mutual information I(p, f).
However, we find that jointly estimating andminimizing the mutual information in our vision-based tasksis hard in practice. Since our ultimate goal is to minimizethe information of pinf, we instead propose an adversarialtraining framework to achieve this goal.Specifically, we introduce a discriminator network Dto playa two-player min-max game with the BC model as follows.minfθmaxDE(si,ai)∼D,(sj,aj)∼D (2)[logD(pi, fi) + log(1 −D(pj, fi))]. (3)In this min-max game, the discriminator Dtries to tell thejoint distribution of pandf, denoted as Pp,f, from the productof their marginal distributions Pp⊗f. Meanwhile, the BC modelis trying to fool the discriminator by removing the informationofpfrom f. Applying the convergence theory of the generativeadversarial network (GAN) [ 6], we know that when fθis aglobal minimizer of Equation 2, Pp,f=Pp⊗f, which impliesthatI(p,f) = 0 . Therefore this min-max game fulfills ourrequirement.In practice, we train Dto minimize the following binaryclassification loss function:LD=−E(si,ai)∼D,(sj,aj)∼D (4)[logD(pi, fi) + log(1 −D(pj, fi))]. (5)However, for the encoder fθ, we find that using −LDasthe loss function for training will result in instabilities. Weassume this is because the fiterm is present in both of thetwo terms in Equation 2, which is different from that in theoriginal GAN objective. Therefore, we propose to use thefollowing loss function for optimization, which we find workswell empirically:Lreg=E(si,ai)∼D[logD(pi, fi)]. (6)Combining the BC loss, the loss function to train the fθandπis thenL=LBC+λLreg (7)=E(si,ai)∼D[−logπ(ai|fθ(si)) +λlogD(pi, fi)].(8)IV. E XPERIMENTSIn the experiments, we first test the performance of ourmethod on the MAGICAL benchmark. We study the general-ization according to the IID protocol [ 9]. This means that thetraining and testing task distributions are the same, though thetest instance will be unseen. Then, we provide an analysis ofour algorithm through both qualitative and quantitive studies.Finally, we extend our method to a real robot manipulationproblem.A. Task Setupa) MAGICAL: The MAGICAL benchmark simulates a2D robotic manipulation problem in a warehouse room. Thetasks provided by the MAGICAL involve complex interactionsbetween the agent and multiple objects, which require effectivespatial generalization. In the experiments, we use a variant ofits MatchRegion task. In this task, a robot is required to goacross a square room to move some objects to a target regionspecified by a dashed rectangle. We set up several task instancesof the MatchRegion task: MatchRegion-Target-1, MatchRegion-Target-2, MatchRegion-Target-2-Distract, MatchRegion-Target-3, MatchRegion-Target-3-Distract. We provide an illustrationDistractorDistractorMR-T1 MR-T1D MR-T2 MR-T2D MR-T3Fig. 3: The MAGICAL tasks used in our experiments. Thegrey robot is required to move the target objects (we markthem with red dots) to the target region. The red curve shows apossible plan to solve the task (the interaction details likereleasing box are omitted). The long horizontal nature ofthis task brings additional challenges aside from the spatialgeneralization problem.of these tasks in Figure 3. For each MatchRegion-Target- Xtask (MR-T X), there is no distractor object in the room, so therobot only needs to move all the Xobjects into the targetlocation. However, for the MatchRegion-Target- X-Distracttask (MR-T XD), there is an additional distractor object inthe room. This object is also randomly placed in the roomduring testing. 
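Returning briefly to the losses of Section III-C before the benchmark description continues: the min-max game in Equations (2)-(8) amounts to two per-batch updates, one for the discriminator D and one for the encoder f_theta. The sketch below is a hypothetical implementation under the same simplifying assumptions as before (discrete actions, a discriminator that outputs a probability for a (coordinate, feature) pair); detaching the feature in the discriminator update is a standard choice we assume here, not a detail stated in the text.

```python
import torch
import torch.nn.functional as F


def discriminator_loss(D, coords_i, feats_i, coords_j):
    """Equations (4)-(5): D separates paired (p_i, f_i) from unpaired (p_j, f_i)."""
    paired = D(coords_i, feats_i.detach())      # coordinates sampled from the same state s_i
    unpaired = D(coords_j, feats_i.detach())    # coordinates sampled from a different state s_j
    return -(torch.log(paired + 1e-8) + torch.log(1.0 - unpaired + 1e-8)).mean()


def encoder_loss(model, D, obs, expert_actions, coords_i, lam=1.0):
    """Equations (7)-(8): BC loss plus lambda times the regularizer of Equation (6)."""
    logits, feats = model(obs)
    l_bc = F.cross_entropy(logits, expert_actions)
    l_reg = torch.log(D(coords_i, feats) + 1e-8).mean()   # encoder lowers D(p_i, f_i) to fool D
    return l_bc + lam * l_reg
```

In practice the two losses are minimized in alternation, as in standard GAN training.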
The existence of this distractor object not onlyincreases the risk of learning spurious features but also addsto the difficulty of learning secure motions. As we will discusslater, even the existence of one distractor object can lead to asignificant increase of generalization difficulty. The study ofmore distractors is carried out in the analysis part.For each of the tasks above, we collect its human demon-stration dataset by ourselves. For each demonstration trajectory,we randomly set up the initial position of the objects, targetregion, and the robot. For MR-T1, we collect 50 trajectories.For each of the other tasks, we collect 100 trajectories. Thecollection of all these trajectories takes 2 hours. We also studythe outcome of using a different number of trajectories in thelater analysis part.B. BaselinesFor the vanilla BC policy, we train an IMPALA [5] policy,whose encoder is a residual convolutional neural network. Wealso try vision-transformer [ 3] and relational network [ 12] thathave relational biases, but we find that they perform worse thanIMPALA and do not report their results here. Then, we im-plement the following baselines for comparison: Dropout [ 14],Crop [ 17,10], Cutout [ 2], MixReg [ 16], OREO [ 11], andCLOP [1].C. Resultsa) MAGICAL: The result on MAGICAL is shown inTable I. The performance is defined by the success rate of thetrained policy, which is the number of target objects that aresuccessfully transferred to the target region, divided by the totalnumber of target objects. We observe that our method is able toachieve state-of-the-art results and outperform the baselines bya large margin. Concretely, it improves the success rate by about30%. Besides, we find that most of the previous regularizationmethods do increase the success rate of the vanilla version andtheir results are similar to each other. This shows that theyTABLE I: Evaluation result on the MAGICAL benchmark. We show the average score on three random seeds. Our method canachieve state-of-the-art results compared with the baselines.Method Vanilla Dropout Crop Cutout MixReg OREO CLOP OursMR-T1 0.09±0.020.28±0.040.42±0.030.19±0.030.26±0.020.21±0.030.16±0.060.63±0.05MR-T1D 0.19±0.060.32±0.110.44±0.030.27±0.030.41±0.100.27±0.060.21±0.020.60±0.08MR-T2 0.25±0.030.48±0.030.46±0.040.43±0.050.44±0.050.37±0.050.32±0.070.75±0.07MR-T2D 0.27±0.060.35±0.030.38±0.040.32±0.030.33±0.030.27±0.030.23±0.040.70±0.04MR-T3 0.23±0.020.51±0.030.47±0.050.32±0.040.48±0.050.42±0.040.35±0.070.66±0.03Ours DropoutFig. 4: The GradCAM++ importance heatmap of the dropoutmodel (left) and our model (right) on the MatchRegion-Target-1-Distract task. The red region indicates the most importantregion, while the dark blue indicates the least important region.The results suggest that the dropout model attends to the reddistractor and is not robust. In contrast, our model is able toattend to correct objects.may solve some common issues in the generalization problem.However, their performance gap from our method suggeststhat we tackle a different issue here, which is overfitting toabsolute positions.D. Analysisa) Qualitative Results: To understand whether our methodlearns more robust features, we use GradCAM++ to visualizethe learned model. For simplicity, we show the result on theMatchReigion-Target-1-Distract task. We compare the result ofour model to the model trained with dropout here (Figure 4).We notice that the dropout model tends to focus on the reddistractor object rather than the correct target object. 
In contrast,our model is able to focus on the correct objects. Even whenthe distance between the agent and the object is large, itcan attend to the agent and the object simultaneously. Thevisualization results suggest that our regularizer indeed leadsto robust relational features even when the vision networkIMPALA does not have an explicit relational inductive bias.This accounts for the improvement of generalization.b) Unseen Number of Distractors: A robust model shouldbase its decision on robust relational information. As a result,for the MAGICAL tasks, it should be able to ignore the00.20.40.60.80 1 2 3Dropout MixReg CutOutCLOP OREO Ours# of DistractorsSuccess RateFig. 5: The generalizationperformance to differentnumber of distractors onMR-T1D.00.20.40.60.825% 50% 75% 100%Dropout MixReg CutOutCLOP OREO OursDataset SizeSuccess RateFig. 6: The variation of per-formance on MAGICAL us-ing the datasets of differentsizes.distractor and generalize to an unseen number of distractors.Therefore, we test whether our model trained on MR-T1D(where only one distractor presents) can generalize to MR-T1Dwith the unseen number of distractors (e.g., 0, 2, 3). We alsocompare the results with the previous models. The result isshown in Figure 5. We find that our model is able to generalizeto the case of 0, 2, 3, though the performance is lower thanthe case of 1 (training scenario). In contrast, the prior model,such as the dropout model, fails in these unseen cases totally.This also echoes our qualitative analysis results.c) Number of Demonstrations: We also study whetherthe proposed method works when the amount of expertdemonstrations is limited. For this purpose, we test ourmethod on the MAGICAL with 25%,50%,75% of expertdemonstrations. We show the averaged performance in Figure 6.We find that our method can achieve consistent improvement,though the performance decreases as the dataset becomessmaller. This result suggests that we still require a certainamount of diverse data to achieve spatial generalization.V. C ONCLUSIONWe studied the spatial generalization problem of imitationlearning. We proposed POINT, a novel position-invariantregularizer to remove the absolute positional information fromthe features to tackle this problem. Through experiments onthe MAGICAL benchmark as well as a robot manipulationsystem, we confirmed that previous methods do overfit to theabsolute position and showed that our proposed approach caneffectively help generalization.REFERENCES[1]David Bertoin and Emmanuel Rachelson. Local featureswapping for generalization in reinforcement learning.InInternational Conference on Learning Representa-tions (ICLR) , 2022.[2]Terrance DeVries and Graham W Taylor. Improvedregularization of convolutional neural networks withcutout. arXiv preprint arXiv:1708.04552 , 2017.[3]Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov,Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner,Mostafa Dehghani, Matthias Minderer, Georg Heigold,Sylvain Gelly, et al. An image is worth 16x16 words:Transformers for image recognition at scale. In Interna-tional Conference on Learning Representation (ICLR) ,2021.[4]Leonidas AA Doumas, Guillermo Puebla, Andrea EMartin, and John E Hummel. A theory of relation learningand cross-domain generalization. Psychological review ,2022.[5]Lasse Espeholt, Hubert Soyer, Remi Munos, KarenSimonyan, Vlad Mnih, Tom Ward, Yotam Doron, VladFiroiu, Tim Harley, Iain Dunning, et al. Impala: Scalabledistributed deep-rl with importance weighted actor-learnerarchitectures. 
In International Conference on MachineLearning (ICML) , 2018.[6]Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, BingXu, David Warde-Farley, Sherjil Ozair, Aaron Courville,and Yoshua Bengio. Generative adversarial networks.Communications of the ACM , 63(11):139–144, 2020.[7]Kyle Hsu, Moo Jin Kim, Rafael Rafailov, Jiajun Wu, andChelsea Finn. Vision-based manipulators need to alsosee from their hands. In International Conference onLearning Representation (ICLR) , 2022.[8]Ahmed Hussein, Mohamed Medhat Gaber, Eyad Elyan,and Chrisina Jayne. Imitation learning: A survey oflearning methods. ACM Computing Surveys (CSUR) , 50(2):1–35, 2017.[9]Robert Kirk, Amy Zhang, Edward Grefenstette, andTim Rockt ̈aschel. A survey of generalisation in deepreinforcement learning. arXiv preprint arXiv:2111.09794 ,2021.[10] Misha Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto,Pieter Abbeel, and Aravind Srinivas. Reinforcementlearning with augmented data. In Neural InformationProcessing Systems (NeurIPS) , 2020.[11] Jongjin Park, Younggyo Seo, Chang Liu, Li Zhao, TaoQin, Jinwoo Shin, and Tie-Yan Liu. Object-aware regular-ization for addressing causal confusion in imitation learn-ing. In Neural Information Processing Systems (NeurIPS) ,2021.[12] Adam Santoro, David Raposo, David G Barrett, MateuszMalinowski, Razvan Pascanu, Peter Battaglia, and Tim-othy Lillicrap. A simple neural network module forrelational reasoning. In Neural Information ProcessingSystems (NeurIPS) , 2017.[13] Ramprasaath R Selvaraju, Michael Cogswell, AbhishekDas, Ramakrishna Vedantam, Devi Parikh, and DhruvBatra. Grad-cam: Visual explanations from deep net-works via gradient-based localization. In InternationalConference on Computer Vision (ICCV) , pages 618–626,2017.[14] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, IlyaSutskever, and Ruslan Salakhutdinov. Dropout: a simpleway to prevent neural networks from overfitting. TheJournal of Machine Learning Research (JMLR) , 15(1):1929–1958, 2014.[15] Sam Toyer, Rohin Shah, Andrew Critch, and StuartRussell. The magical benchmark for robust imitation.InNeural Information Processing Systems (NeurIPS) ,2020.[16] Kaixin Wang, Bingyi Kang, Jie Shao, and Jiashi Feng.Improving generalization in reinforcement learning withmixture regularization. In Neural Information ProcessingSystems (NeurIPS) , 2020.[17] Denis Yarats, Ilya Kostrikov, and Rob Fergus. Imageaugmentation is all you need: Regularizing deep reinforce-ment learning from pixels. In International Conferenceon Learning Representations (ICLR) , 2020.VI. A PPENDIXA. Real-World ExperimentsWe also test whether our method scales to the real-world pick-and-place manipulation problem. We extend the MR-T1D to aUR10 robot arm with a Robotiq parallel-jaw gripper (Figure 7).As suggested by [ 7], we use a gripper camera and a workspacecamera to provide observation. For the BC model, we usetwo separate IMPALA encoders to process each camera image,concatenate their output features along with the z-coordinateof gripper, and feed them into an MLP. We use the proposedregularizer to regularize the workspace branch. We collect 75human demonstrations for training. We compare our methodto dropout with different numbers of distract objects. Theresult is shown in Table II. Our method also achieves a largeimprovement in this problem. The qualitative results are shownin the Appendix Section VI-B.Gripper Camer aWorkspaceCameraFig. 7: The setup of real-world robot manipulation experiments.TABLE II: The success rate of the real-world experiments. 
Our method is also effective here. Each test consists of 20 trials.

Method        Dropout   Ours
0 Dis. Obj    35%       55%
1 Dis. Obj    35%       60%
2 Dis. Obj    20%       50%
3 Dis. Obj    10%       45%

B. Qualitative Results of the Manipulation Problem
In this section, we provide some qualitative results of the real-world manipulation problem. Recall that in this task, the robot is required to move a red cube to a target location specified by a green area. We show the importance heatmaps of the dropout model (Figure 8) and our model (Figure 9). As shown in the figures, we find that the dropout model tends to attend more to the round distractor object compared with our model. However, due to the visual complexity, we find that our model sometimes may attend to the shadow in the background.
Fig. 8: The GradCAM++ importance heatmap of the dropout model in the real-world manipulation problem. The dropout model tends to attend to the round distractor object.
Fig. 9: The GradCAM++ importance heatmap of our model in the real-world manipulation problem. Our model attends less to the round distractor object. However, due to the visual complexity, we find that our model sometimes may attend to the shadow in the background.
n9sxj3TKWm8 | Morphological symmetries in robot learningDaniel Ordonez-Apraez∗, Mario Martin†‡, Antonio Agudo∗and Francesc Moreno-Noguer∗∗Institut de Rob `otica i Inform `atica Industrial, CSIC-UPC.†Barcelona Supercomputing Center (BSC).‡Departament de Ci `encies de la Computaci ́o, Universitat Polit `ecnica de Catalunya (UPC).[dordonez, aagudo, fmoreno]@iri.upc.edu, mmartin@cs.upc.eduAbstract —This work studies the impact of morphologicalsymmetries in learning applications in robotics. Morphologicalsymmetries are a predominant feature in both biological androbotic systems, arising from the presence of planes/axis ofsymmetry in the system’s morphology. This results in harmo-nious duplication and distribution of body parts (e.g., humans’sagittal/left-right symmetry). Morphological symmetries becomea significant learning prior as they extend to symmetries inthe system’s dynamics, optimal control policies, and in allproprioceptive and exteroceptive measurements, related to thesystem’s dynamics evolution [10]. Exploiting these symmetriesin learning applications offers several advantageous outcomes,such as the use of data augmentation to mitigate the cost andchallenges of data collection, or the use of equivariant/invariantfunction approximation models (e.g., neural networks) to improvesample efficiency and generalization, while reducing the numberof trainable parameters. Lastly, we provide a video presentation1and an open access repository2reproducing our experiments andallowing for rapid prototyping in robot learning applicationsexploiting morphological symmetries.I. I NTRODUCTIONDiscrete Morphological Symmetries (DMSs) are ubiquitousin both biological and robotic systems. The vast majorityof living and extinct animal species exhibit bilateral/sagittalreflection symmetry, where the right side of the body isapproximately a reflection of the left side (see fig. 1-left).Similarly, a significant number of species exhibit radial sym-metry, characterized by two or more morphological symmetryplanes/axis (see fig. 1-center) [6]. These symmetries are aconsequence of nature’s tendency to symmetric body partsand harmonic duplication and distribution of limbs. A patternperfected and exploited in the design of robotic systems.Symmetries of the state of a dynamical system translate tosymmetries of the system’s dynamics and control [17]. Thus,DMSs imply the presence of symmetries in the dynamicsand control of body motions, extending to symmetries inall proprioceptive and exteroceptive measurements, relatedto the evolution of the system’s dynamics (e.g., joint posi-tion/velocity/torque, depth images, contact forces). Therefore,for systems with morphological symmetries, we can use dataaugmentation to mitigate the challenges of data collection inrobotics, computer graphics, and computational biology. This,roughly implies that for every minute of recorded data of asystem with nmorphological symmetries, we can obtain anadditional n−1minutes of recordings, solely by consideringthe symmetric states of the recorded data. See the case of1Video presentation: youtu.be/qu4jIViRU1A2Code repository: github.com/Danfoa/MorphoSymmthe robot Solo in fig. 1-center, for which we obtain 3addi-tional minutes of recording by considering the depicted 4-foldsymmetries. Furthermore, we can exploit the symmetries ofproprioceptive and exteroceptive data by imposing symmetryconstraints in machine learning algorithms to boost sampleefficiency and enhance generalization [17, 4, 12]. Considerthe case of robot Solo in fig. 1-center/right. 
We desire toapproximate the function y=f(x), mapping points in aninput space x∈ X (say, the state of our robot) to points inan output space y∈ Y (say, the binary contact state of therobot’s feet). To achieve this we use recorded data to traina function approximation model ˆfparameterized with φ, i.e.y≈ˆf(x;φ). Because of the robot morphological symmetry,the input and output spaces have symmetries, and our targetfunction is subjected to an equivariance constraint:g·y=f(g·x)| ∀ g∈ G. (1)Where grepresents a symmetry, g·xandg·ythe input andoutput points transformed by the symmetry (in our example,g·xis the transformed robot state and g·ya different contactstate), while Grepresents the set of symmetries of the robot,its symmetry group. In these scenarios, we should impose thesame equivariance constraints of our target function (eq. (1)) toour model ̄f. Since by doing so, we are reducing the solutionspace of the optimization algorithm used to find the optimal ̄f.In practice, imposing equivariance (or invariance) constraintsimplies reducing the number of parameters of your model φ,while empirically obtaining benefits in sample efficiency andgeneralization [4, 12, 10].Despite the potential benefits of exploiting symmetry andthe ubiquitous presence of morphological symmetries inrobotic/biological/virtual systems, this relevant inductive biasis frequently left unexploited in data-driven applications inrobotics, computational biology, and computer graphics. Weattribute the scarce adoption of these techniques to a miss-ing theoretical framework that consolidates the concept ofmorphological symmetries, facilitating their study and iden-tification. And, to a missing practical framework enabling theefficient and convenient exploitation of symmetries in real-world data-driven applications.The identification of morphological symmetries and howthese extend to symmetries of proprioceptive and exteroceptivedata is currently a laborious and error-prone system-specificprocess, due to the lack of a clear theoretical framework. Asa result, most recent works that exploit some morphologicalsymmetry (e.g., [15, 1, 16] in computer graphics and [12, 9,5, 3] in robotics/dynamical systems) have only been appliedf1gr·f1gt·f1gs·f1egrgtgs gtgs gsgtgrh.=hlkigs·hgt·hgr·hg·y=f(g·x;φ)| ∀g∈ K4φ.=n0c, . . . ,lcoK4=¶e, gs, gt, gr|g2s=g2t=g2r=e, gr=gsgr©0c+0W×σlc+ ×σf(x;φ)lW0BlBxgs·xgt·xgr·x0zl−1zygs·ygt·ygr·ylk0B:,:,10B:,:,re gsgsf1 gs·f1gs·h hC2=¶e, gs|g2s=e©Reflection plane of gsFig. 1: Left: Symmetric configurations of the bipedal robot Atlas (3D animation) illustrating its morphological symmetrydescribed by the reflection group C2. The robot can imitate the reflections gs(hint: note the non-reflected text on the robot’schest). Middle: Top-view of symmetric configurations of the quadruped robot Solo (3D animation) showcasing its morphologicalsymmetries described by the Klein four-group K4. The robot can imitate two reflections (gs, gt)and a 180◦rotation ( gr) ofspace (hint: observe the unreflected/unrotated robot’s heading direction and legs coloring). 
Symmetry transformations (arrows)affect the robot’s configuration, as well as proprioceptive measurements (center of mass linear land angular kmomentum)and exteroceptive measurements (terrain elevation, external force f1).Right: Diagram of a toy K4-equivariant neural network,processing the symmetric states of robot Solo xand outputting the symmetric binary foot contact states y(see section IV).to simple systems and the simplest morphological symmetry:reflection/sagittal symmetry (see fig. 1-left), with the exceptionof Finzi et al. [3]. However, these works provide little guidanceon how to apply these techniques to other systems, particularlythose with more than a single morphological symmetry.Our recent work [10], aims at increasing the adoption ofmorphological symmetry exploitation in robotics by presentingthe theoretical and practical contributions2that enable thestudy and exploitation of these symmetries in arbitrary dy-namical systems with any number of symmetries. In this shortpaper, we summarize the most important facts of morphologi-cal symmetries in robotics and their implications in data-drivenapplications. For a rigorous and extended development, werefer the interested reader to [10].II. P ROPERTIES OF SYMMETRIC DYNAMICAL SYSTEMSIn robotics a symmetry gis roughly defined as an energy-preserving transformation of the robot state (q, ̇q), definedby the system generalized position q∈Qand velocitycoordinates ̇q∈TqQ. If a dynamical system has a groupof symmetries G, its dynamics (i.e, its equations of motionM(q) ̈q=τ(q, ̇q)) are equivariant. That is:g·[M(q) ̈q|{z}Inertial−τ(q, ̇q)|{z}Moving] =M(g·q)g· ̈q|{z }Inertial−τ(g·q, g· ̇q)|{z }Moving=0| ∀g∈ G,q∈Q, ̇q∈TqQ.(2)Denoting M(q) : Q→Rn×nas the generalized mass matrixfunction and τ(q, ̇q) : Q×TqQ→Rnas the generalizedmoving forces at a given state (q, ̇q).This property of symmetric dynamical systems, denotedas dynamics G-equivariance (eq. (2)), depends on both thegeneralized inertial and moving forces being independentlyequivariant, implying:M(g·q) =gM(q)g-1∧g·τ(q, ̇q) =τ(g·q, g· ̇q)| ∀g∈ G,q∈Q, ̇q∈TqQ.(3)The equivariance of the inertial forces requires that the gen-eralized mass matrix of the systems is equivariant. This isthe identifying property of symmetrical dynamical systems.In practice, as the generalized mass matrix is well-defined formodel-based systems, it can be used for the identification ofsystem’s symmetries using eq. (3) (see [10] for the case ofrigid body dynamics). Furthermore, the equivariance of thegeneralized moving forces (which in practice, usually incor-porates control, constraint, and external forces) implies thatdynamics G-equivariance (eq. (2)) is upheld until a symmetrybreaking force violates the equivariance of τ.To gain some intuition, consider as an example the bipedalrobot Atlas, with symmetry group G=C2={e, gs}. Bothrobot states in fig. 1-left are symmetric states (related by theaction gs). Then, eq. (2) suggests that any trajectory of motion,starting from the left robot state, will be equivalent (up totransformation by gs) to a motion trajectory starting from theright robot state, if and only if, the moving forces driving bothtrajectories are equivalent (up to transformation by gs). That isif the control and external forces are C2-equivariant (eq. (3)).Note, we can perform a similar analysis for each symmetricstate and action of systems with larger symmetry groups (e.g.Solo in fig. 
1-center).The aforementioned definition of symmetries as energy-preserving transformations of the system state is intentionallygeneric, imposing no restrictions on the nature of the statetransformation, such as whether the transformed state is feasi-ble or reachable. This allows us to consider feasible state trans-formations (such as robot translations and rotations3) alongwith unfeasible state transformations (such as a reflection ofspace) as symmetries of the system. Naturally, in robotics, weare interested in studying and exploiting feasible symmetriesalone. Therefore we introduced the concept of discrete mor-phological symmetry, as the set of feasible symmetries of thesystem that imitate feasible and unfeasible symmetries.III. D ISCRETE MORPHOLOGICAL SYMMETRIES (DMS S)A dynamical system is said to possess a DMS if it canimitate the effects of a rotation, reflection, or translation inspace (i.e. Euclidean isometries), through a feasible discretechange in its configuration (see formal definition in [10]). Togain intuition, we can analyze the simplest and most commonDMS.Reflection DMS: Although most floating-base dynamicalsystems are symmetric with respect to reflections of space(section II), these symmetries are infeasible due to the impos-sibility to execute reflections in the real-world [11]. However,systems with sagittal symmetry (e.g., Atlas in fig. 1-left, orhumans) can imitate the effect of a reflection with a feasiblediscrete change in their configuration, by rotating their bodyand modifying their limbs’ pose. These systems share the samesymmetry group, the reflection group G ≡ C 2.Multiple DMSs: This property can be extended to the caseof a floating-base system having multiple DMSs, allowing it toimitate multiple distinct Euclidean isometries. Most frequentlysystems can imitate a set of rotations and reflections, makingGa Cyclic Ckor Dihedral D2kgroup. See examples for C3in[10], and for D4≡ K 4in fig. 1-center.Because each DMS is defined as a feasible transformationthat imitates a system’s symmetry gdue to a Euclideanisometry, the group of DMSs Gis isomorphic to a subsetof the feasible and unfeasible symmetries of the dynamicalsystem due to rotations, reflections, and translations in space.Furthermore, the existence of the DMSs is subjected to the sys-tem’s generalized mass matrix being G-equivariant (eq. (3)). Inpractice, these constraints translate to identifiable constraintsin the kinematic and dynamic parameters of the system model[10].IV.G-EQUIVARIANT AND G-INVARIANT FUNCTIONAPPROXIMATORSOnce we identified the DMS group Gof our system, weknow that any proprioceptive or exteroceptive measurementshave the same symmetry group G. Therefore, to improvegeneralization and sample efficiency, we can exploit theknown symmetries of the input xand output yspaces, ofany mapping we desire to approximate, by constructing G-equivariant or G-invariant (eq. (1)) function approximationmodels ˆf(x;φ), parameterized with φ. In [10] we study3In conservative systems, translational, rotational, and time-shift sym-metries imply, by Noether’s theorem, the conservation of linear momentum,angular momentum, and energy, respectively [8].Fig. 2: Left: Solo sagittal (blue) and transversal (red) symme-try planes of the base body. Right: Solo’s kinematic tree, andpermutation symmetries of the legs/tree-branches.the case of G-equivariant/invariant neural networks (NN). 
Inthis section, we summarized the most relevant implications ofDMSs for this type of machine-learning model.•Computational implications of using G-equivariantNN. Thanks to recent theoretical and practical develop-ments [4, 10, 12], the use of G-equivariant NN insteadof unconstrained NN comes at the price of a negligibleincrease in memory and computational resources requiredduring training of the model. Most importantly, there isno difference, at inference time, between equivariant andunconstrained models.•Number of trainable parameters of a NN. Imposingequivariance/invariance constraints in NN signifies thereduction in the number of trainable parameters of themodel [4, 12, 2]. In practice, this implies that for a G-equivariant layer the number of trainable parameters isreduced by approximately1/|G|being |G|the numberof symmetries of the data (i.e., number of DMSs ofthe system). Therefore a G-equivariant architecture withG=C2(robot Atlas in fig. 1-left), or G=K4(Solo infig. 1-center) will have approximately1/2(Atlas) or1/4(Solo) of the trainable parameters of an unconstrainedNN of the same architectural size. The reduction ofparameters is caused by the parameter sharing and isvisually depicted in fig. 1-right.An increasing amount of theoretical [2, 14] and empirical[12, 3, 10, 13] evidence suggest that when the data featuressymmetries, the use of equivariant/invariant function approx-imation models leads to increase generalization capabilitiesand a reduction in sample complexity. On [10] we presentempirical evidence in robotics in a synthetic and real-worldlearning application. Here, we summarize the results of thereal-world application.V. E XPERIMENTSWe present a supervised experiment using real-world datain a classification application to showcase the effectivenessof Discrete Morphological Symmetries (DMSs) for data aug-mentation and training equivariant functions. The goal is todemonstrate the positive impact of exploiting DMSs on themodel’s sample efficiency and generalization capacity. Fora detailed analysis of the technical aspects and additionalexperiments, please refer to [10].Fig. 3: Static-Friction-Regime contact detection results comparing CNN, CNN-aug, and ECNN. Left: Sample efficiencyin log-log scale. Middle: Average legs F1-score. Right: Classification metrics on test set performance of models trained withthe entire training set. The selected metrics include contact-state ( y∈R16) accuracy (Acc) and f1-score (F1) for each legbinary contact state. Due to the sagittal symmetry of the robot, the left front (LF) and right front (RF) legs are expected to besymmetric, as well as the left hind (LH) and right hind (RH) legs. F1-score is presented considering the dataset class imbalance(see [10]). The reported values represent the average and standard deviation across 8different seeds.A. Static-friction-regime contact detection (Classification)In this experiment, we utilize the dataset introduced inLin et al. [7] for estimating static-friction-regime contacts inthe foots of the Mini-Cheetah quadruped robot. The datasetconsists of real-world proprioceptive data ( ˆq, ̇ˆq, base linearacceleration, base angular velocity, and leg feet positions andvelocities) captured over a history of 150time-frames. Thesemeasurements were obtained from inboard sensors duringlocomotion, encompassing various gaits and terrains. Thedataset also includes y∈R16, representing the ground truthcontact state of the robot, which was estimated offline usinga non-causal algorithm. 
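Because the Mini-Cheetah's morphology is (approximately) sagittally symmetric, this dataset can be doubled by reflection before any training, which is what the CNN-aug baseline described next does. The sketch below is a minimal C2 augmentation assuming a hypothetical per-leg feature layout and sign-flip mask; the actual group actions for each robot are provided in the companion repository, and the dataset's 16-way contact-class encoding would permute correspondingly (here the labels are kept as four per-leg booleans for clarity).

```python
import numpy as np

# Hypothetical layout: per-leg feature blocks ordered [LF, RF, LH, RH], and a boolean mask
# marking the lateral (y-axis) quantities whose sign flips under the sagittal reflection g_s.
LEG_ORDER = ["LF", "RF", "LH", "RH"]
REFLECTED_LEG_ORDER = ["RF", "LF", "RH", "LH"]      # left <-> right


def reflect_sample(x_legs, lateral_mask, contact_labels):
    """Apply g_s to one sample.

    x_legs:         (4, d) array, one row of d proprioceptive features per leg
    lateral_mask:   (d,) boolean array, True where the feature flips sign under reflection
    contact_labels: (4,) binary contact state per leg, in the same leg order
    """
    perm = [LEG_ORDER.index(name) for name in REFLECTED_LEG_ORDER]
    gx = x_legs[perm].copy()
    gx[:, lateral_mask] *= -1.0                     # e.g. y-components of positions/velocities
    gy = contact_labels[perm].copy()                # contact states travel with their legs
    return gx, gy


def augment_dataset(X, lateral_mask, Y):
    """Return the original samples plus their reflected copies (doubling the dataset)."""
    pairs = [reflect_sample(x, lateral_mask, y) for x, y in zip(X, Y)]
    GX, GY = map(np.stack, zip(*pairs))
    return np.concatenate([X, GX]), np.concatenate([Y, GY])
```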
Our goal is to train a causal functionapproximator ˆf(x;φ)to predict the contact state based on theinput proprioceptive data.The Mini-Cheetah robot in the real-world exhibits an ap-proximate reflection symmetry group, G ≈ C 2. As a result,both the proprioceptive data xand the contact state ysharethe symmetry group G. In this experiment, we compare threevariants of function approximators: the original ConvolutionalNeural Network architecture proposed by Lin et al. [7] (CNN),a version of CNN trained with data augmentation (CNN-aug),and a version of CNN that incorporates hard-equivarianceconstraints (E-CNN).The sampling efficiency and average leg contact state classi-fication results are depicted in fig. 3-left-&-middle. The equiv-ariant model, E-CNN, demonstrates superior generalizationperformance and robustness to dataset biases compared tothe unconstrained models [10]. Following E-CNN, CNN-augexhibits better performance than the original CNN. In fig. 3-right, we evaluate the classification metrics of the test set whenusing the entire training data. The E-CNN model outperformsboth CNN-aug and CNN in contact state classification andaverage leg contact detection. Notably, exploiting symmetrieshelps mitigate suboptimal asymmetries in the models, prevent-ing them from favoring the classification of one leg over others(observe legs LF and RF in fig. 3-right).VI. C ONCLUSIONS & D ISCUSSIONIn this work, we summarize the findings presented in [10],where we present the definition of Discrete MorphologicalSymmetry (DMS): a capability of some dynamical systemsto imitate the effect of rotations, translations, and infeasiblereflections of space with a feasible discrete change in thesystem configuration. Using the language of group theory westudy the set of DMSs of a dynamical system as a symmetrygroup Gand conclude that: (1) A system with a symmetrygroup Gexhibits G-equivariant generalized mass matrix anddynamics. (2) That the symmetries of the dynamics extendto optimal control policies as well as to any proprioceptiveand exteroceptive measurements, related to the evolution ofthe system’s dynamics.We establish the necessary theoretical abstractions to inves-tigate and identify DMSs in any dynamical system, irrespectiveof the number of symmetries present. This new formalismallows us to identify the reflection/sagittal symmetry, prevalentin humans, animals, and most robots, as the simplest morpho-logical symmetry group G=C2. Crucially, we use the sameformalism to identify and exploit DMSs in real-world roboticsystems with a greater number of symmetries.In addition, we provide an open-access repository thatfacilitates the efficient prototyping of G-equivariant neuralnetworks for exploiting DMS in various applications involvingrigid-body dynamics, such as robotics, computer graphics, andcomputational biology. This repository includes a growingcollection of symmetric dynamical systems, with their cor-responding symmetry groups already identified. Furthermore,we present compelling empirical and theoretical evidencesupporting the utilization of DMSs in data-driven applicationsthrough data augmentation and the adoption of G-equivariantneural networks. Both symmetry exploitation techniques resultin improved sample efficiency and generalization.ACKNOWLEDGMENTSThis work’s experiments were run at the Barcelona Super-computing Center in collaboration with the HPAI group. 
Thiswork is supported by the Spanish government with the projectMoHuCo PID2020-120049RB-I00 and the ERA-Net Chisteraproject IPALM PCI2019-103386.REFERENCES[1] Farzad Abdolhosseini, Hung Yu Ling, Zhaoming Xie,Xue Bin Peng, and Michiel Van de Panne. On learn-ing symmetric locomotion. In Motion, Interaction andGames , pages 1–10. 2019.[2] Michael M Bronstein, Joan Bruna, Taco Cohen, andPetar Veli ˇckovi ́c. Geometric deep learning: Grids,groups, graphs, geodesics, and gauges. arXiv preprintarXiv:2104.13478 , 2021.[3] Marc Finzi, Gregory Benton, and Andrew G Wilson.Residual pathway priors for soft equivariance constraints.Advances in Neural Information Processing Systems , 34:30037–30049, 2021.[4] Marc Finzi, Max Welling, and Andrew Gordon Wil-son. A practical method for constructing equivariantmultilayer perceptrons for arbitrary matrix groups. InInternational Conference on Machine Learning , pages3318–3328. PMLR, 2021.[5] Kaveh Akbari Hamed and Jessy W Grizzle. Event-based stabilization of periodic orbits for underactuated3-d bipedal robots with left-right symmetry. IEEETransactions on Robotics , 30(2):365–381, 2013.[6] G ́abor Holl ́o. Demystification of animal symmetry:Symmetry is a response to mechanical forces. BiologyDirect , 12(1):1–18, 2017.[7] Tzu-Yuan Lin, Ray Zhang, Justin Yu, and Maani Ghaf-fari. Legged robot state estimation using invariant kalmanfiltering and learned contact events. In 5th AnnualConference on Robot Learning , 2021.[8] Emmy Noether. Invariante variationsprobleme, math-phys. Klasse, pp235-257 , 1918.[9] Daniel Ordonez-Apraez, Antonio Agudo, FrancescMoreno-Noguer, and Mario Martin. An adaptable ap-proach to learn realistic legged locomotion without ex-amples. In 2022 International Conference on Roboticsand Automation (ICRA) , pages 4671–4678. IEEE, 2022.[10] Daniel Ordonez-Apraez, Mario Martin, Antonio Agudo,and Francesc Moreno-Noguer. On discrete symmetriesof robotics systems: A group-theoretic and data-drivenanalysis. arXiv preprint arXiv:2302.10433 , 2023.[11] Jon M Selig. Geometric fundamentals of robotics ,volume 128. Springer, 2005.[12] Elise Van der Pol, Daniel Worrall, Herke van Hoof,Frans Oliehoek, and Max Welling. Mdp homomorphicnetworks: Group symmetries in reinforcement learning.Advances in Neural Information Processing Systems , 33:4199–4210, 2020.[13] Rui Wang, Robin Walters, and Rose Yu. Approximatelyequivariant networks for imperfectly symmetric dynam-ics.arXiv preprint arXiv:2201.11969 , 2022.[14] Rui Wang, Robin Walters, and Rose Yu. Data aug-mentation vs. equivariant networks: A theory of gen-eralization on dynamics forecasting. arXiv preprintarXiv:2206.09450 , 2022.[15] Raymond Yeh, Yuan-Ting Hu, and Alexander Schwing.Chirality nets for human pose regression. Advances inNeural Information Processing Systems , 32, 2019.[16] Wenhao Yu, Greg Turk, and C Karen Liu. Learning sym-metric and low-energy locomotion. ACM Transactions onGraphics (TOG) , 37(4):1–12, 2018.[17] Martin Zinkevich and Tucker Balch. Symmetry inmarkov decision processes and its implications for singleagent and multi agent learning. In In Proceedings ofthe 18th International Conference on Machine Learning .Citeseer, 2001. |
yY5avw0u6G | Continual Reinforcement Learning with GroupSymmetriesShiqi Liu∗, Mengdi Xu∗, Peide Huang, Xilun Zhang, Yongkang Liu†, Kentaro Oguchi†and Ding ZhaoDepartment of Mechanical Engineering, Carnegie Mellon University†Toyota Motor North America R&DAbstract —Continual reinforcement learning aims to sequen-tially learn a variety of tasks, retaining the ability to performpreviously encountered tasks while simultaneously developingnew policies for novel tasks. However, current continual RLapproaches overlook the fact that certain tasks are identicalunder basic group operations like rotations or translations,especially with visual inputs. They may unnecessarily learn andmaintain a new policy for each similar task, leading to poorsample efficiency and weak generalization capability. To addressthis, we introduce a unique Continual Vision-based Reinforce-ment Learning method that recognizes Group Symmetries, calledCOVERS, cultivating a policy for each group of equivalent tasksrather than an individual task. COVERS employs a proximalpolicy gradient-based (PPO-based) algorithm to train each policy,which contains an equivariant feature extractor and takes inputswith different modalities, including image observations androbot proprioceptive states. It also utilizes an unsupervised taskclustering mechanism that relies on 1-Wasserstein distance on theextracted invariant features. We evaluate COVERS on a sequenceof table-top manipulation tasks in simulation and on a real robotplatform. Our results show that COVERS accurately assignstasks to their respective groups and significantly outperformsbaselines by generalizing to unseen but equivariant tasks inseen task groups. Demos are available on our project page:https://sites.google.com/view/rl-covers/.I. INTRODUCTIONQuick adaptation to unseen tasks has been a key objective inthe field of reinforcement learning (RL) [11, 19, 18]. RL algo-rithms are usually trained in simulated environments and thendeployed in the real world. However, pre-trained RL agentsare likely to encounter new tasks during their deploymentdue to the nonstationarity of the environment. Blindly reusingpolicies obtained during training can result in substantialperformance drops and even catastrophic failures [42, 16].Continual RL (CRL), also referred to as lifelong RL,addresses this issue by sequentially learning a series of tasks.It achieves this by generating task-specific policies for thecurrent task, while simultaneously preserving the ability tosolve previously encountered tasks [18, 15, 37, 23, 36].Existing CRL works that rely on the task delineations to handlenon-stationary initial states, dynamics or reward functions cangreatly boost task performance, particularly when significanttask changes occur [37]. However, in realistic task-agnosticsettings, these delineations are unknown a prior and have tobe identified by the agents. In this work, we explore how todefine and detect task delineations to enhance robots’ learningcapabilities in task-agnostic CRL .Equivariant Policy NetworkReflect Task ConfigurationReflectActionFig. 1: This example illustrates how group symmetry enhancesadaptability. 
The robot is instructed to close drawers situatedin two distinct locations with top-down images as inputs.Considering the symmetry of the drawers’ locations aroundthe robot’s position, the optimal control policies are equivalentbut mirrored.Our key insight is that robotic control tasks typically pre-serve certain desirable structures, such as group symmetries .Existing CRL approaches typically delineate task boundariesbased on statistical measures, such as maximum a posterioriestimates and likelihoods [37, 23]. However, these measuresoverlook the geometric information inherent in task repre-sentations, which naturally emerge in robotic control tasks,as demonstrated in Figure 1. Consider the drawer-closingexample: conventional CRL works using image inputs wouldtreat each mirrored configuration as a new task and learnthe task from scratch. Yet, we, as humans, understand thatthe mirrored task configuration can be easily resolved bycorrespondingly reflecting the actions. Learning the mirroredtask from scratch hampers positive task interference and limitsthe agent’s adaptivity. To address this issue, our goal is toexploit the geometric similarity among tasks in the task-agnostic CRL setting to facilitate rapid adaptation to unseenbut geometrically equivalent tasks.In this work, we propose COVERS, a task-agnostic vision-based CRL algorithm with strong sample efficiency and gen-eralization capability by encoding group symmetries in thestate and action spaces. We define a task group as the set thatcontains equivalent tasks under the same group operation, suchas rotations and reflections. We state our main contributionsas follows:1) COVERS grows a PPO-based [26] policy with an equiv-ariant feature extractor for each task group, instead ofa single task, to solve unseen tasks in seen groups in azero-shot manner.2) COVERS utilizes a novel unsupervised task groupingmechanism, which automatically detects group bound-aries based on 1-Wasserstein distance in the invariantfeature space.3) In non-stationary table-top manipulation environments,COVERS performs better than baselines in terms ofaverage rewards and success rates. Moreover, we showthat (a) the group symmetric information from theequivariant feature extractor promotes the adaptivityby maximizing the positive interference within eachgroup, and (b) the task grouping mechanism recoversthe ground truth group indexes, which helps minimizethe negative interference among different groups.II. R ELATED WORKTask-Agnostic CRL. CRL has been a long-standing prob-lem that aims to train RL agents adaptable to non-stationaryenvironments with evolving world models [28, 27, 8, 24, 38,16, 17, 20, 1, 29]. In task-agnostic CRL where task identifi-cations are unrevealed, existing methods have addressed theproblem through a range of techniques. These include hierar-chical task modeling with stochastic processes [37, 23], meta-learning [18, 25], online system identification [40], learning arepresentation from experience [36, 5], and experience replay[24, 7]. Considering that in realistic situations, the new taskmay not belong to the same task distribution as past tasks,we develop an ensemble model of policy networks capableof handling diverse unseen tasks, rather than relying on asingle network to model dynamics or latent representations.Moreover, prior work often depends on data distribution-wisesimilarity or distances between latent variables, implicitlymodeling task relationships. 
In contrast, we aim to intro-duce beneficial inductive bias explicitly by developing policynetworks with equivariant feature extractors to capture thegeometric structures of tasks.Symmetries in RL. There has been a surge of interestin modeling symmetries in components of Markov DecisionProcesses (MDPs) to improve generalization and efficiency[21, 22, 30, 31, 33, 34, 41, 32, 43, 9, 13, 14]. MDP homomor-phic network [30] preserves equivariant under symmetries inthe state-action spaces of an MDP by imposing an equivarianceconstraint on the policy and value network. As a result, itreduces the RL agent’s solution space and increases sampleefficiency. This single-agent MDP homomorphic network isthen extended to the multi-agent domain by factorizing globalsymmetries into local symmetries [31]. SO(2)-EquivariantRL [33] extends the discrete symmetry group to the group ofcontinuous planar rotations, SO(2), to boost the performancein robotic manipulation tasks. In contrast, we seek to exploitthe symmetric properties to improve the generalization capa-bility of task-agnostic CRL algorithms and handle inputs withmultiple modalities.III. P RELIMINARYMarkov decision process. We consider a Markov decisionprocess (MDP) as a 5-tuple (S,A, T, R, γ ), where SandAarethe state and action space, respectively. T:S ×A → ∆(S)isthe transition function, R:S ×A → Ris the reward function,andγis the discount factor. We aim to find an optimal policyπθ:S → A parameterized by θthat maximizes the expectedreturn Eτ∼πθhPH−1t=0γtr(st, at)i, where His the episodelength.Invariance and equivariance. LetGbe a mathematicalgroup. f:X → Y is a mapping function. For a transformationLg:X → X that satisfies f(x) =f(Lg[x]),∀g∈G, x∈ X,we say fis invariant to Lg. Equivariance is closely related toinvariance. If we can find another transformation Kg:Y → Ythat fulfills Kg[f(x)] =f(Lg[x]),∀g∈G, x∈ X then we sayfis equivariant to transformation Lg. It’s worth noting thatinvariance is a special case of equivariance.MDP with group symmetries. In MDPs with symmetries[21, 22, 30], we can identify at least one mathematical groupGof a transformation Lg:S → S and a state-dependentaction transformation Ksg:A → A , such that R(s, a) =RLg[s], Ksg[a], T(s, a, s′) =TLg[s], Ksg[a], Lg[s′]holdfor all g∈G, s, s′∈ S, a∈ A.Equivariant convolutional layer. LetGbe a Euclideangroup, with the special orthogonal group and reflection groupas subgroups. We use the equivariant convolutional layerdeveloped by Weiler and Cesa [35], where each layer consistsof G-steerable kernels k:R2→Rcout×cinthat satisfiesk(gx) =ρout(g)k(x)ρing−1,∀g∈G, x∈R2.ρinandρoutare the types of input vector field fin:R2→Rcinand outputvector field fout:R2→Rcout, respectively.Equivariant MLP. An equivariant multi-layer perceptron(MLP) consists of both equivariant linear layers and equiv-ariant nonlinearities. An equivariant linear layer is a linearfunction Wthat maps from one vector space Vinwith type ρinto another vector space with type ρoutfor a given group G.Formally ∀x∈Vin,∀g∈G:ρout(g)Wx=Wρ in(g)x. Herewe use the numerical method proposed by Finzi et al. [12] toparameterize MLPs that are equivariant to arbitrary groups.IV. M ETHODOLOGYA. Problem FormulationWe focus on continual learning in table-top manipulationenvironments, where various tasks are sequentially presented.We hypothesize that the streaming tasks can be partitionedinto task groups, each containing tasks that share symmetrywith one another. 
We adopt a realistic setting where a new taskgroup may emerge at each episode, the total number of distinctgroups remains unknown and the group may arrive in randomorders. The primary objective is to devise an online learningalgorithm capable of achieving high performance across alltasks with strong data efficiency. We visualize our CRL settingwith table-top manipulation environments in Figure 2.TimestepsDrawer CloseButton PressPlate SlideGoal ReachStreaming GroupsFig. 2: The continual learning environment setup involves four task groups, including Plate Slide, Button Press, Drawer Close,and Goal Reach. Groups streamingly come in.Equivariant Feature ExtractorInitialFrameCurrentFrameRobotState&AuxiliaryInfo.EquivariantConvolutionalNetworksEquivariantLinearNetworksEquivariantFeaturesGroupMaxPoolingInvariantFeaturesMLPValueActionDistanceMetricEquivariantMLPFig. 3: Equivariant policy network architecture.B. AlgorithmWe present the pseudocode for COVERS, a task-agnosticcontinual RL method with group symmetries, in Algorithm 1.COVERS maintains a collection Π ={(π,B)}, each elementof which comprising a pair of policy πand its respective databuffer B. Each policy πindependently manages one group oftasks, with Bstoring the initial frames of the group it oversees.At fixed time intervals, COVERS collects Nssteps in parallelunder the current policy πcurand stores the first kframesfrom each episode in the rollout buffer O. Based on O, thealgorithm then either (a) creates a new policy for an unseengroup and adds it to the collection Π, or (b) recalls an existingpolicy from the collection Πif the group has been previouslyencountered. It is worth noting that we assign policies basedon initial frames of each episode rather than the full episoderollout. This is because frames corresponding to later timestepsare heavily influenced by the behavior policy and could easilylead to unstable policy assignments. Only maintaining a subsetof the rollout trajectories also helps alleviate memory usage.After the policy assignment, the selected policy πcurwithparameters θis updated based on an online rollout bufferDand Proximal Policy gradient (PPO) method [26] withloss in Equation 1. ˆAtis the estimated advantage, ρt=πθ(at|st)/πθold(at|st)is the importance ratio and εis the cliprange.LCLIP =Eτ∼DhHXt=1min[ρt(θ)ˆAt,clip(ρt(θ),1−ε,1+ε)ˆAt]i.(1)C. Policy Network ArchitectureCOVERS utilizes an equivariant policy network that com-prises a policy network for predicting actions, a value networkapproximating values, and an equivariant feature extractortaking multiple modalities. We show the policy architecturein Figure 3 and additional details in Figure 10.Equivariant feature extractor. In real manipulation tasks,the observations typically comprise multiple modalities, suchas image observations, robot proprioceptive states, and goalpositions represented in vector form. To accommodate thesediverse modalities, we designed an equivariant feature extrac-torhequi, that employs an equivariant convolutional networkheConv[35] for image processing, coupled with an equiv-ariant linear network heMLP[6] to handle vector inputs.The resulting equivariant features from these two pathwaysare concatenated to form the output of the feature extractor.Formally, hequi(s) =Concat (heConv(s), heMLP(s)).Invariant value and equivariant policy. In the contextof MDPs involving robotic manipulation tasks with groupsymmetries, it is known that the optimal value functionmaintains group invariance, while the optimal policy displaysgroup equivariance [33]. 
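As detailed next, the value branch reduces equivariant features to invariant ones with a group max-pooling step. A minimal way to see why pooling over the group yields invariance is to pool a shared backbone over all group-transformed copies of the observation. This is only a schematic equivalent: COVERS obtains equivariant features with steerable convolutions [35] and equivariant MLPs [12] and pools over their group dimension, but the invariance argument is the same. The K4 image transforms below (two flips and a 180-degree rotation of the top-down view) are our assumed instantiation of the group action.

```python
import torch


def k4_transforms(img):
    """Four top-down image transforms forming the Klein four-group:
    identity, left-right flip, up-down flip, and 180-degree rotation."""
    return [
        img,
        torch.flip(img, dims=[-1]),
        torch.flip(img, dims=[-2]),
        torch.flip(img, dims=[-2, -1]),
    ]


def invariant_features(backbone, img):
    """h_inv(s): run a shared backbone on every transformed copy and max-pool over the group.

    Because the set {L_g[s] : g in G} is the same for s and for any transformed g'.s,
    the max over that set cannot change, so the output is G-invariant by construction.
    backbone: maps a (B, C, H, W) batch to (B, d) features.
    """
    feats = torch.stack([backbone(t) for t in k4_transforms(img)], dim=0)  # (|G|, B, d)
    return feats.max(dim=0).values                                         # (B, d)
```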
To attain this, both the policy andvalue networks utilize a shared equivariant feature extractor,designed to distill equivariant features from observations.Subsequently, the value network leverages a group poolinglayer to transform these equivariant features into invariantones, before employing a fully connected layer to generatevalues. Formally, hinv(s) =GroupMaxPooling (hequi(s)). Thepolicy network, on the other hand, processes the equivariantfeatures with an additional equivariant MLP network to outputactions.D. Unsupervised Dynamic Policy AssignmentIn COVERS, we propose to detect different groups of tasksbased on distances in the invariant feature space . Such amechanism facilitates knowledge transfer between tasks ineach group. At a fixed episode interval, COVERS selects thepolicy of the group, whose data buffer Bhas the minimaldistance in the invariant feature space to the rollout buffer Ocollected in the current environment. Note that the invariantfeatures of both OandBare obtained through the featureAlgorithm 1 COVERS: Continual Vision-based RL withGroup SymmetriesInput : Threshold dε, initial frame number k, update intervalNu, rollout step size NsOutput : collection of policies ΠInitialization : Current policy πcurinitialized as a randompolicy with a policy data buffer B ← ∅, policy collectionΠ← {(πcur,B)}, number of episodes n←0, online rolloutbuffer D ←∅1:while task not finish do2: n←n+ 13: ifn%Nu= 0then4: Rollout buffer O ←∅ ▷Unsupervised PolicyAssignment5: Rollout Nssteps with πcurand get trajectories τ={(s0, a0, . . . , s H, aH)}6: Append the first kframes of each episode to rolloutbuffer O ← { (s0, . . . , s k−1)}7: Append the whole episode trajectories τto theonline rollout buffer D8: Calculate the 1-Wasserstein distancesdWi(O,Bi),∀{πi,Bi} ∈Π(Equation 2)9: Get the minimum distance dWjwhere j=arg min idWi(O,Bi)10: ifdj> dεthen11: Initialize a new random policy πas well as itspolicy data buffer B ← O12: πcur←π,Π←Π∪ {{π,B}}13: else14: Assign the existing policy and buffer withπcur←πj,Bj← B j∪ O15: Update πcurbased on online rollout buffer D(Equation 1) ▷Equivariant Policy Update16: D ←∅17: else18: Sample an episode and append to online rolloutbuffer Dextractor of πas shown in Figure 4. Considering that OandBmay have a different number of data pairs, we takea probablistic perspective by treating those data buffers assample-based representations of two distributions and use theWasserstein distance to measure the distance between thosetwo feature distributions. The invariant features are obtainedfrom the equivariant feature extractor via a group max-poolingoperation as shown in Figure 3.Wasserstein distance on invariant feature space. LetXandYbe a matrix constructed by invariant features extractedfrom the state buffer Bof size nand the buffer Oof sizem. Concretly, X= (X1, X2, ..., X n)T, Xi=hinv(si), i∈[n], si∈ B, and Y= (Y1, Y2, ..., Y m)T, Yl=hinv(sl), l∈[m], sl∈ O. We use the 1-Wasserstein distance [4] to measurethe distance between two empirical distributions XandY.rollout buffer Oh!"#$π!B$h!"#%π"B%d&$d&%online rollout Buffer DFig. 4: Calculation of 1-Wasserstein distance and update ofselected policy πj, whose data has minimal distance to O.Hence the distance between OandBisdW(O,B) =W1(X,Y) = minγ⟨γ,M⟩Fs.t.γ1=a, γT1=b, γ≥0, (2)where Mi,l=∥Xi−Yl∥2,a= [1 /n, . . . , 1/n],b=[1/m, . . . , 1/m].Mis the metric cost matrix.V. 
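The distance in Equation (2) and the assignment rule of Algorithm 1 (lines 8-14) are compact in code. The sketch below assumes the POT library (`ot.dist`, `ot.emd2`) for the discrete optimal-transport problem and a hypothetical `make_policy()` factory; in COVERS the features of both buffers are computed with each candidate policy's own invariant extractor, whereas here they are taken as precomputed arrays.

```python
import numpy as np
import ot  # POT: Python Optimal Transport


def wasserstein_1(X, Y):
    """Equation (2): 1-Wasserstein distance between two sets of invariant features.

    X: (n, d) features from a policy's data buffer B
    Y: (m, d) features from the current rollout buffer O
    """
    a, b = ot.unif(len(X)), ot.unif(len(Y))     # uniform empirical weights
    M = ot.dist(X, Y, metric="euclidean")       # M[i, l] = ||X_i - Y_l||_2
    return ot.emd2(a, b, M)                     # optimal transport cost <gamma, M>_F


def assign_policy(policies, rollout_feats, d_eps, make_policy):
    """Algorithm 1, lines 8-14: recall the closest group, or grow a new policy.

    policies: list of (policy, buffer_features) pairs, one per task group discovered so far
    """
    if policies:
        dists = [wasserstein_1(buf_feats, rollout_feats) for _, buf_feats in policies]
        j = int(np.argmin(dists))
        if dists[j] <= d_eps:
            pi, buf_feats = policies[j]
            policies[j] = (pi, np.concatenate([buf_feats, rollout_feats]))
            return pi
    # no sufficiently close group: create a new policy for the (presumed) new task group
    pi_new = make_policy()
    policies.append((pi_new, rollout_feats))
    return pi_new
```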
S IMULATION EXPERIMENTSWe validate COVERS’s performance in robot manipulation[39] tasks with nonstationary environments containing differ-ent objects or following different reward functions. We aimto investigate whether our method can (1) recall stored policywhen facing a seen group, as well as automatically initializea new policy when encountering an unseen group, (2) achievesimilar or better performance compared to baselines, and (3)understand the significance of key components of COVERS.A. EnvironmentSimulation setup. Our manipulation setup is composedof four groups of tasks. Each group contains four tasks,and all tasks within the same group exhibit rotational orreflectional symmetry with respect to each other. We buildenvironments based on the Meta-World benchmark [39]. Meta-World features a variety of table-top manipulation tasks thatrequire interaction with diverse objects using a Sawyer robot.We show the four groups of tasks in Figure 2 including GoalReach for reaching a goal position, Button Press for pressingthe button with gripper, Drawer Close for closing drawer withgripper, and Plate Slide for sliding the plate to a goal position.The goal positions and object locations of tasks in each groupare symmetrically arranged around the center of the table.States and actions. The agent receives four kinds ofobservations: an RGB image captured by a top-down cameracentered over the table at each timestep, an RGB imagecaptured by the same camera at the beginning of the episode,the robot state including gripper’s 3D coordinates and openingangle, and auxiliary information. The RGB image at the initialstep helps alleviate the occlusion problem caused by themovement of the robot. The auxiliary information containsFig. 5: Training curves for COVERS and other methods. Each background color corresponds to one task group. COVERS showssimilar performance with COVERS-GT, which utilizes additional ground truth group indices, and substantially outperformsother baselines.Fig. 6: The selected policies at each episode of COVERS. Each background color corresponds to one task group. The assignedpolicy indexes remain in alignment with the ground truth ones.3D goal positions which are only revealed to the agent inGoal Reach since the goal locations are not visualized inthe captured image, and are masked out for other groups. Toclose the sim-to-real gap, we prepossess the RGB images byinpainting robot arms motivated by [2], with details deferred toSection E. A comparison of the original and processed imagesis visualized in Figure 7. The action is a four-dimensionalvector containing the gripper’s 3D positions and its openingangle. Considering that we utilize two distinct robots: Sawyerin the simulation and Kinova in the real-world, such an actionspace and the image preprocessing mechanism help improvetransferability between different robot morphologies.B. Baselines and AblationsWe compare COVERS with different methods detailed asfollows. 3RL [5], an acronym for Replay-based Recurrent RL,is a state-of-the-art method in CRL with Meta-World tasks thatintegrates experience replay [24] and recurrent neural networks[3]. Note that we augment 3RL with a convolutional neuralnetwork (CNN) to handle image inputs. In contrast, CLEAR[24], a common baseline of CRL, only utilize the experiencereplay by maintaining a memory buffer to store the experienceof the past tasks and oversamples the current tasks to boost theperformance in the current one. 
Equi utilizes a single policywith an equivariant feature extractor to solve all tasks. CNNutilizes a single policy with a CNN-based feature extractoras a vanilla baseline. We provide the detailed implementationof baselines and hyperparameters in Section D. We comparewith two ablation methods. COVERS-GT uses ground truthgroup labels to assign policies to different groups, which helpsablate the performance of our proposed policy assignmentmechanism. COVERS-CNN utilizes a vanilla CNN block asthe image feature extractor to help ablate the effect of usingequivariant feature extractors.VI. S IMULATION RESULTS AND ABLATIONSA. ResultsDynamic policy assignments. Figure 6 shows that whenthe environment switches to a new group, COVERS quicklydetects changes and initializes a new policy for the group.Our method also recalls the corresponding policy from thecollection when facing the same group again. Overall, thedynamic policy assignments generated by COVERS align wellwith the ground truth group labels. However, we observe someinstances where the policy assignment does not match theground truth. This could potentially be attributed to the factthat the feature extractor of each policy may not be able tocapture representative features for each group during the earlystages of training. Notably, the rate of such misclassificationssignificantly reduces as the number of training episodes in-creases.Training performance. We show the training curves ofall methods in Figure 5 and the quantitative performance inTable II, including the average success rates and mean rewards.COVERS achieves a much higher episode reward and successrate consistently in different groups than baselines. It is worthnoting that although 3RL performs worse than COVERS,it achieves better performance than baselines with implicittask representations, including Equi, CLEAR, and CNN. Thisindicates that the explicit task representation used by 3RL,which maps transition pairs to latent variables using an RNN,facilitates the revelation of partial task identifications, therebyenhancing performance. It underscores the significance of task-specific representations in CRL.In the early stages of training, there isn’t a significantperformance difference between COVERS and Equi. However,as training progresses, COVERS begins to outperform Equi.This is because COVERS avoids the problem of forgettingthrough the retraining of policies for each previously en-countered task group. A comparison between CNN and Equireveals that incorporating group symmetries as inductive biaswithin the equivariant network significantly enhances sampleefficiency. This is achieved by only optimizing the policy forthe abstracted MDP of each task group.B. Ablation StudyThe effect of group symmetric information. COVERS-CNN devoid of the invariant feature extractor demonstrateslower episodic rewards and success rates when compared withCOVERS as shown in Table I and Figure 5. From these results,we conclude that the equivariant feature extractor significantlyenhances performance by modeling group symmetry informa-tion by introducing beneficial inductive bias through its modelarchitecture.The effect of the dynamic policy assignment module InFigure 5, COVERS’s training curve is similar to COVERS-GT, which uses ground truth group indexes as extra priorknowledge. Table I shows that the performance drop dueto misclassification is minor considering the small standarddeviation and COVERS’s performance is within one or twostandard deviations of COVERS-GT.VII. 
R EAL-WORLD VALIDATIONReal-world setup. Our real-world experiment setup utilizesa Kinova GEN3 robotic arm with a Robotiq 2F-85 gripper.The top-down RGB image is captured with an Intel RealSenseD345f. Gripper’s coordinates and opening angle are obtainedthrough the robot’s internal sensors. The real robot setupsare demonstrated in Figure 8. We directly deploy the trainedpolicies in simulation to the real world. Table II shows averagesuccess rates across 20 trials and shows that our trainedpolicies have strong generalization capability to real-worldscenarios. The performance drop compared with simulationexperiments may be due to the inconsistent visual featuresand different scales of robots’ action spaces.VIII. C ONCLUSIONWe propose COVERS, a novel Vision-based CRL frame-work that leverages group symmetries to facilitate general-ization to unseen but equivalent tasks under the same groupoperations. COVERS detects group boundaries in an unsuper-vised manner based on invariant features and grows policiesRealSimEnvironment SetupOriginal Top-down ImageProcessed Top-down ImageCameraFig. 7: Image preprocessing to narrow down the sim-to-realgap.Fig. 8: The real Kinova GEN3 setup with four task groups.The goal point marked in the figure is only disclosed to theagent in Goal Reach as auxiliary information.for each group of equivalent tasks instead of a single task.We show that COVERS assigns tasks to different groupswith high accuracy and has a strong generalization capability,outperforming baselines by a large margin. One limitationof COVERS is that the memory it occupies grows linearlywith the number of task groups. However, it is worth notingTABLE I: Quantitative results showing performances at convergence for different methods.Methods COVERS 3RL CLEAR CNN Equi COVERS-GT COVERS-CNNPlate SlideSuccess Rate 0.97±0.02 0.28±0.06 0 .06±0.03 0 .03±0.02 0 .02±0.02 0 .91±0.03 0 .62±0.05Ave. Reward 344.04±12.89 101.20±7.35 65 .65±2.23 23 .44±1.14 64 .02±5.85 337 .44±13.87 232 .25±14.24Button PressSuccess Rate 0.87±0.04 0.52±0.06 0 .31±0.06 0 .09±0.03 0 .01±0.01 0 .87±0.04 0 .26±0.05Ave. Reward 323.41±3.48 260.80±6.86 138 .78±12.23 91 .34±9.34 121 .13±7.02 330 .56±2.63 181 .21±10.83Drawer CloseSuccess Rate 0.82±0.04 0 .40±0.06 0 .27±0.05 0 .16±0.04 0 .40±0.05 0.98±0.02 0.56±0.05Ave. Reward 400.09±6.18 280 .62±6.39 216 .08±7.68 116 .33±10.1 273 .26±9.67 417.38±5.6 227.3±13.0Goal ReachSuccess Rate 0.98±0.02 0.60±0.06 0 .58±0.06 0 .14±0.04 0 .47±0.05 0 .97±0.02 0 .97±0.02Ave. Reward 483.53±1.35 322 .23±17.33 293 .5±16.16 151 .24±14.31 306 .72±20.34 488.02±0.35 480.96±1.05AverageSuccess Rate 0.91±0.02 0 .44±0.03 0 .30±0.03 0 .1±0.02 0 .22±0.02 0.93±0.01 0.60±0.03Ave. Reward 387.77±5.02 241 .21±7.39 178 .5±7.58 95 .59±5.59 191 .28±8.23 393.35±5.19 280.43±8.49TABLE II: Real-world validation results.Task Groups Success RatePlate Slide 0.45±0.15Button Press 0.60±0.15Drawer Close 0.65±0.15Goal Reach 0.95±0.07that COVERS still occupies less memory than maintaining apolicy buffer for each task by only storing representative dataframes such as the initial frames for each task group. Anotherlimitation is that although assuming a top-down camera witha fixed base is widely adopted in existing works, it is hard tofulfill outside of labs. It would be interesting to incorporatemore general group operations, such as affine transformationand domain randomization techniques, to handle deformedimages. 
Another interesting future direction is extending ourwork to continual multi-agent RL settings.ACKNOWLEDGMENTThe authors gratefully acknowledge the support from theunrestricted research grant from Toyota Motor North America.The ideas, opinions, and conclusions presented in this paperare solely those of the authors.REFERENCES[1] Hongjoon Ahn, Sungmin Cha, Donggyu Lee, and Tae-sup Moon. Uncertainty-based continual learning withadaptive regularization. Advances in neural informationprocessing systems , 32, 2019.[2] Shikhar Bahl, Abhinav Gupta, and Deepak Pathak.Human-to-robot imitation in the wild. arXiv preprintarXiv:2207.09450 , 2022.[3] Bram Bakker. Reinforcement learning with long short-term memory. Advances in neural information processingsystems , 14, 2001.[4] Vladimir I Bogachev and Aleksandr V Kolesnikov. Themonge-kantorovich problem: achievements, connections,and perspectives. Russian Mathematical Surveys , 67(5):785, 2012.[5] Massimo Caccia, Jonas Mueller, Taesup Kim, LaurentCharlin, and Rasool Fakoor. Task-agnostic continualreinforcement learning: In praise of a simple baseline.arXiv preprint arXiv:2205.14495 , 2022.[6] Gabriele Cesa, Leon Lang, and Maurice Weiler. A pro-gram to build e(n)-equivariant steerable CNNs. In Inter-national Conference on Learning Representations , 2022.URL https://openreview.net/forum?id=WE4qe9xlnQw.[7] Arslan Chaudhry, Marcus Rohrbach, Mohamed Elho-seiny, Thalaiyasingam Ajanthan, Puneet K Dokania,Philip HS Torr, and M Ranzato. Continual learning withtiny episodic memories. 2019.[8] Zhiyuan Chen and Bing Liu. Lifelong machine learning.Synthesis Lectures on Artificial Intelligence and MachineLearning , 12(3):1–207, 2018.[9] Taco Cohen and Max Welling. Group equivariant con-volutional networks. In International conference onmachine learning , pages 2990–2999. PMLR, 2016.[10] Pavel I Etingof, Oleg Golberg, Sebastian Hensel, TiankaiLiu, Alex Schwendner, Dmitry Vaintrob, and Elena Yu-dovina. Introduction to representation theory , volume 59.American Mathematical Soc., 2011.[11] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep net-works. In International Conference on Machine Learn-ing, pages 1126–1135. PMLR, 2017.[12] Marc Finzi, Max Welling, and Andrew Gordon Wil-son. A practical method for constructing equivariantmultilayer perceptrons for arbitrary matrix groups. InInternational Conference on Machine Learning , pages3318–3328. PMLR, 2021.[13] Fabian Fuchs, Daniel Worrall, V olker Fischer, and MaxWelling. Se (3)-transformers: 3d roto-translation equiv-ariant attention networks. Advances in Neural Informa-tion Processing Systems , 33:1970–1981, 2020.[14] Michael J Hutchinson, Charline Le Lan, Sheheryar Zaidi,Emilien Dupont, Yee Whye Teh, and Hyunjik Kim.Lietransformer: Equivariant self-attention for lie groups.InInternational Conference on Machine Learning , pages4533–4543. PMLR, 2021.[15] Khimya Khetarpal, Matthew Riemer, Irina Rish, andDoina Precup. Towards continual reinforcement learn-ing: A review and perspectives. arXiv preprintarXiv:2012.13490 , 2020.[16] Khimya Khetarpal, Matthew Riemer, Irina Rish, andDoina Precup. Towards continual reinforcement learning:A review and perspectives. Journal of Artificial Intelli-gence Research , 75:1401–1476, 2022.[17] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz,Joel Veness, Guillaume Desjardins, Andrei A Rusu,Kieran Milan, John Quan, Tiago Ramalho, AgnieszkaGrabska-Barwinska, et al. Overcoming catastrophic for-getting in neural networks. 
Proceedings of the nationalacademy of sciences , 114(13):3521–3526, 2017.[18] Anusha Nagabandi, Ignasi Clavera, Simin Liu, Ronald SFearing, Pieter Abbeel, Sergey Levine, and Chelsea Finn.Learning to adapt in dynamic, real-world environmentsthrough meta-reinforcement learning. arXiv preprintarXiv:1803.11347 , 2018.[19] Anusha Nagabandi, Chelsea Finn, and Sergey Levine.Deep online learning via meta-learning: Continualadaptation for model-based rl. arXiv preprintarXiv:1812.07671 , 2018.[20] Sam Powers, Eliot Xing, Eric Kolve, Roozbeh Mottaghi,and Abhinav Gupta. Cora: Benchmarks, baselines, andmetrics as a platform for continual reinforcement learn-ing agents. In Conference on Lifelong Learning Agents ,pages 705–743. PMLR, 2022.[21] Balaraman Ravindran and Andrew G Barto. Symmetriesand model minimization in markov decision processes,2001.[22] Balaraman Ravindran and Andrew G Barto. Approximatehomomorphisms: A framework for non-exact minimiza-tion in markov decision processes. 2004.[23] Hang Ren, Aivar Sootla, Taher Jafferjee, Junxiao Shen,Jun Wang, and Haitham Bou-Ammar. Reinforcementlearning in presence of discrete markovian context evo-lution. arXiv preprint arXiv:2202.06557 , 2022.[24] David Rolnick, Arun Ahuja, Jonathan Schwarz, TimothyLillicrap, and Gregory Wayne. Experience replay forcontinual learning. Advances in Neural InformationProcessing Systems , 32, 2019.[25] Steind ́or Sæmundsson, Katja Hofmann, and Marc Pe-ter Deisenroth. Meta reinforcement learning withlatent variable gaussian processes. arXiv preprintarXiv:1803.07551 , 2018.[26] John Schulman, Filip Wolski, Prafulla Dhariwal, AlecRadford, and Oleg Klimov. Proximal policy optimizationalgorithms. arXiv preprint arXiv:1707.06347 , 2017.[27] Fumihide Tanaka and Masayuki Yamamura. An approachto lifelong reinforcement learning through multiple en-vironments. In 6th European Workshop on LearningRobots , pages 93–99, 1997.[28] Sebastian Thrun and Tom M Mitchell. Lifelong robotlearning. Robotics and autonomous systems , 15(1-2):25–46, 1995.[29] Ren ́e Traor ́e, Hugo Caselles-Dupr ́e, Timoth ́ee Lesort,Te Sun, Guanghang Cai, Natalia D ́ıaz-Rodr ́ıguez, andDavid Filliat. Discorl: Continual reinforcement learningvia policy distillation. arXiv preprint arXiv:1907.05855 ,2019.[30] Elise van der Pol, Daniel Worrall, Herke van Hoof,Frans Oliehoek, and Max Welling. Mdp homomorphicnetworks: Group symmetries in reinforcement learning.Advances in Neural Information Processing Systems , 33:4199–4210, 2020.[31] Elise van der Pol, Herke van Hoof, Frans A Oliehoek,and Max Welling. Multi-agent mdp homomorphic net-works. arXiv preprint arXiv:2110.04495 , 2021.[32] Dian Wang, Jung Yeon Park, Neel Sortur, Lawson LSWong, Robin Walters, and Robert Platt. The surprisingeffectiveness of equivariant models in domains withlatent symmetry. arXiv preprint arXiv:2211.09231 , 2022.[33] Dian Wang, Robin Walters, and Robert Platt. So (2)equivariant reinforcement learning. In International con-ference on learning representations (ICLR) , 2022.[34] Dian Wang, Robin Walters, Xupeng Zhu, and RobertPlatt. Equivariant qlearning in spatial action spaces.InConference on Robot Learning , pages 1713–1723.PMLR, 2022.[35] Maurice Weiler and Gabriele Cesa. General e (2)-equivariant steerable cnns. Advances in Neural Infor-mation Processing Systems , 32, 2019.[36] Annie Xie, James Harrison, and Chelsea Finn. Deepreinforcement learning amidst continual structured non-stationarity. In International Conference on MachineLearning , pages 11393–11403. 
PMLR, 2021.[37] Mengdi Xu, Wenhao Ding, Jiacheng Zhu, Zuxin Liu,Baiming Chen, and Ding Zhao. Task-agnostic online re-inforcement learning with an infinite mixture of gaussianprocesses. Advances in Neural Information ProcessingSystems , 33:6429–6440, 2020.[38] Mengdi Xu, Zuxin Liu, Peide Huang, Wenhao Ding,Zhepeng Cen, Bo Li, and Ding Zhao. Trustworthyreinforcement learning against intrinsic vulnerabilities:Robustness, safety, and generalizability. arXiv preprintarXiv:2209.08025 , 2022.[39] Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian,Karol Hausman, Chelsea Finn, and Sergey Levine. Meta-world: A benchmark and evaluation for multi-task andmeta reinforcement learning. In Conference on RobotLearning , pages 1094–1100. PMLR, 2020.[40] Wenhao Yu, Jie Tan, C Karen Liu, and Greg Turk.Preparing for the unknown: Learning a universal pol-icy with online system identification. arXiv preprintarXiv:1702.02453 , 2017.[41] Linfeng Zhao, Xupeng Zhu, Lingzhi Kong, Robin Wal-ters, and Lawson LS Wong. Integrating symmetryinto differentiable planning with steerable convolutions.InThe Eleventh International Conference on LearningRepresentations , 2023.[42] Wenshuai Zhao, Jorge Pe ̃na Queralta, and Tomi Wester-lund. Sim-to-real transfer in deep reinforcement learningfor robotics: a survey. In 2020 IEEE Symposium Serieson Computational Intelligence (SSCI) , pages 737–744.IEEE, 2020.[43] Xupeng Zhu, Dian Wang, Ondrej Biza, Guanang Su,Robin Walters, and Robert Platt. Sample efficientgrasp learning using equivariant models. arXiv preprintarXiv:2202.09468 , 2022.APPENDIXIn this section, we briefly introduce Group and Represen-tation Theory [10] to help understand the policy structure inSection F.Linear group representations describe abstract groups interms of linear transformations on some vector spaces. In par-ticular, they can be used to represent group elements as lineartransformations (matrices) on that space. A representation ofa group Gon a vector space Vis a group homomorphismfrom GtoGL(V), the general linear group on V . That is, arepresentation is a mapρ:G→GL (V),such that ρ(g1g2) =ρ(g1)ρ(g2),∀g1, g2∈G. (3)Here Vis the representation space, and the dimension of Vis the dimension of the representation.A. Trivial RepresentationTrivial representation maps any group element to the iden-tity, i.e.∀g∈G, ρ(g) = 1 . (4)B. Irreducible RepresentationsA representation of a group Gis said to be irreducible(shorthand as irrep ) if it has no non-trivial invariant subspaces.For example, given a group Gacting on a vector spaceV,Vis said to be irreducible if the only subspaces of Vpreserved under the action of every group element are thezero subspace and Vitself. The trivial representation is anirreducible representation and is common to all groups.C. Regular RepresentationGiven a group G, the regular representation is a represen-tation over a vector space Vwhich has a basis indexed bythe elements of G. In other words, if Ghasnelements (ifGis finite), then the regular representation is a representationon a vector space of dimension n. An important fact aboutthe regular representation is that it can be decomposed intoirreducible representations in a very structured way.D. Dihedral GroupThe dihedral group Dnis the group of symmetries of a reg-ular n-sided polygon, including nrotations and nreflections.Thus, Dnhas2nelements. For example, the dihedral groupof a square ( D4) includes 4 rotations and 4 reflections, giving8 transformations in total.E. 
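As a small numerical illustration of these definitions (not part of the paper's implementation), the snippet below writes down one common convention for D2 acting on the plane — identity, 180° rotation, and reflections across the x- and y-axes — and reads off the trivial representation together with the two non-trivial one-dimensional irreducible representations that later encode the gripper's x and y coordinates.

```python
import numpy as np

# D2 acting on the plane: identity, 180-degree rotation, and reflections
# across the x- and y-axes (one common convention; the paper's exact
# choice of generators may differ).
D2 = {
    "e":  np.diag([ 1.0,  1.0]),
    "r":  np.diag([-1.0, -1.0]),   # rotation by pi
    "sx": np.diag([ 1.0, -1.0]),   # reflection across the x-axis
    "sy": np.diag([-1.0,  1.0]),   # reflection across the y-axis
}

# Every matrix is diagonal, so the x- and y-coordinate subspaces are
# invariant: each carries a 1-dimensional irreducible representation.
irrep_x = {g: float(M[0, 0]) for g, M in D2.items()}   # x-coordinate irrep
irrep_y = {g: float(M[1, 1]) for g, M in D2.items()}   # y-coordinate irrep
trivial = {g: 1.0 for g in D2}                          # trivial irrep

# Closure check: the product of any two D2 elements is again a D2 element,
# so the map g -> D2[g] is a genuine (2-dimensional) representation.
names = list(D2)
for g1 in names:
    for g2 in names:
        prod = D2[g1] @ D2[g2]
        assert len([g for g in names if np.allclose(D2[g], prod)]) == 1

# Quantities such as the gripper's z-height or opening angle would use the
# trivial irrep (invariant), while its x and y coordinates would use
# irrep_x and irrep_y (equivariant), as described in Section F.
print(irrep_x)  # {'e': 1.0, 'r': -1.0, 'sx': 1.0, 'sy': -1.0}
print(irrep_y)  # {'e': 1.0, 'r': -1.0, 'sx': -1.0, 'sy': 1.0}
print(trivial)
```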
Image InpaintingTo close the sim-to-real gap, we employ a pre-processingtechnique on camera images, which involves in-paintingrobotic arms. The process begins by capturing a backgroundimage in which the robotic arm is absent from the camera’sview. For every time step, a mask that represents the po-sition of each robotic limb is generated, leveraging the 3Dlocations of individual joints and the projection matrix of thecamera. With this mask, we can select all areas devoid of therobotic arm, and subsequently update the background imageaccordingly. The images are subjected to a color correctionprocess to mitigate any potential color deviations attributableto lighting or reflection. Lastly, a distinct blue circle is overlaidat the gripper’s position on the background image to indicatethe gripper’s location. The entire image in-painting process isshown in Figure 9.F . Detailed Policy ArchitectureIn this section, we present the detailed model architectureincluding the model sizes and the types of each layer inFigure 10.In order to make our policy network equivariant undertransformations from the finite group D2, we need to choosethe appropriate representation for both the network input andoutput, while also ensuring that the network architecture andoperations preserve this equivariance.The image input is encoded using the trivial representation.The robot state, on the other hand, is encoded with a mixtureof different representations: the gripper’s position on the z-axis and the gripper’s open angle are encoded with the trivialrepresentation since they are invariant to group actions in D2.The gripper’s location on the x and y-axes, however, are en-coded with two different non-trivial irreducible representationsbecause their values are equivariant to group actions in D2.The value output is encoded with the trivial representationsince the optimal value function should be invariant to groupactions [33]. Finally, the action output is encoded with amixture of different representations. For actions, the grippermovement along the z-axis and the gripper’s opening angleare encoded with the trivial representation, while the gripper’slocation on the x and y-axes are encoded with two differentnon-trivial irreducible representations, aligning with the inputencoding. The distance metric is encoded with trivial repre-sentation through the group pooling operation.G. Implementation of CLEARThe CLEAR algorithm [24] addresses the challenge ofcontinual learning by putting data from preceding tasks ina buffer, utilized subsequently for retraining. This methodeffectively decelerates the rate of forgetting by emulating acontinuous learning setting. The specific network architecturefor CLEAR is illustrated in Figure 11.To make CLEAR able to process both images and robotstate as input, we introduce a feature extractor, which har-moniously integrates a CNN and an MLP network. Thiscomposite feature extractor is carefully designed to containa similar quantity of learnable parameters to our Equivariantfeature extractor.H. Implementation of 3RLThe 3RL algorithm [5] can be seen as an improved versionof CLEAR, wherein additional historical data is provided tothe actor and critic from a dedicated context encoder. Thishistorical data includes (si, ai, ri), and the context encoderFig. 9: Image inpainting process.Fig. 10: Detailed equivariant policy network architecture. ReLU nonlinearity is omitted in the figure. A layer with a suffix ofR indicates the layer output is in the regular representation. 
A layer with a suffix of T indicates the layer output is in the trivial representation. A layer with a suffix of 'mix' means the layer output combines different representations.

extracts task specificities from the history data with an RNN network. The specific network architecture for 3RL is illustrated in Figure 12.

I. Hyperparameters

We show the hyperparameters of our proposed COVERS in Table III. Moreover, we show the hyperparameters of baselines in Table IV.

TABLE III: COVERS Hyperparameters
  Hyperparameter                            Value
  Wasserstein distance threshold d_ε        1.0
  Initial frame number k                    4
  Update interval N_u                       1000
  Rollout buffer size N_s                   1000
  Batch size                                64
  Number of epochs                          8
  Discount factor                           0.99
  Optimizer learning rate                   0.0003
  Likelihood ratio clip range ε             0.2
  Advantage estimation λ                    0.95
  Entropy coefficient                       0.001
  Max KL divergence                         0.05

TABLE IV: CLEAR and 3RL Hyperparameters
  Common hyperparameters                    Value
  Replay buffer size                        200000
  Discount factor                           0.95
  Burn-in period                            20000
  Warm-up period                            1000
  Batch size                                512
  Gradient clipping range                   (−1.0, +1.0)
  Learning rate                             0.0003
  Entropy regularization coefficient        0.005
  3RL-specific hyperparameters
  RNN's number of layers                    1
  RNN's context size                        30
  RNN's context length                      5

Fig. 11: Network architecture for CLEAR. In (a) we show the network architecture of the actor network and the critic network. In (b) we show the structure of the feature extractor, which consists of both a CNN network and an MLP network. ReLU nonlinearity is omitted in the figure.

Fig. 12: Network architecture for 3RL. In (a), we illustrate the structure of both the actor and critic networks, whereas (b) highlights the configuration of the context encoder, comprising a feature extractor and GRUs. It's noteworthy that the feature extractor has the same architecture as the CLEAR algorithm, as shown in Figure 11. |
N8KlLRpevrT | Point-based Correspondence Estimation for ClothAlignment and ManipulationMansi Agarwal, Thomas Weng, David HeldThe Robotics Institute, Carnegie Mellon University, USA{magarwa2,tweng,dheld }@andrew.cmu.eduAbstract —Automating cloth folding is a challenging task withpractical implications in various domains. Existing methodsoften struggle with unaligned configurations, limiting their ap-plicability in real-world scenarios. In this research, we presentFabricFlowAlignNet (FFAN), a novel approach that learns flow-based correspondences on point clouds between the currentobserved and goal cloth configurations. We use these learned3D correspondences for both cloth alignment and manipula-tion: correspondences are used to align the observed clothwith the goal, and the flow-based correspondences are re-usedas action proposals. Our experiments demonstrate that FFANdemonstrates superior performance compared to a state-of-the-art folding approach, particularly in scenarios where observedcloth is rotated or otherwise unaligned with the goal.I. I NTRODUCTIONCloth manipulation is a challenging task, with difficulties inboth perception and control due to the deformability of cloth.Manual cloth manipulation techniques are time-consuming,labor-intensive, and prone to human error. As a result, thereis a growing demand to automate cloth manipulation invarious domains such as folding laundry, handling textiles inmanufacturing, and assistive dressing.A fundamental aspect of successful cloth manipulation isestablishing correspondences between the current observationand the goal configuration. These correspondences providespatial associations necessary for planning and executing fold-ing actions. However, while prior methods have proposed tolearn correspondences for cloth [10, 4], they do not explicitlyuse such methods for reasoning about the alignment betweenthe observed cloth and the desired configuration. Alignment isa crucial step in cloth manipulation, and prior correspondence-based policies do not handle cases where the cloth and goalare not aligned [10], or rely on human demonstrations [4].In this work, we propose FabricFlowAlignNet (FFAN ),an approach that combines the use of correspondences andsymmetry-handling techniques to learn a goal-conditionedcloth manipulation policy. Our method leverages correspon-dences to “virtually” align the observation and goal pointclouds, enabling the policy to determine the appropriate ac-tions to execute on the observation. By incorporating thesecorrespondences and symmetry handling, our approach aimsto acquire an understanding of cloth folding strategies anddevelop a manipulation policy capable of accurately andefficiently folding clothes. This is particularly beneficial inchallenging scenarios where the observed cloth is rotated orunaligned with the desired goal configuration.Fig. 1: Performance of FabricFlowAlignNet (FFAN) vs. FNNon unaligned goals. FFAN uses an alignment procedure onlearned correspondences to achieve the desired manipulations.We evaluate the performance of our method against a state-of-the-art folding approach [10] on a folding task, wherethe goal and observation poses are not aligned. Our methodreasons about symmetries and employs correspondences todeal with unaligned goals, unlike the baseline. The resultsdemonstrate the effectiveness and robustness of our approachin achieving successful cloth folding when the observation andgoal configurations are unaligned.II. 
P RIOR WORKFabricFlowNet (FFN) [10] performed bimanual cloth fold-ing by estimating flow correspondences between the observedcloth image and goal cloth image. However, FFN relies onstrict alignment between observation and goal cloth poses inthe image. Our approach extends FFN by proposing an ap-proach for aligning learned 3D correspondences to overcomethese limitations. By establishing spatial relationships betweenpoints in observation and goal configurations, we enableprecise alignment and achieve better folding performance forunaligned goals than FFN.Fabric Descriptors [4] is a method for learning correspon-dences in fabric manipulation tasks using a dense contrastiveloss. However, once the correspondences are learned, theproposed policy relies on human demonstrations. In contrast,our method can learn and estimate correspondences withoutany human demonstrations.SpeedFolding [1], employs self-supervised learning anda small number of expert demonstrations to perform clothsmoothing and folding. However, Speedfolding is trainedFig. 2: Overview of the FFAN pipeline.exclusively in the real world. In comparison, our approach,similar to FFN, undergoes training in simulation before beingtransferred to the real world.Cloth Funnels [2] proposes a method that uses self-supervised rewards to learn both cloth canonicalization andalignment. Their alignment procedure is an iterative versionof the Procrustes’ algorithm, which is designed for aligningrigid objects. However, since the objects being aligned aredeformed fabrics, the alignment achieved using Procrustes canbe a local optimum. In contrast, our approach proposes usingrandom sample consensus (RANSAC) for aligning deformedfabrics, resulting in an asymptotic, globally optimal alignment.III. P OINT CLOUD CORRESPONDENCE ESTIMATION FORCLOTH ALIGNMENT AND MANIPULATIONIn this section, we describe FabricFlowAlignNet (FFAN),our approach for estimating observation-goal correspondencesto align and manipulate cloth. A schematic overview can befound in Fig. 2.A. Learning Correspondences for Point CloudsAs the first component of our overall pipeline, wepropose a 3D, flow-based correspondence estimator called“3DFlowNet” (Fig. 3). 3DFlowNet takes the observation andgoal point clouds coandcgas input, and outputs 3D flowˆf. 3DFlowNet is a non-trivial extension of the FlowNet fromFabricFlowNet [10], which was limited to 2D.Fig. 3: 3DFlowNet ArchitectureWe first transform the point clouds into a graph, wherenodes represent cloth particles and are connected to theirneighboring particles on the cloth mesh. This step requiresprivileged state information from the simulator of the clothmesh edges, which would not be available in the real world;estimating these edges is an area of future work and couldleverage prior methods like VCD [5]. We embed the inputgraphs by employing a graph neural network H, which outputsembeddings for each node in the graph: c′o, c′g∈RN×F.We then use a Transformer network [7] denoted as Ttoperform cross-attention between observation and goal features.Our approach is inspired by prior Transformer-based per-pointnetworks like DCP [9] and TAX-Pose [6]. Ttakes c′oandc′gas input and outputs transformed embeddings c′∈RN×F. Theresulting transformer embeddings, c′, are then summed withthe original observation embeddings c′oto produce c′′o.To estimate correspondences, we pass c′′othrough MLP lay-ersMto produce estimated correspondences ˆf∈RNx3. 
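The forward pass just described can be sketched as follows. The hidden width, the number of attention heads, the use of torch_geometric's GATConv for the graph encoder H, and treating point positions as the only node features are assumptions made for illustration; they are not the paper's exact 3DFlowNet hyperparameters.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv


class FlowNet3DSketch(nn.Module):
    """Sketch of the 3DFlowNet forward pass: graph encoder H, cross-attention T,
    residual sum, and MLP head M producing per-point 3D correspondences."""

    def __init__(self, in_dim: int = 3, feat_dim: int = 128, heads: int = 4):
        super().__init__()
        # Shared graph encoder H: two graph-attention layers over the cloth mesh graph.
        self.gat1 = GATConv(in_dim, feat_dim)
        self.gat2 = GATConv(feat_dim, feat_dim)
        # Cross-attention T between observation and goal embeddings.
        self.cross_attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)
        # MLP head M mapping each fused feature to a 3D flow vector.
        self.head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, 3)
        )

    def encode(self, x, edge_index):
        h = torch.relu(self.gat1(x, edge_index))
        return torch.relu(self.gat2(h, edge_index))

    def forward(self, obs_pos, obs_edges, goal_pos, goal_edges):
        c_o = self.encode(obs_pos, obs_edges)      # (N, F) observation features
        c_g = self.encode(goal_pos, goal_edges)    # (N, F) goal features
        # Attend from observation points to goal points, then add residually (c'' = c_o + T(c_o, c_g)).
        attn_out, _ = self.cross_attn(
            c_o.unsqueeze(0), c_g.unsqueeze(0), c_g.unsqueeze(0)
        )
        c_oo = c_o + attn_out.squeeze(0)
        return self.head(c_oo)                     # (N, 3) estimated flow f_hat
```

Only a single cross-attention block is shown; stacking several such blocks, as in DCP-style architectures, is a natural variant.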
Thesecorrespondences represent how each cloth particle transportsto achieve the goal configuration.To train 3DFlowNet, we use ground truth correspondencebetween point clouds cgandcoand a weighted L2 lossL2(ˆf, f) =PNi=1wi(fi−ˆfi)2where ˆfrepresents the esti-mated correspondences, frepresents the ground truth corre-spondences, and Nis the total number of points in the pointcloud. The weights wiare higher for ground truth pick points.B. Iterative Correspondence EstimationTo improve correspondence estimation when the displace-ment between observed and desired goal configurations islarge, we introduce an iterative approach to improve the accu-racy of our correspondence estimation. Our iterative processinvolves transporting the input point cloud to the positionsindicated by the estimated correspondences, and then re-computing the estimation with this intermediate point cloud.Each iteration of this procedure should further refine theestimated correspondence.In each iteration, we utilize the trained 3DFlowNet modelto estimate the correspondence between the intermediate pointcloud ˆcoand the target configuration cg. By integrating theestimated correspondence into the observation, we simulatethe application of the flow to progressively approach the tar-get configuration. The algorithm for iterative correspondenceestimation is summarized in Alg. 1.C. RANSAC Alignment for Unaligned GoalsThe correspondences estimated by 3DFlowNet indicate howeach point in the observed cloth configuration should move toreach the desired configuration. In the case where the observa-tion and goal are aligned with respect to each other, this per-point flow correspondence represents a desired cloth manipu-lation, which we can use to estimate the action (Sec. III-D).However, in cases where the observation and goal are notAlgorithm 1 Iterative Correspondence Estimation1:Input: Trained 3DFlowNet, Point Clouds co, cg2:Initialize all zeros ̄f∈RN×33:ˆco:=co4:fork = 1. . . K do5: ˆf= 3DFlowNet( ˆco,cg)6: ̄f+=ˆf7: ˆco+=ˆf8:end for9:return ̄faligned, the flow correspondences contain both informationabout alignment as well as the desired manipulation.To address the cases where the goal is not aligned, wefirst propose estimating the alignment using the flow-basedcorrespondence and RANSAC [3]. The forward pass through3DFlowNet provides the estimated correspondences. TheRANSAC procedure attempts to find an alignment transformwith the maximum number of inlier cloth points as follows:1) Sample three indices (i, j, k )on the cloth.2) Compute the transformation matrix Tbetween the 3sampled cloth points ( pi,pj,pk) and their estimatedcorrespondences ( pi+ˆfi,pj+ˆfj,pk+ˆfk).3) Compute inliers by transforming all current cloth pointspaccording to T, computing the distance between trans-formed points and points transported using estimatedflow||Tp−(p+ˆf)||, and thresholding the per-pointdistance by an epsilon ε.4) Sample mtimes and choose the transformation matrixTwith the maximum number of inliers.Once the alignment Thas been estimated, we virtuallyalign the observation and goal, re-estimate correspondencesgiven the alignment, and determine the manipulation actionaccording to the following section.D. Estimating the Pick Location for an ActionFig. 4: 3DPickNet ArchitectureTo predict the pick points necessary for cloth manipulation,we introduce a neural network called “3DPickNet”. Similar toFFN [10], our method supports bimanual manipulation and iscapable of estimating both pick points p1andp2. 
The inputsto 3DPickNet are the current observation coand the estimatedcorrespondences ˆfbetween coand the goal configuration cg.The architecture of 3DPickNet is depicted in Figure 4.To enable the prediction of the second pick point condi-tioned on the first pick point, we utilize two separate networks:3DPickNet1 and 3DPickNet2. In 3DPickNet1, we concatenatecoand ˆfand create a graph representation of the pointcloud. Each node in the graph is represented as [x, y, z, ˆf].3DPickNet1 generates a probability value for each node to beselected as the first pick point p1. The node with the highestprobability is identified as p1.3DPickNet2 is responsible for predicting the second pickpoint p2, taking p1into account. In this network, we introducean additional input channel called ˆp1, which represents a 3DGaussian distribution centered on p1. This channel assignshigher values to nodes near p1and lower values to nodesfarther away, to give PickNet2 information about the first picklocation when predicting the second pick point p2.For training 3DPickNet, we use a weighted binary cross-entropy loss. The loss function compares the predicted proba-bilities of nodes being pick points with the ground truth labels.The binary cross-entropy loss function is defined as:L(p, y) =NXi=1−wi(yilogpi+ (1−yi) log(1 −pi)) (1)where prepresents the predicted probabilities, yis the groundtruth labels, and Nis the total number of nodes. The weightswiare higher for ground truth pick points.Once the pick points p1andp2are predicted using theestimated correspondences ˆf, the corresponding actions canbe executed to achieve the desired goal configuration.E. Implementation DetailsWe use the same dataset as FabricFlowNet [10], but usepoint clouds of the cloth instead of depth images, and use 3Dpick and place points instead of 2D. The graph neural networkHfor 3DFlowNet consists of two Graph Attention layers(GATConv) [8]. The MLP network architecture Mconsistsof two fully-connected layers. The 3DPickNet architectureconsists of three Graph Attention Network layers and twoMLP layers for both 3DPickNet1 and 3DPickNet2. At the endof each network, a Sigmoid layer computes the probabilityof each node being a pick point. 3DPickNet1 representseach node with a six-dimensional feature, while 3DPickNet2utilizes a seven-dimensional feature to accommodate the ad-ditional information provided by ˆp1.IV. E XPERIMENTSOur experiments investigate the following questions: (1)How does FFAN compare with FabricFlowNet (FFN) [10] onaligned goals? (2) How does FFAN compare with FFN onunaligned goals? We evaluate the methods in simulation, usingthe average L2 distance between cloth points in the achievedvs. desired point clouds as our error metric.A. Performance on Aligned GoalsWe use the same test set as FFN [10] to evaluate perfor-mance on aligned goals. This test set consists of 40 single-stepgoals, where both the observation and the goal positioned atthe center of the workspace with the same orientation. For thisexperiment, we do not use alignment estimation with FFANto directly compare pre-aligned folding performance.Table I presents the performance comparison between ourmethod and FFN. The results demonstrate that our methodperforms comparably to FFN on aligned goals, with only amarginal difference in average particle distance.TABLE I: Folding Performance on 40 Aligned GoalsMethod Average Particle Distance (mm) ↓FFN [10] 4.26FFAN (Ours) 5.54B. 
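Steps 1–4 of the RANSAC alignment above can be sketched as follows. The use of a standard Kabsch/SVD fit for the three-point transform, and the particular values of the inlier threshold ε and sample count m, are assumptions for illustration; the paper leaves these choices unspecified here.

```python
import numpy as np


def fit_rigid(P: np.ndarray, Q: np.ndarray):
    """Least-squares rigid transform (R, t) mapping points P onto Q (Kabsch/SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t


def ransac_align(points, flow, m=500, eps=0.01, rng=None):
    """RANSAC over flow correspondences, following steps 1-4 above.

    points: (N, 3) observed cloth points; flow: (N, 3) estimated correspondences.
    Returns the rigid alignment (R, t) with the most inliers and the inlier count.
    """
    rng = np.random.default_rng() if rng is None else rng
    targets = points + flow
    best, best_inliers = (np.eye(3), np.zeros(3)), -1
    for _ in range(m):
        idx = rng.choice(len(points), size=3, replace=False)      # step 1: sample 3 points
        R, t = fit_rigid(points[idx], targets[idx])               # step 2: fit transform
        dists = np.linalg.norm(points @ R.T + t - targets, axis=1)
        n_in = int((dists < eps).sum())                           # step 3: count inliers
        if n_in > best_inliers:                                   # step 4: keep the best
            best, best_inliers = (R, t), n_in
    return best, best_inliers
```

Once (R, t) is found, the observation can be virtually aligned with the goal and the correspondences re-estimated, as described above.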
Performance on Unaligned GoalsWe also evaluate the performance on unaligned goals, wherethe goal cloth configuration is randomly rotated and there-fore not aligned with the initial observed configuration. Weconducted experiments on three test sets: Easy, Medium, andHard, where each test set corresponds to a different range ofrotations. Easy encompasses angles between -5 and 5 degrees,Medium ranges from -45 to 45 degrees, and Hard covers acomplete rotation from 0 to 360 degrees.We evaluated the performance of FFAN in two scenar-ios: using ground truth vs. estimated correspondences forRANSAC alignment. Figure 5 presents a comparison of thetwo methods against FFN across all four test sets: aligned,easy, medium, and hard. From the results, we observe thatour method with estimated correspondences outperforms FFNon the Medium and Hard tasks. However, using ground truthcorrespondences for RANSAC alignment yields even betterresults across all misaligned sets, surpassing the performanceof FFN. This demonstrates the potential for further improve-ment by improving the correspondence estimation. Qualitativeresults on the Medium case can be found in Fig. 1.Fig. 5: Comparison of Folding on Different Test SetsC. Ablations1) No Iterative Correspondence: In this section, we ablateour approach by removing iterative correspondence estimation(Sec. III-B). Table II shows that average particle distance erroris higher when iterative correspondence estimation is removed.TABLE II: Ablation of Iterative Correspondence EstimationMethod Average Particle Distance (mm) ↓FFAN w/o Iter. Corresp. 10.591FFAN w/ Iter. Corresp. 5.542) Number of Iterations for Iterative Correspondence:To determine the number of iterations to run for iterativecorrespondence estimation, we measured performance whileincreasing the number of iterations on a validation set. Weused flow prediction error, an unweighted version of the lossfrom Sec. III-A , as our performance metric. We evaluatednumber of iterations k= 1 (run 3DFlowNet once) to 4. Notethat we did not retrain 3DFlowNet in an iterative manner.Figure 6 shows the flow prediction error as a function ofthe number of iterations ( k) in the iterative flow process. Aswe increase kfrom 1 to 3, there is a notable decrease in theflow prediction error; however, beyond k= 3, we observeda slight increase in the error. Based on these observations,we empirically determined that the number of iterations foriterative flow correspondence estimation is k= 3.Fig. 6: Error vs. Number of Correspondence Estimation StepsV. C ONCLUSIONIn this work, we propose FabricFlowAlignNet (FFAN ), agoal-conditioned policy for cloth alignment and folding. Ourapproach estimates flow correspondences to reason about thealignment between the observed cloth and desired goal, thenpredicts actions given the estimated alignment. FFAN performson par with FFN for aligned goals, and outperforms FFN whenhandling large misalignments. Our ablations demonstrate theimportance of using iterative correspondence estimation andof selecting the number of iterations.Limitations and Challenges : Our method also currentlyrequires meshes as input; for cloth manipulation in the realworld, such a mesh will have to be estimated. 
Like FFN,our method relies on sub-goals, which can be restrictive andmay not generalize well to unseen fabrics and configurations.Exploring alternative approaches that eliminate explicit sub-goals is a potential direction for future work.ACKNOWLEDGMENTSThis work was supported by the US Air Force andDARPA (FA8750-18-C-0092) and the NSF (IIS-1849154,DGE2140739).REFERENCES[1] Yahav Avigal, Lars Berscheid, Tamim Asfour, TorstenKr ̈oger, and Ken Goldberg. Speedfolding: Learning effi-cient bimanual folding of garments. In 2022 IEEE/RSJInternational Conference on Intelligent Robots and Sys-tems (IROS) , pages 1–8. IEEE, 2022.[2] Alper Canberk, Cheng Chi, Huy Ha, Benjamin Burchfiel,Eric Cousineau, Siyuan Feng, and Shuran Song. Clothfunnels: Canonicalized-alignment for multi-purpose gar-ment manipulation. arXiv preprint arXiv:2210.09347 ,2022.[3] Martin A Fischler and Robert C Bolles. Random sampleconsensus: a paradigm for model fitting with applicationsto image analysis and automated cartography. Commu-nications of the ACM , 24(6):381–395, 1981.[4] Aditya Ganapathi, Priya Sundaresan, Brijen Thanan-jeyan, Ashwin Balakrishna, Daniel Seita, JenniferGrannen, Minho Hwang, Ryan Hoque, Joseph E Gon-zalez, Nawid Jamali, et al. Learning dense visual corre-spondences in simulation to smooth and fold real fabrics.In2021 IEEE International Conference on Robotics andAutomation (ICRA) , pages 11515–11522. IEEE, 2021.[5] Xingyu Lin, Yufei Wang, Zixuan Huang, and DavidHeld. Learning visible connectivity dynamics for clothsmoothing. In Conference on Robot Learning , pages256–266. PMLR, 2022.[6] Chuer Pan, Brian Okorn, Harry Zhang, Ben Eisner,and David Held. Tax-pose: Task-specific cross-poseestimation for robot manipulation. In Conference onRobot Learning , pages 1783–1792. PMLR, 2023.[7] Ashish Vaswani, Noam Shazeer, Niki Parmar, JakobUszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser,and Illia Polosukhin. Attention is all you need. Advancesin neural information processing systems , 30, 2017.[8] Petar Veli ˇckovi ́c, Guillem Cucurull, Arantxa Casanova,Adriana Romero, Pietro Lio, and Yoshua Bengio. Graphattention networks. arXiv preprint arXiv:1710.10903 ,2017.[9] Yue Wang and Justin M Solomon. Deep closest point:Learning representations for point cloud registration. InProceedings of the IEEE/CVF international conferenceon computer vision , pages 3523–3532, 2019.[10] Thomas Weng, Sujay Man Bajracharya, Yufei Wang,Khush Agrawal, and David Held. Fabricflownet: Bi-manual cloth manipulation with a flow-based policy. InConference on Robot Learning , pages 192–202. PMLR,2022. |
OFoo4631KAo | Edge Grasp Network: A Graph-BasedSE(3)-invariant Approach to Grasp DetectionHaojie Huang Dian Wang Xupeng Zhu Robin Walters Robert PlattKhoury College of Computer Science, Northeastern University{huang.haoj; wang.dian; zhu.xup; r.walters; r.platt }@northeastern.eduAbstract —Given point cloud input, the problem of 6-DoFgrasp pose detection is to identify a set of hand poses inSE(3) from which an object can be successfully grasped. Thisimportant problem has many practical applications. Here wepropose a novel method and neural network model that enablesbetter grasp success rates relative to what is available in theliterature. The method takes standard point cloud data as inputand works well with single-view point clouds observed fromarbitrary viewing directions. Videos and code are available athttps://haojhuang.github.io/edge grasp page/.I. I NTRODUCTIONGrasp detection [6, 25, 18] is a critical robotic skill. Therobot first observes a scene containing objects in the formof images, voxels, or point clouds, and detects a set of viablegrasp poses from which an object may be grasped stably. Thereare two general approaches: SE(2) methods where the modelreasons in terms of a top-down image of the scene (e.g. [13,15, 17, 12, 30]), and SE(3) methods where the model reasonsin terms of a point cloud or voxel grid (e.g. [6, 18, 8, 3]).SE(3) methods have a distinct advantage over SE(2) methodsbecause they have more flexibility and are easier to apply ingeneral robotics settings. Unfortunately, SE(3) methods aregenerally much more complex, so SE(2) models are oftenpreferred.This paper tackles the problem of SE(3) grasping with anovel grasp detection model that we call the Edge GraspNetwork . The model is based on a novel representation ofa 6-DoF grasp that uses a pair of vertices in a graph. Given asingle approach point (a position the hand will approach), wedefine a KNN graph that contains all the points in the pointcloud that are within a fixed radius of the approach point. Eachpoint in this KNN graph corresponds to an orientation of thegripper and, when paired with the approach point, defines adistinct 6-DOF grasp pose. We infer the quality of all suchgrasps simultaneously using a graph neural network.This approach is novel relative to the literature in threeways: 1) First, our method of defining unique grasp candidatesin terms of a pair of vertices in a graph is new; 2) Second,our inference model using a graph neural network definedwith respect to a single approach point is novel; 3) Third,our model is the first SE(3) grasp method that incorporatesSO(3) equivariance.II. P ROBLEM STATEMENTThe grasp detection problem is to locate a set of grasp posesinSE(3) for a parallel-jaw gripper given input about the scenein the form of a point cloud. Denote the point cloud observa-tion as P={pi∈R3}ni=1, where nis the number of points.For each point p∈P, we will assume that an estimate of theobject surface normal np∈S2can be calculated. Althoughit is not required, we generally assume that this point cloudis generated by a single depth camera. A grasp pose of thegripper is parameterized α= (C, R)∈SE(3) , where C∈R3is the location of the center of the gripper and R∈SO(3)represents its orientation. The grasp detection problem is tofind a function S:P7→ {αi∈SE(3)}mi=1,that maps Pontomgrasp poses detected in the scene. The grasp evaluationproblem is to find a function Φ : (P, α)7→[0,1], that denotesthe quality of grasp α. 
Notice that Φis invariant to translationand rotation in the sense that Φ(g·P, g·α) = Φ( P, α)for anarbitrary g∈SE(3) . In other words, the predicted quality ofa grasp attempt should be invariant to transformation of theobject to be grasped and the grasp pose by the same rotationand translation.III. M ETHODA. Grasp Pose RepresentationFig. 1. Grasp pose defined by the edge grasp(pa, pc). The reference frame of the gripperis illustrated by the RGB coordinate system.GwandGdare the gripper width and gripperdepth.We represent a grasp as apair of points in the cloud,(pa, pc)∈P2.pais con-sidered to be the approachpoint and pcis the contactpoint. Assuming that wecan estimate the object sur-face normal ncat point pc,(pa, pc)defines a grasp ori-entation Rwhere the grip-per fingers move parallelto the vector ncand thegripper approaches the ob-ject along the vector aac=nc×(nc×(pa−pc)). This is illustrated in Figure 1. Thegripper center Cis positioned such that pais directly betweenthe fingers and pcis at a desired point of contact on the finger,C=pa−δaac. Here, δ=Gd+ (pa−pc)Taacdenotes thedistance between the center of the gripper and paandGddenotes gripper depth. We will sometimes refer to a graspdefined this way as an edge grasp .To sample edge grasps, we will generally sample the ap-proach point pafirst and then for each approach point samplemultiple contact points pcfrom the neighbors of pawithin theFig. 2. Encoding process of edge grasps. The rightmost part shows the represented grasp of one edge feature.distance ofGw2, where Gwdenotes the aperture of the gripper,i.e. the distance between the fingers when the gripper is open.One key advantage of this representation is that we can easilyprovide the approximate position of a desired grasp as an inputto the model. If we want to grasp a tool by its handle, forexample, this is easily achieved by only considering contactlocations on the handle.B. Model ArchitectureOur model, which we call the Edge Grasp Network , evalu-ates the grasp quality for a set of edge grasps that have a singleapproach point pa∈Pin common. We evaluate multipleapproach points by cropping them separately and then placingthem in a batch. There are four steps, as illustrated in Figure 2.Step 1: Crop Point Cloud. Given a point cloud Pand anapproach point pa, only a set of neighboring points of paaffects the edge grasp. We crop the point cloud to a ball aroundpa:Sa={p∈P:∥p−pa∥2≤Gw/2},Step 2: PointNetConv ( ψ).We compute a feature at each pointusing a stack of PointNetConv layers [21], denoted ψ. Eachlayer calculates a new feature f(l+1)i at each point pi∈Sausingf(l+1)i = maxj∈N(i)MLPf(l)j, pj−pi, (1)where N(i)denotes the k-nearest neighbors to pi. Here, f(l)jdenotes the feature at point pjprior to the layer, max denotesmax-pooling where the max is taken over features (like inPointNet [20]). MLP is a 2-layer multi-layer perceptron thattakes both parameters as input. The input features at the firstlayer are the positions and surface normals of the points. LetFSadenote the set of features for the points in Saat the outputof Step 2.Step 3: Compute Global Feature ( ω).ωtakes FSaas inputand generates a single global feature gathat describes Sa.First, FSais passed to an MLP followed by a max-poolinglayer (over features) to generate a first-level global feature.This is concatenated with each feature f∈FSaand passed toa second MLP and max-pooling layer to output ga. 
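Before describing how edge features are formed and scored, the geometric construction of an edge grasp from Section III-A can be written out as a short sketch. Normalizing a_ac and the particular axis ordering of the gripper frame R are assumptions, since the paper fixes its own gripper-frame convention in Fig. 1.

```python
import numpy as np


def edge_grasp_pose(p_a, p_c, n_c, gripper_depth):
    """6-DoF grasp pose defined by the edge (p_a, p_c) with contact normal n_c.

    Fingers close along n_c, the hand approaches along
    a_ac = n_c x (n_c x (p_a - p_c)), and the gripper center is
    C = p_a - delta * a_ac with delta = G_d + (p_a - p_c)^T a_ac.
    """
    p_a, p_c, n_c = map(np.asarray, (p_a, p_c, n_c))
    n_c = n_c / np.linalg.norm(n_c)

    # Approach direction: component of (p_a - p_c) in the plane orthogonal to n_c
    # (a_ac is always perpendicular to n_c); normalization is an assumption.
    a_ac = np.cross(n_c, np.cross(n_c, p_a - p_c))
    a_ac = a_ac / np.linalg.norm(a_ac)

    # Distance from p_a to the gripper center along the approach axis.
    delta = gripper_depth + (p_a - p_c) @ a_ac
    C = p_a - delta * a_ac

    # Orthonormal gripper frame: closing axis n_c, approach axis a_ac,
    # and their cross product as the remaining axis (axis ordering assumed).
    y_axis = n_c                      # finger-closing direction
    z_axis = a_ac                     # approach direction
    x_axis = np.cross(y_axis, z_axis)
    R = np.stack([x_axis, y_axis, z_axis], axis=1)  # columns are the frame axes
    return R, C
```

Because the pose is a deterministic function of (p_a, p_c, n_c), sampling many contact points around a single approach point yields many candidate grasps whose quality can then be scored jointly from the shared features described above.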
Finally,for each edge grasp (pa, pc)∈P2associated with pa, wecalculate an edge feature fac∈Facby concatenating gawiththe point feature fc∈FSacorresponding to pc. This edgefeature will represent the edge grasp to the classifier.Step 4: Grasp Classification. After calculating the edge fea-tures Fac, we predict grasp success using a four-layer MLPwith a sigmoid function which takes an edge feature facasinput and infers whether the corresponding edge grasp willsucceed.C.SO(3) Invariance of Edge Grasp NetworkIn Section II, we noted that the grasp quality functionΦ(P, α)is invariant to translation and rotation, i.e. Φ(g·P, g·α) = Φ( P, α)for arbitrary g∈SE(3) . As presented above, theEdge Grasp Network is invariant to translation because eachSais centered at the approach point pa(we translate pato theorigin of the world frame). However, additional methodologyis required to create invariance to rotations. Rotational invari-ance allows the model to generalize grasp knowledge fromone orientation to another. We enable rotational invariancewith two different approaches. The first approach is to applydata augmentation on Sato learn SO(3) invariance duringtraining. Our second approach is to use an SO(3) -equivariantmodel, Vector Neurons [5]. Vector Neurons can be appliedto nearly any neural model architecture by encoding the R3along which SO(3) acts as a separate tensor axis. As we showin Section IV-C, leveraging SO(3) symmetries is beneficial tolearn a grasp function.IV. S IMULATIONSWe benchmarked our method in simulation against threestrong baselines, PointNetGPD [14], VGN [2], and GIGA [8].To make the comparison as fair as possible, we used the samesimulator developed by Breyer et al. [2] and used by Jiang etal. [8]. There are two types of simulated grasp environments,PACKED and PILED . In PACKED , objects are placed randomlyin an upright configuration in close proximity, e.g. as shownin Figure 3(a). In PILED , objects are dumped randomly froma box into a pile.A. Experimental Protocol:We evaluate our model over several rounds of testing.During each round, a pile or packed scene with 5 test objectsis generated inside of a 30×30×30 cm3workspace and thesystem begins grasping one object at a time. Prior to eachgrasp, we take a depth image of the scene from a directionabove the table to extract the point cloud or TSDF, and passit to the model. After receiving grasp scores from the model,we execute the grasp with the highest quality score. A roundof testing ends when either all objects are cleared or twoFig. 3. Left: the packed scenario; Right: the pile scenario.consecutive grasp failures occur. Performance is measuredover 100 simulation rounds with 5 different random seedsin terms of: 1) Grasp Success Rate (GSR =#successful grasps#total grasps);and 2) Declutter Rate (DR =#grasped objects#total objects). The results arereported in Table I. Detailed description of the baselines andtraining could be found in Appendix VIII-G and VIII-F.TABLE I. Quantitative results of clutter removal. Edge-sample randomlysample edges that do not collide with the table. EdgeGraspNet is the version ofour method trained with data augmentation. VN-EdgeGraspNet is the versionwith Vector Neurons. 
GIGA-High query at a higher resolution of 60×60×60.Method Packed PileGSR (%) DR (%) GSR (%) DR (%)PointNetGPD 79.3±1.8 82 .5±2.9 75 .6±2.3 77 .0±2.8VGN 80.2±1.6 86 .2±2.0 64 .9±2.2 69 .1±3.2GIGA 85.3±1.9 91 .2±1.7 69 .9±1.8 75 .2±2.2GIGA-High 88.5±2.0 93 .9±1.4 74 .1±1.5 80 .1±0.5Edge-Sample 44.0±4.0 39 .7±4.5 40 .2±2.5 30 .9±3.2EdgeGraspNet 92.0±1.4 94 .8±0.8 89 .9±1.8 92 .8±1.6VN-EdgeGraspNet 92.3±1.2 95 .2±0.6 92 .3±1.5 93 .5±1.8Method PointNetGPD VGN GIGA GIGA-High EdgeGraspNet VN-EdgeGraspNet# of Parameters 1.6 M 0.3 M 0.6 M 0.6 M 3.0 M 1.7 MInference time 382 ms 10 ms 21 ms 50 ms 28 ms 89 msTABLE II. Number of parameters and inference time for proposed methodsand baselines. Evaluated on one NVIDIA-GeForce RTX 3090.B. Results Analysis:We draw several conclusions from Table I. First, our sam-ple strategy unadorned with grasp quality inference (Edge-Sample) already performs with a grasp success rate of between40% and 44%. This suggests our edge grasp representation andsample strategy provide a helpful bias. Second, both Edge-GraspNet and VN-EdgeGraspNet outperform all the baselinesin all performance categories by a significant margin, particu-larly in the P ILEcategory. Third, the performance gap betweenthe packed and piled scenarios is smaller for our method thanthat for the baselines, which suggests that our model adapts todifferent object configurations better. Finally, one concern ofmost sampled-based methods is the inference time since theyneed to evaluate each grasp individually. However, our methodtakes use of the shared global features and could achieve a real-time inference time. Detailed inference time analyses could befound in Appendix VIII-H.C. Vector Neurons and Data Augmentation:To investigate the role of SO(3) invariance, we comparedour base version of EdgeGraspNet with a variation thatomits data augmentation (EdgeGraspNet-NoAug) and VN-EdgeGraspNet.Fig. 4. Test loss functions showing theeffect of data augmentation and VectorNeurons.As shown in Figure 4, theVector Neurons version per-forms best and learns fastest,and the base EdgeGrasp-Net converges to approxi-mately the same level. How-ever, without either VectorNeurons or data augmenta-tion, the model overfits. Thisdemonstrates that leveragingSO(3) symmetry is beneficialto learning the grasp function.D. Ablation study on cropping Sa(a) (b)Fig. 5. Ablation Study on cropping Sa. Left Figure: Test loss v.s. Epoch; Right Figure:Test Accuracy v.s. Epoch. The results show the effect of cropping Sa.We compare our EdgeGrapNet with a variation that skipscropping point cloud around the approach point pa. Aftergetting the observed point cloud P, we build a KNN graphonPand feed it to ψdirectly to get the point features FP.Then, we extract the global feature gacorresponding to pafrom{fp∈FP|p∈Sa}. Instead of translating patothe origin of the world coordinate, we center P, the entireobserved point cloud, at the origin. Except for these variations,other operations are the same. Let’s denote the variationas EdgeGraspNet-NoBall. Figure 5 shows the results of ourmodel and the variation version. It indicates that implementingonSais better than implementing on P. There are somereasons why Sais better than P. First, Pis a special case ofSawhen we set the radius of the sphere as infinity. Second,Saincludes all the related points that affect the grasp qualitywithout redundant information. Last but not least, the invariantproperty on Sais more generalized than that on Pa. 
Given ag∈SO(3) , a grasp action α, and a grasp evaluation functionΨ, the invariance of EdgeGraspNet could be defined asΨ(g·Sa, g·α) = Ψ( Sa, α)However, EdgeGraspNet-NoBall could only be invariant torotations on the entire point cloud: Ψ(g·P, g·α) = Ψ( P, α),which is less generalized.V. E VALUATION ON A ROBOTIn this paper, we measure physical grasp performance inthree different setups with 4 object sets, as shown in Figure 7.Our model trained in simulation is directly implemented on areal robot.(a) (b)Fig. 6. Robot setup. Left: the robot takes a depth image of the scene from a randomviewpoint. Right: the robot grasps the red adversarial object from a localized graspablepart.VI. S ETUPWe used a UR5 robot equipped with a Robotiq-85 Gripper,as shown in Figure 6. An Occipital Structure Sensor wasmounted on the arm to capture the observation. Prior to eachgrasp, we move the sensor to a randomly selected viewpoint1(pointing toward the objects to be grasped, as shown inFigure 6(a)), take a depth image, and generate a point cloud.We detect and remove the table plane with RANSAC and wedenoise and downsample the point cloud using Open3D [29].For each observed point cloud, we sample 40 approach pointsand 2000 grasps total. After running inference, we filter outthe grasps with a grasp quality score below 0.9. As is theprocedure in [2] and [6], we select the highest (largest z-coordinate) above-threshold candidate for execution. We useMoveIt 2 to plan the motion of the robot arm. A grasp islabeled as a success only when the object(s) is picked andtransferred to the bin.A. ResultsHousehold Objects in the Packed and Pile Settings: Thisexperiment evaluates our method in the packed and piledsettings described in Section IV. In each round, 5 objects arerandomly selected from 10 objects. Table III reports graspsuccess rates and declutter rates from 16 rounds (80 objectstotal). GSRs vary between 91.7% and 93% – a result thatclosely matches our simulated results. It indicates the smallsim-to-real gap of our method.Method Packed PileGSR (%) DR (%) GSR (%) DR (%)EdgeGrasoNet 91.9 (80 / 87) 100 (80 / 80) 93.0 (80 / 86) 100 (80 / 80)VN-EdgeGraspNet 91.7 (78 / 85) 98.7 (79 / 80) 92.9 (79 / 85) 98.7 (79 / 80)TABLE III. Results of real-robot experiments for packed and piled graspsettings.Comparison with Zhu et al. [31] on test hard Objects:This experiment compares our method against the methodof Zhu et al. [31], a strong baseline from the literature. Ineach round, 10 objects are randomly selected and dumpedon the table. Table IV shows the results from 15 runs.VN-EdgeGraspNet outperforms [31] by about four percentagepoints both in terms of the grasp success rate and the declutterrate – a significant improvement against a strong baseline.1We randomly select a viewpoint and repeatedly use it.(a) (b) (c) (d)Fig. 7. Object sets and test configurations used for real robot experiments. From leftcolumn to right column: packed scene with 10 objects; pile scene with 10 objects; 20test hard objects [31]; 12 Berkeley adversarial objects [16].Method GSR (%) DR (%)Zhu et al. [31] 89.0 (138 / 155) 94.0 (141 / 150)EdgeGraspNet 91.8 (146 / 159) 98.0 (147 / 150)VN-EdgeGraspNet 93.6 (148 / 159) 98.6 (148 / 150)TABLE IV. Comparison with the method of Zhu et al. [31] using exactly thesame objects and setup.Comparison with [3] on the Berkeley Adversarial Pile: Wealso baselined our method using the 12 Berkeley AdversarialObjects described in [16], shown in Figure 7. Here, wecompare our method to the work of Cai et al. 
[3], called Volu-metric Point Network (VPN). Table V shows the performancecomparison. The results indicate that our method outperformsall the baselines. Our final grasp success rate is 84.4%, a verygood performance for the Berkeley adversarial object set.Method GSR (%) DR (%)Gualtieri et al. [6]* 70.91 (39 / 55) 97.5 (39 / 40)Breyer et al. [2]* 41.56 (32 / 77) 80 (32 / 40)Cai et al. [3]* 78.4 (40 / 51) 100 (40 / 40)EdgeGraspNet 84.4 (38 / 45) 95.0 (38 / 40)VN-EdgeGraspNet 83.0 (40 / 48) 100 (40 / 40)TABLE V. Comparison with VPN [3], GPD [6], and VGN [2] for the BerkeleyAdversarial Objects in a pile setting. We performed five rounds of graspingwith piles of eight objects in each. * Results for VPN [3], GPD [6], andVGN [2] are copied directly from [3].VII. C ONCLUSIONThis paper proposes a novel edge representation in the 6-DoF grasp detection problem. By formulating the grasp posewith an approach point, a contact point, and its surface normal,we represent edge grasps by local features of contacts andglobal features of the related points. We explore the SE(3)symmetry of our representation and propose EdgeGraspNetand VN-EdgeGraspNet to leverage SE(3) invariance in twodifferent ways. Finally, We evaluate our models on varioussimulated and real-world object sets against several strongbaselines. Experiments show the small sim-to-real gap, thehigh grasping success rate, and the generalization ability todifferent object sets of our method. A clear direction for futurework is to integrate more on-policy learning, which we believewould enable us to improve our performance.REFERENCES[1] Antonio Bicchi. On the closure properties of roboticgrasping. The International Journal of Robotics Re-search , 14(4):319–334, 1995.[2] Michel Breyer, Jen Jen Chung, Lionel Ott, Roland Sieg-wart, and Juan Nieto. V olumetric grasping network: Real-time 6 dof grasp detection in clutter. arXiv preprintarXiv:2101.01132 , 2021.[3] Junhao Cai, Jun Cen, Haokun Wang, and Michael YuWang. Real-time collision-free grasp pose detectionwith geometry-aware refinement using high-resolutionvolume. IEEE Robotics and Automation Letters , 7(2):1888–1895, 2022.[4] Berk Calli, Arjun Singh, Aaron Walsman, SiddharthaSrinivasa, Pieter Abbeel, and Aaron M Dollar. The ycbobject and model set: Towards common benchmarks formanipulation research. In 2015 international conferenceon advanced robotics (ICAR) , pages 510–517. IEEE,2015.[5] Congyue Deng, Or Litany, Yueqi Duan, Adrien Poule-nard, Andrea Tagliasacchi, and Leonidas J Guibas. Vec-tor neurons: A general framework for so (3)-equivariantnetworks. In Proceedings of the IEEE/CVF InternationalConference on Computer Vision , pages 12200–12209,2021.[6] Marcus Gualtieri, Andreas Ten Pas, Kate Saenko, andRobert Platt. High precision grasp pose detection indense clutter. In 2016 IEEE/RSJ International Confer-ence on Intelligent Robots and Systems (IROS) , pages598–605. IEEE, 2016.[7] Haojie Huang, Dian Wang, Robin Walter, and RobertPlatt. Equivariant transporter network. arXiv preprintarXiv:2202.09400 , 2022.[8] Zhenyu Jiang, Yifeng Zhu, Maxwell Svetlik, Kuan Fang,and Yuke Zhu. Synergies between affordance and geom-etry: 6-dof grasp detection via implicit representations.arXiv preprint arXiv:2104.01542 , 2021.[9] Daniel Kappler, Jeannette Bohg, and Stefan Schaal.Leveraging big data for grasp planning. In 2015 IEEEinternational conference on robotics and automation(ICRA) , pages 4304–4311. IEEE, 2015.[10] Alexander Kasper, Zhixing Xue, and R ̈udiger Dill-mann. 
The kit object models database: An object modeldatabase for object recognition, localization and manip-ulation in service robotics. The International Journal ofRobotics Research , 31(8):927–934, 2012.[11] Diederik P Kingma and Jimmy Ba. Adam: A method forstochastic optimization. arXiv preprint arXiv:1412.6980 ,2014.[12] Sulabh Kumra, Shirin Joshi, and Ferat Sahin. Antipodalrobotic grasping using generative residual convolutionalneural network. In 2020 IEEE/RSJ International Con-ference on Intelligent Robots and Systems (IROS) , pages9626–9633. IEEE, 2020.[13] Ian Lenz, Honglak Lee, and Ashutosh Saxena. Deeplearning for detecting robotic grasps. The InternationalJournal of Robotics Research , 34(4-5):705–724, 2015.[14] Hongzhuo Liang, Xiaojian Ma, Shuang Li, MichaelG ̈orner, Song Tang, Bin Fang, Fuchun Sun, and JianweiZhang. Pointnetgpd: Detecting grasp configurations frompoint sets. In 2019 International Conference on Roboticsand Automation (ICRA) , pages 3629–3635. IEEE, 2019.[15] Jeffrey Mahler, Jacky Liang, Sherdil Niyaz, MichaelLaskey, Richard Doan, Xinyu Liu, Juan Aparicio Ojea,and Ken Goldberg. Dex-net 2.0: Deep learning to planrobust grasps with synthetic point clouds and analyticgrasp metrics. arXiv preprint arXiv:1703.09312 , 2017.[16] Jeffrey Mahler, Matthew Matl, Vishal Satish, MichaelDanielczuk, Bill DeRose, Stephen McKinley, and KenGoldberg. Learning ambidextrous robot grasping poli-cies. Science Robotics , 4(26):eaau4984, 2019.[17] Douglas Morrison, Peter Corke, and J ̈urgen Leitner.Closing the loop for robotic grasping: A real-time,generative grasp synthesis approach. arXiv preprintarXiv:1804.05172 , 2018.[18] Arsalan Mousavian, Clemens Eppner, and Dieter Fox.6-dof graspnet: Variational grasp generation for objectmanipulation. In Proceedings of the IEEE/CVF Inter-national Conference on Computer Vision , pages 2901–2910, 2019.[19] Vinod Nair and Geoffrey E Hinton. Rectified linear unitsimprove restricted boltzmann machines. In Icml, 2010.[20] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas JGuibas. Pointnet: Deep learning on point sets for 3dclassification and segmentation. In Proceedings of theIEEE conference on computer vision and pattern recog-nition , pages 652–660, 2017.[21] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas JGuibas. Pointnet++: Deep hierarchical feature learningon point sets in a metric space. Advances in neuralinformation processing systems , 30, 2017.[22] Yuzhe Qin, Rui Chen, Hao Zhu, Meng Song, Jing Xu,and Hao Su. S4g: Amodal single-view single-shot se (3)grasp detection in cluttered scenes. In Conference onrobot learning , pages 53–65. PMLR, 2020.[23] Anthony Simeonov, Yilun Du, Andrea Tagliasac-chi, Joshua B Tenenbaum, Alberto Rodriguez, PulkitAgrawal, and Vincent Sitzmann. Neural descriptor fields:Se (3)-equivariant object representations for manipula-tion. In 2022 International Conference on Robotics andAutomation (ICRA) , pages 6394–6400. IEEE, 2022.[24] Arjun Singh, James Sha, Karthik S Narayan, TudorAchim, and Pieter Abbeel. Bigbird: A large-scale 3ddatabase of object instances. In 2014 IEEE internationalconference on robotics and automation (ICRA) , pages509–516. IEEE, 2014.[25] Andreas ten Pas, Marcus Gualtieri, Kate Saenko, andRobert Platt. Grasp pose detection in point clouds. TheInternational Journal of Robotics Research , 36(13-14):1455–1473, 2017.[26] Dian Wang, Robin Walters, Xupeng Zhu, and RobertPlatt. 
Equivariant qlearning in spatial action spaces.InConference on Robot Learning , pages 1713–1723.PMLR, 2022.[27] Chaozheng Wu, Jian Chen, Qiaoyu Cao, Jianchi Zhang,Yunxin Tai, Lin Sun, and Kui Jia. Grasp proposalnetworks: An end-to-end solution for visual learningof robotic grasps. Advances in Neural InformationProcessing Systems , 33:13174–13184, 2020.[28] Binglei Zhao, Hanbo Zhang, Xuguang Lan, Haoyu Wang,Zhiqiang Tian, and Nanning Zheng. Regnet: Region-based grasp network for end-to-end grasp detection inpoint clouds. In 2021 IEEE International Conference onRobotics and Automation (ICRA) , pages 13474–13480.IEEE, 2021.[29] Qian-Yi Zhou, Jaesik Park, and Vladlen Koltun. Open3d:A modern library for 3d data processing. arXiv preprintarXiv:1801.09847 , 2018.[30] Xupeng Zhu, Dian Wang, Ondrej Biza, Guanang Su,Robin Walters, and Robert Platt. Sample efficientgrasp learning using equivariant models. Proceedingsof Robotics: Science and Systems (RSS) , 2022.[31] Xupeng Zhu, Dian Wang, Ondrej Biza, Guanang Su,Robin Walters, and Robert Platt. Sample efficientgrasp learning using equivariant models. arXiv preprintarXiv:2202.09468 , 2022.VIII. A PPENDIXA. Grasp SamplingEdge Grasp Network enables us to evaluate a large numberof edge grasps that share a single approach point with a singleforward pass through the model. However, each differentapproach point necessitates evaluating the model separately.Therefore we adopt the following grasp sample strategy. First,we sample a small number of approach points Pa⊂P. Theseapproach points can be sampled uniformly at random fromthe cloud, or they can be focused on parts of the cloud wherea grasp is preferred. Then, we evaluate the model once forall approach points by forming a minibatch of |Pa|inputsand performing a single forward pass. The output of this is aset of sets of edge grasp features, F(ac)1, F(ac)2, . . . , F (ac)|Pa|.One can take the union of these sets, sample medge graspsuniformly at random or select grasps with preferred gripperapproach directions and gripper contact locations, and thenrun the grasp classifier on these sampled grasps to producethe final output.B. Simulator Setting and Grasp LabelHere, we provide a more detailed description of our sim-ulator settings. To generate the training data, we selected arandom number of objects from training object sets. We set themass of each object as 0.5kg and the friction ratio between thegripper and the object as 0.75. We label up to 2000 edge graspcandidates per scene by attempting grasps in simulation. Tosample 2000 grasps, we sample 32 approach points from theobserved point clouds through Farthest Point Sampling. Edgegrasps whose minimum zvalue (the height) is smaller thanthe height of the table are filtered out to avoid colliding withthe table. A True label of a grasp candidate must satisfy thefollowing conditions: 1) the gripper should not collide withany objects when moving from the “prergrasp“ pose to thegrasp pose; 2) the object must be held by the gripper after asequence of gripper shaking motions.C. ModelWe implemented the Edge Grasp Network model describedin Section III-B. The input to the model is a downsampledpoint cloud created by voxelizing the input with a 4mm voxeldimension. The PointNetConv layers in ψare implemented us-ing a KNN graph with k= 16 , i.e. with 16 nearest neighbors.ψis implemented as a sequence of three PointNetConv layerswith a 2-layer MLP as the message passing function. The graspclassifier is implemented as a 4-layer MLP with ReLUs [19]and a sigmoid layer at the end. 
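To make the classifier head described above concrete, the following is a minimal PyTorch sketch of a 4-layer MLP with ReLUs and a terminal sigmoid; the hidden width and the edge-feature dimension are illustrative assumptions, since the exact sizes are not listed here.

```python
import torch
import torch.nn as nn

class GraspClassifier(nn.Module):
    """Hypothetical grasp-quality head: 4 linear layers with ReLUs and a sigmoid."""

    def __init__(self, feat_dim: int = 256, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # grasp-quality score in [0, 1]
        )

    def forward(self, edge_features: torch.Tensor) -> torch.Tensor:
        # edge_features: (num_sampled_grasps, feat_dim) -> (num_sampled_grasps,)
        return self.mlp(edge_features).squeeze(-1)
```

In use, the per-grasp edge features produced by the shared encoder would be batched and scored in a single forward pass, which is what makes the real-time inference discussed in Appendix VIII-H possible.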
We evaluate both conventionaland Vector Neuron versions of our model in simulated andreal-robot experiments.D. Data AugmentationExtensive data augmentation is applied to the conventionalversion of our model to force it to learn the SO(3) invariancefrom training. Before loading the point cloud Pfrom the train-ing dataset, we randomly sample a g∈SO(3) to rotate P. Thisresults in rotations on the 32 cropped point clouds correspond-ing to each approach point, i.e., {g·Sa1, g·Sa2, . . . , g ·Sa32}.Since Sais centered at pa, we then translate pato the origin. Abatch of 32 rotated and translated Sais fed to our model as theinput during training. Since the Vector Neurons version of ourmodel obtains SO(3) invariance by mathematical constraint,in this case only a translation is applied to each Sa.E.SO(3) Equivariance to SO(3) InvarianceBased on Vector Neurons [5], we implement the equivairantPointNetConv to realize the SO(3) equivariant feature. Wemaintain the equivariance of our network until getting the edgefeature fac. Invariance is a speical case of equivariance andcan be achieved by multiplying a matrix Tac∈R3×3generatedfrom facby a network:(facR)(TacR)⊤=facRR⊤Tac⊤=facTac⊤(2)Equation 2 transforms the SO(3) -equivairant edge feature toSO(3) -invariant edge feature. Combined with the translationalinvariance described in Section IV-D, we finally realize theSE(3) invariance of edge features. Once the edge features areSE(3) invariant, the entire network becomes SE(3) invariant,i.e., the invariant feature could be fed to a conventional MLPwithout breaking its invariant property.F . TrainingThe grasp simulator developed by Breyer et al. [2] includesa Franka-Emika Panda gripper. There are 303 training objectsand 40 test objects drawn collectively from YCB [4], Big-Bird [24] and other sources [10, 9]. We created training databy generating both packed and piled scenes with a randomnumber of objects in simulation, we add pixelwise Gaussiannoise ( N ∼ (0,0.001) ) to the depth image, extract the pointcloud or TSDF (Truncated Signed Distance Function) fromthe depth image, voxelizing the point cloud with 4-millimetervoxel, generating up to 2000 edge grasp candidates per scene,and labeling each of those candidates by attempting a graspin simulation. To generate the 2000 edge grasp candidates,we sample 32 approach points uniformly at random from thevoxelized cloud. In total, we generated 3.36M labeled graspsbased on 3,317scenes, 85% of which were used for trainingand 15% were used for testing. We train our model with theAdam [11] optimizer and an initial learning rate of 10−4.The learning rate is reduced by a factor of 2 when the testloss has stopped improving for 6 epochs. It takes about 0.5seconds to complete one SGD step with a batch size of 32 ona NVIDIA Tesla V100 SXM2 GPU. We train the model for150 epochs and balance the positive and negative grasp labelsduring training. Both VN-EdgeGraspNet and EdgeGraspNetconverge in less than 10 hours.G. Baselines for Simulation ExperimentsWe compare our method against three strong baselinesin Section IV. PointNetGPD [14] is a sample-based methodthat represents a candidate grasp pose by the canonicalizedpoints inside the gripper and infers grasp quality using aPointNet [20] model. VGN [2] (V olumetric Grasping Net-work) takes a TSDF of the workspace as input and outputsthe grasp orientation and quality at each voxel. 
GIGA [8](Grasp detection via Implicit Geometry and Affordance) usesa structured implicit neural representation from 2D featuregrids and generates the grasp orientation and quality for eachpoint trained with a auxiliary occupancy loss. Both VGN andGIGA receive a 40×40×40TSDF based on output from asingle depth image. We also evaluate a variation of GIGAwith a 60×60×60resolution TSDF, which we refer toasGIGA-High . We use the pretrained models2of VGN andGIGA from Jiang et al. [8] and uniformly sample 64 approachpoints and 4000 grasps for our method and PointNetGPD . Asshown in Table II, the pretrained VGN and GIGA models havefewer parameters than our method due to their TSDF input.While our model requires more parameters to operate on pointclouds, all compared models are relatively lightweight.H. Performance ConsiderationsInference Time: Table II shows the time needed by variousmodels to infer grasp qualities. At 28ms per 4,000 grasps, ourEdgeGraspNet model is slightly slower than both VGN andGIGA but still much faster than PointNetGPD and GIGA-High. The Vector Neurons version of out model is about threetimes slower than the EdgeGraspNet model.Method Packed PileGSR (%) DR (%) GSR (%) DR (%)EdgeGraspNet (16-1k) 88.5±1.7 92.6±1.4 84.8±2.1 86.7±3.3EdgeGraspNet (32-2k) 91.4±1.5 94.0±2.0 89.4±1.3 91.2±2.5EdgeGraspNet (64-4k) 92.0±1.4 94.8±0.8 89.9±1.8 92.8±1.6VN-EdgeGraspNet (16-1k) 89.7±2.4 92.2±1.6 87.1±0.8 88.5±2.3VN-EdgeGraspNet (32-2k) 91.4±1.3 93.8±2.0 89.3±0.5 92.1±1.8VN-EdgeGraspNet (64-4K) 92.3±1.2 95.2±0.6 92.3±1.5 93.5±1.8TABLE VI. Grasp performance for different numbers of approach points (16,32, and 64) and grasp samples (1000, 2000, and 4000).TABLE VII. Inference time v.s. # of approach points. We sample differentnumbers of approach points (16, 32 and 64) with the same number (2000) ofedge grasps. Evaluated on one NVIDIA-GeForce RTX 3090.16-2k 32-2k 64-2kEdgeGraspNet 9.6 ms 15.8 ms 27.4 ms32-500 32-1k 32-2kEdgeGraspNet 15.8 ms 15.7 ms 15.8 msTABLE VIII. Inference time v.s. # sampled edge grasps. We sample differentnumbers of edge grasps (500, 1000 and 2000) with the same number (32) ofapproach points. Evaluated on one NVIDIA-GeForce RTX 3090.Performance of different sample sizes: The speed and perfor-mance of our model is closely tied to the number of approachpoints (which determines batch size) and the number ofclassified grasps. Table VI shows that fewer approach pointsand grasp samples reduce grasp success somewhat, but not by2Our trained models for VGN and GIGA on the dataset described above inSection VIII-F did not perform as well as the pretrained models from Jiang etal.[8]. It is probably because they train separate models for the PACKED andPILE scenarios with a larger dataset (4M labeled grasps for each scenario).We used their pretained models to do the evaluations.a huge amount. As shown in Table VII, when we double thenumber of approach points, the inference time increases about1.7 times. As shown in Table IX, when we fix the numberof approach points and increase the sampled edge grasps, theinference time almost does not change.I. Failure Case AnalysisMethod EdgeGraspNet VN-EdgeGraspNetGSR (%) DR (%) GSR (%) DR (%)Household Packed 91.9 (80 / 87) 100 (80 / 80) 91.7 (78 / 85) 98.7 (79 / 80)Household Pile 93.0 (80 / 86) 100 (80 / 80) 92.9 (79 / 85) 98.7 (79 / 80)Test Hard objects 91.8 (146/159) 98.0 (147/150) 93.6 (148/159) 98.6 (148/150)Berkeley Adversarial 84.4 (38/45) 95.0 (38/40) 83.0 (40/48) 100 (40/40)TABLE IX. Summary of real Robot experiments. 
We report grasp successrates (GSR) and declutter rates (DR).We summarized the results of the real-robot experimentsin Table IX. Almost half of our failures are caused bycolliding with other objects when executing the grasp. It couldbe mitigated by considering collision when selecting grasps.However, there are some other cases we think readers mightwant to notice. 1). Occlusion due to partial observation, e.g.,a single camera view could only capture a plane of a complexobject. 2). Sensor noise. Our model is robust to small noisesand leverage the bilateral symmetry of a parallel jaw gripper,i.e., a flip of the calculated surface normal3results in a 180◦rotation of the gripper along the approach direction. However,if the observation is largely distorted, the proposed edge graspcould be inaccurate since our sampling strategy is closelyrelated to the observed points. There is a trade-off betweenthe precise grasping and the robust grasping. 3). Grasp label oftraining data. Our binary label of the training data is describedin Section VIII-B, but it does not prohibit true dangerousgrasps. A dangerous grasp could be defined as there is a largechange of the pose of the target object when being graspedregardless a successful outcome or not. We believe the truedangerous grasp could cause false-positive predictions whenthe observation is noisy. Last but no least, failures are thestepping stones to better algorithms in robotics.J. Visualization of GraspsWe shows grasp candidates found using our algorithm inFigure 8. The first two rows show three examples of randomlysampled grasp poses for each observed object. The diversityof grasp poses demonstrates our model can provides a highcoverage of possible stable grasps. The last row of Figure 8shows five grasps that share the same contact point. It indicatesour model is beneficial to grasping tasks involved with specificcontact locations.IX. R ELATED WORKA. 6-DoF gasping methodsThere are two main types of 6-DoF grasping methodsin recent research. Sample-based methods like GPD [25],3A flip of the calculated surfaced normal happens frequently.(a) (b) (c) (d)Fig. 8. Illustrations of grasp candidates found using our algorithm. The first two rows show three examples of a gripper placed at randomly sampled grasp candidate configurations.The last row shows five grasps that share the same contact point.PoinetNetGDP [14], GraspNet [18] that are often comprised ofa grasp sampler module and a grasp evaluator module. Thesemethods often require long training time and execution timesince each grasp is represented and evaluated individually. Incontrast, our method uses shared features to represent differentgrasps and achieve more computation efficiency. Element-wise prediction methods include point-based methods [3, 22,27, 28] and volumetric-based methods [2, 8]. They estimategrasp qualities for all interesting points or voxels with a singlefeed-forward propagation. For instance, S4G [22] generateseach point feature through PointNet++ [21] and predicts thegrasp quality and the grasp pose together. REGNet [28]considers the geometry of radius sphere around the sampledpoints and regresses the orientations. However, the grasp distri-bution is a multi-modal function and regression methods onlypredict one grasp pose for a single point, which may causeambiguity when multiple graspable poses are valid in thatposition. Classification methods can generate the distributionsover multiple grasps at a single point, but copious amounts ofdata are often required. 
Volumetric-based methods [2, 8] use well-structured voxels instead of an unordered set of points. The memory requirements for voxel grids or SDFs are cubic in the resolution of the grid and therefore severely limit the resolution at which the method can be applied.

B. Grasp Pose Representation
Grasp representation matters in evaluating and refining grasp poses. Most sample-based methods have a clear representation of the grasp pose. GPD [25] projects the points around the gripper into canonical planes; PointNetGPD [14] feeds the points inside the gripper to PointNet; GraspNet [18] represents the grasp pose with a set of points on the gripper. On the other hand, element-wise methods [3, 22, 27, 28, 2, 8] often avoid representing grasps explicitly. Since the relative pose between the gripper and the point/voxel is unclear, they must regress or classify some elements of the grasp pose. Our method has a clear representation of the grasp pose and satisfies the multi-modal property of the grasp distribution and the friction constraint [1] at the contact point.

C. Symmetries in Manipulation
Symmetries and equivariance have been shown to improve learning efficiency and generalization ability in many manipulation tasks [31, 26, 7, 23]. Zhu et al. [31] decouple rotation and translation symmetries to enable the robot to learn a planar grasp policy within 1.5 hours; Huang et al. [7] achieve better sample efficiency and faster convergence in planar pick-and-place tasks with the use of Cn × Cn equivariance; Simeonov et al. [23] use Vector Neurons to obtain SE(3)-equivariant object representations so that the model can manipulate objects in the same category from a few training demonstrations. Our method also leverages SE(3) symmetry to learn faster and generalize better on 6-DoF grasping. |
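As a numerical illustration of the invariance construction in Appendix E (Equation 2), the sketch below checks that an SO(3)-equivariant feature multiplied by the transpose of an equivariantly produced 3×3 matrix is unchanged under rotation; the channel count and the simple channel-mixing map standing in for the network that produces Tac are assumptions, not the paper's implementation.

```python
import torch

C = 64                                     # assumed number of vector channels
f_ac = torch.randn(C, 3)                   # equivariant feature: C vectors in R^3
W = torch.randn(3, C)                      # channel-mixing weights: T_ac = W @ f_ac is equivariant
R = torch.linalg.qr(torch.randn(3, 3)).Q   # random orthogonal matrix
if torch.det(R) < 0:                       # flip one column to get a proper rotation
    R[:, 0] = -R[:, 0]

def invariant(f):
    T = W @ f                              # (3, 3); transforms as T @ R when f -> f @ R
    return f @ T.T                         # (C, 3); (f R)(T R)^T = f R R^T T^T = f T^T

print(torch.allclose(invariant(f_ac), invariant(f_ac @ R), atol=1e-4))  # True
```

Combined with centering Sa at the approach point for translation invariance (Section IV-D), this is the mechanism that makes the edge features, and hence the whole classifier, SE(3) invariant.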
3LnP1W8pKm | Euclidean Equivariant Models forGenerative Graphical Inverse KinematicsOliver Limoyo,1;yFilip Mari ́c,1;2;yMatthew Giamou,1Petra Alexson,1Ivan Petrovi ́c,2and Jonathan Kelly1yDenotes equal contribution.1Institute for Aerospace Studies, University of Toronto,2Laboratory for Autonomous Systems and Mobile Robotics, University of ZagrebAbstract —Quickly and reliably finding accurate inverse kine-matics (IK) solutions remains a challenging problem for roboticmanipulation. Existing numerical solvers typically produce a sin-gle solution only and rely on local search techniques to minimizea highly nonconvex objective function. Recently, learning-basedapproaches that approximate the entire feasible set of solutionshave shown promise as a means to generate multiple fast andaccurate IK results in parallel. However, existing learning-basedtechniques have a significant drawback: each robot of interestrequires a specialized model that must be trained from scratch.To address this shortcoming, we investigate a novel distance-geometric robot representation coupled with a graph structurethat allows us to leverage the flexibility of graph neural networks(GNNs). We use this approach to train a generative graphicalinverse kinematics solver (GGIK) that is able to produce a largenumber of diverse solutions in parallel while also generalizingwell—a single learned model can be used to produce IK solutionsfor a variety of different robots. The graphical formulationelegantly exposes the symmetry and Euclidean equivariance ofthe IK problem that stems from the spatial nature of robotmanipulators. We exploit this symmetry by encoding it into thearchitecture of our learned model, yielding a flexible solver thatis able to produce sets of IK solutions for multiple robots.I. I NTRODUCTIONRobotic manipulation tasks are naturally defined in termsof end-effector poses (for, e.g., bin-picking or path follow-ing). However, the configuration of a manipulator is typicallyspecified in terms of joint angles, and determining the jointconfiguration(s) that correspond to a given end-effector poserequires solving the inverse kinematics (IK) problem. Forredundant manipulators (i.e., those with more than six degreesof freedom, or DOF), target poses may be reachable byan infinite set of feasible configurations. While redundancyallows high-level algorithms such as motion planners to chooseconfigurations that best fit the overall task, it makes solvingIK substantially more involved.Since the full set of IK solutions cannot, in general, bederived analytically for redundant manipulators, individualconfigurations reaching a target pose are found by locallysearching the configuration space using numerical optimiza-tion methods and geometric heuristics. These limitations havemotivated the use of learned models that approximate theentire feasible set of solutions. In terms of success rate, learnedmodels that output individual solutions are able to competewith the best numerical IK solvers when high accuracy is notrequired [ 19]. 
Data-driven methods are also useful for integrat-ing abstract criteria such as “human-like” poses or motions [ 2].Generative approaches [ 8,15] have demonstrated the ability torapidly produce a large number of approximate IK solutionsand even model the entire feasible set for specific robots [ 1].Unfortunately, these learned models, parameterized by deepneural networks (DNNs), require specific configuration andend-effector input-output vector pairs for training (by design).In turn, it is not possible to generalize learned solutions torobots that vary in link geometry or DOF. Ultimately, thisdrawback limits the utility of learning for IK over well-established numerical methods that are easier to implementand generalize [ 3].In this paper, we describe a novel generative inverse kine-matics solver and explain its capacity to simultaneously repre-sent general (i.e., not tied to a single robot manipulator modelor geometry) IK mappings and to produce approximations ofentire feasible sets of solutions. In contrast to existing DNN-based approaches [ 1,8,11,15,19], we explore a new path to-wards learning generalized IK by adopting a graphical modelof robot kinematics [ 13,14]. This graph-based descriptionallows us to make use of graph neural networks (GNNs) tocapture varying robot geometries and DOF within a singlemodel. Furthermore, crucial to the success of our method, thegraphical formulation exposes the symmetry and Euclideanequivariance of the IK problem that stems from the spatialnature of robot manipulators. We exploit this symmetry byencoding it into the architecture of our learned model, whichwe call GGIK (for generative graphical inverse kinematics ),to produce accurate IK solutions.II. G RAPH REPRESENTATION FOR INVERSE KINEMATICSThe mapping IK :T !C from task spaceTto config-uration spaceCdefines the inverse kinematics of the robot,connecting a target pose T2SE(3) to one or more feasibleconfigurations 2C. In this paper, we consider the associatedproblem of determining this mapping for manipulators withn > 6 DOF (also known as redundant manipulators), whereeach end-effector pose corresponds to a set of configurationsIK(T) =f2CjFK() =Tg (1)that we refer to as the full set of IK solutions.We eschew the common angle-based representation of theconfiguration space in favour of a distance-geometric modelof robotic manipulators comprised of revolute joints [ 14]. Thisallows us to represent configurations as complete graphsRobot Model(a)Distance Geometry (b)Structure Graph (c)Pose Goal (d)Fig. 1: The process of defining an IK problem as an incomplete or partial graph eGof inter-point distances. (a) Conventional forwardkinematics model parameterized by joint angles and joint rotation axes. (b) The point placement procedure for the distance based description,first introduced in [ 13]. Note that the four distances between points associated with pairs of consecutive joints remain constant regardlessof the of configuration. (c) A structure graph of the robot based on inter-point distances. (d) Addition of distances in red describing therobot end-effector pose using auxiliary points to define the base coordinate system, completing the graphical IK problem description. Allconfigurations of the robot reaching this end-effector pose will result in a partial graph of distances shown in (c) and (d).TrainingOutput Graph Output GraphLatent NodeGraphPrior LatentNode GraphPrior LatentNode GraphInferenceFig. 2: Our GGIK solver is based on the CV AE framework. 
GNN_enc encodes a complete manipulator graph into a latent graph representation and GNN_dec "reconstructs" it. The prior network, GNN_prior, encodes the partial graph into a latent embedding that is near the embedding of the full graph. At inference time, we decode the latent embedding of a partial graph into a complete graph to generate a solution.

G = (V, E). The edges E are weighted by distances d between a collection of N points p = \{p_i\}_{i=1}^N \in \mathbb{R}^{N \times D} indexed by vertices V, where D \in \{2, 3\} is the workspace dimension. The coordinates of points corresponding to these distances are recovered by solving the distance geometry problem (DGP):

Distance Geometry Problem ([12]). Given an integer D > 0, a set of vertices V, and a simple undirected graph G = (V, E) whose edges \{u, v\} \in E are assigned non-negative weights \{u, v\} \mapsto d_{u,v} \in \mathbb{R}_+, find a function p : V \to \mathbb{R}^D such that the Euclidean distances between neighbouring vertices match their edges' weights (i.e., \forall \{u, v\} \in E, \; \|p(u) - p(v)\| = d_{u,v}).

It was shown in [13] that any solution p \in DGP(G) may be mapped to a unique corresponding configuration \theta.¹ Crucially, this allows us to construct a partial graph \tilde{G} = (V, \tilde{E}), with \tilde{E} \subseteq E corresponding to distances determined by an end-effector pose T and the robot's structure (i.e., those common to all elements of IK(T)), where each p \in DGP(\tilde{G}) corresponds to a particular IK solution \theta \in IK(T). The generic procedure for constructing \tilde{G} is demonstrated for a simple manipulator in Fig. 1. A more detailed overview of the distance-geometric graph representation and graph construction is available in [13].

¹Up to any Euclidean transformation of p, since distances are invariant to such a transformation.

For a complete graph G, we define the GNN node features as a combination of point positions p = \{p_i\}_{i=1}^N \in \mathbb{R}^{N \times D} and general features h = \{h_i\}_{i=1}^N, where each h_i is a feature vector containing extra information about the node. We use a three-dimensional one-hot encoding, h_i \in \{0, 1\}^3 with \sum_{j=1}^{3} h_{i,j} = 1, that indicates whether the node defines the base coordinate system, a general joint or link, or the end-effector. Similarly, we define the M known point positions of the partial graph \tilde{G} as \tilde{p} = \{\tilde{p}_i\}_{i=1}^M \in \mathbb{R}^{M \times D} and set the remaining unknown N − M node positions to zero. The partial graph shares the same general features h as the complete graph. In both cases, the edge features are simply the corresponding inter-point distances between known node point positions, or are initialized to zero if unknown.

III. GENERATIVE GRAPHICAL INVERSE KINEMATICS

At its core, GGIK is a conditional variational autoencoder (CVAE) model [17] that parameterizes the conditional distribution p(G \mid \tilde{G}) using GNNs. By introducing an unobserved stochastic latent variable z, our generative model is defined as

p_\theta(G \mid \tilde{G}) = \int p_\theta(G \mid \tilde{G}, z)\, p_\theta(z \mid \tilde{G})\, dz,   (2)

where p_\theta(G \mid \tilde{G}, z) is the likelihood of the full graph, p_\theta(z \mid \tilde{G}) is the prior, and \theta are the learnable generative parameters.

Fig. 3: Sampled conditional distributions from GGIK for various robotic manipulators. From left to right: KUKA IIWA, Franka Emika Panda, Schunk LWA4D, Schunk LWA4P, and Universal Robots UR10. Note that the end-effector poses are nearly identical in all cases, highlighting kinematic redundancy. Our model is able to capture the discrete solution set for the two non-redundant robots as well.

The likelihood is given by

p_\theta(G \mid \tilde{G}, z) = \prod_{i=1}^{N} p_\theta(p_i \mid \tilde{G}, z_i), \quad \text{with} \quad p_\theta(p_i \mid \tilde{G}, z_i) = \mathcal{N}(p_i \mid \mu_i, I),   (3)

where p = \{p_i\}_{i=1}^N are the positions of all N nodes, z = \{z_i\}_{i=1}^N are the latent embeddings of each node, and \mu = \{\mu_i\}_{i=1}^N are the predicted means of the distribution of node positions.
We parametrize the likelihood distribution with a GNN decoder; that is, \mu is the output of GNN_dec(\tilde{G}, z). In practice, for the input of GNN_dec(\cdot), we concatenate each latent node with the respective position node features \tilde{p} of the original partial graph \tilde{G} when available, together with the general features h. If unavailable, we concatenate the latent nodes with the initialized point positions set to zero. The prior distribution is given by

p_\theta(z \mid \tilde{G}) = \prod_{i=1}^{N} p_\theta(z_i \mid \tilde{G}), \quad \text{with} \quad p_\theta(z_i \mid \tilde{G}) = \sum_{k=1}^{K} \pi_{k,i}\, \mathcal{N}(z_i \mid \mu_{k,i}, \mathrm{diag}(\sigma^2_{k,i})).   (4)

Here, we parameterize the prior as a Gaussian mixture model with K components. Each Gaussian is in turn parameterized by a mean \mu_k = \{\mu_{k,i}\}_{i=1}^N, diagonal covariance \sigma_k = \{\sigma_{k,i}\}_{i=1}^N, and a mixing coefficient \pi_k = \{\pi_{k,i}\}_{i=1}^N, where \sum_{k=1}^{K} \pi_{k,i} = 1, \; i = 1, \dots, N. We chose a mixture model to have an expressive prior capable of capturing the latent distribution of multiple solutions. We parameterize the prior distribution with a multi-headed GNN encoder GNN_prior(\tilde{G}) that outputs parameters \{\pi_k, \mu_k, \sigma_k\}_{k=1}^K.

Algorithm 1: GGIK
Parameters: \tilde{G}, T_goal, K, L
Result: Solution configurations \Theta^* \in \mathbb{R}^{K \times n_{joints}} with the lowest pose error.
  z_L \sim p(z \mid \tilde{G})                ▷ Sample L latents z from GNN_prior.
  p_L \sim p(p \mid \tilde{G}, z_L)           ▷ Get L solutions via GNN_dec.
  \Theta_L \leftarrow fromPoints(p_L)          ▷ Recover L configurations.
  \Theta^* \leftarrow selectSolution(T_goal, \Theta_L, K)   ▷ Choose best K.

The goal of learning is to maximize the marginal likelihood or evidence of the data as shown in Eq. 2. As is commonly done in the variational inference literature [9], we instead maximize a tractable evidence lower bound (ELBO):

\mathcal{L} = \mathbb{E}_{q_\phi(z \mid G)}[\log p_\theta(G \mid \tilde{G}, z)] - \mathrm{KL}(q_\phi(z \mid G) \,\|\, p_\theta(z \mid \tilde{G})),   (5)

where \mathrm{KL}(\cdot \| \cdot) is the Kullback–Leibler (KL) divergence and the inference model q_\phi(z \mid G) with learnable parameters \phi is

q_\phi(z \mid G) = \prod_{i=1}^{N} q_\phi(z_i \mid G), \quad \text{with} \quad q_\phi(z_i \mid G) = \mathcal{N}(z_i \mid \mu_i, \mathrm{diag}(\sigma^2_i)).   (6)

As with the prior distribution, we parameterize the inference distribution with a multi-headed GNN encoder, GNN_enc(G), that outputs parameters \mu = \{\mu_i\}_{i=1}^N and \sigma = \{\sigma_i\}_{i=1}^N. We summarize the full sampling procedure in Algorithm 1 and visualize samples of these IK solutions in Fig. 3. This procedure can be done quickly and in parallel on the GPU.

IV. E(n) EQUIVARIANCE AND SYMMETRY

We are interested in mapping partial graphs \tilde{G} into full graphs G. Once trained, our model maps partial point sets to full point sets f : \mathbb{R}^{M \times D} \to \mathbb{R}^{N \times D}, where f is a combination of the networks GNN_prior and GNN_dec applied sequentially. The point positions (i.e., p and \tilde{p}) of each node in the distance geometry problem contain underlying geometric relationships that we would like to preserve with our choice of architecture. Most importantly, the point sets are equivariant to the Euclidean group E(n) of rotations, translations, and reflections. Let S : \mathbb{R}^{M \times D} \to \mathbb{R}^{M \times D} be a transformation consisting of some combination of rotations, translations and reflections on the initial partial point set \tilde{p}. Then, there exists an equivalent transformation T : \mathbb{R}^{N \times D} \to \mathbb{R}^{N \times D} on the complete point set p such that

f(S(\tilde{p})) = T(f(\tilde{p})).   (7)

To leverage this structure or geometric prior in the data, we use E(n)-equivariant graph neural networks (EGNNs) [16] for GNN_dec, GNN_enc, and GNN_prior. The EGNN layer splits up the node features into an equivariant coordinate or position-based part and a non-equivariant part. We treat the positions
[deg]mean min max Q 1 Q3mean min max Q 1Q3KUKA 5.3 1.7 9.7 3.8 6.6 0.4 0.1 0.6 0.3 0.5Lwa4d 4.7 1.4 9.1 3.2 5.9 0.4 0.1 0.6 0.3 0.5Lwa4p 5.7 2.2 10.2 4.1 7.1 0.4 0.1 0.7 0.3 0.6Panda 12.3 3.2 25.5 7.9 15.9 1.0 0.2 1.8 0.7 1.3UR10 9.2 4.2 14.7 7.3 11.1 0.5 0.2 0.9 0.4 0.7UR10 with DT [ 19] 35.0 - - - - 16.0 - - - -Panda with IKFlow [ 1] 7.7 - - - - 2.8 - - - -Panda with IKNet [ 4] 31.0 - - 13.5 48.6 - - - - -TABLE I: Performance of GGIK on 2,000 randomly generated IK problems for a single model trained on five different robotic manipulators.Taking 32 samples from the learned distribution, the error statistics are presented as the mean and mean minimum and maximum error perproblem and the two quartiles of the distribution. Note that all solutions were produced by a single GGIK model. We include baseline resultsfrom various other models that were trained on a single robot type. Dashed results were unavailable.Model Name Err. Pos. [mm] Err. Rot. [deg] Test ELBOmean min max Q 1 Q3mean min max Q 1 Q3EGNN [ 16] 4.6 1.5 8.5 3.3 5.8 0.4 0.1 0.6 0.3 0.4 -0.05MPNN [ 6] 143.2 62.9 273.7 113.1 169.1 17.7 5.3 13.6 21.6 34.1 -8.3GAT [ 18] - - - - - - - - - - -12.41GCN [ 10] - - - - - - - - - - -12.42GRAPHsage [ 7] - - - - - - - - - - -10.5TABLE II: Comparison of different network architectures. EGNN outperforms existing architectures that are not equivariant in terms ofoverall accuracy and test ELBO. Dashed results are models with output point sets that were too far from a valid joint configuration anddiverged during the configuration reconstruction procedure.pand ~pas the equivariant portion and the general featureshas non-equivariant. As an example, a single EGNN layer lfrom GNN encis then defined as:mij=φe(hli;hlj;∥pliplj∥2)pl+1i=pli+C∑j̸=i(pliplj)φx(mij)mi=∑j̸=imijhl+1i=φh(hli;mi);(8)where, m2Rfmwith a message embedding dimensionoffm,φx:Rfm!R1,C=1N1divides the sum bythe number of elements, and φeandφhare typical edgeand node operations approximated by multilayer perceptrons(MLPs). For more details about the model and a proof of theequivariance property, we refer readers to [ 16].V. E XPERIMENTSWe evaluate GGIK’s capability to learn accurate solutionsand generalize within a class of manipulator structures, andinvestigate the importance of capturing the Euclidean equiv-ariance of the graphical formulation of inverse kinematics.A. Accuracy and GeneralizationIn Table I, we evaluate the accuracy of GGIK for a variety ofexisting commercial manipulators featuring different structuresand numbers of joints: the Kuka IIWA, Schunk LWA4D,Schunk LWA4P, Universal Robots UR10, and Franka EmikaPanda. We trained a single instance of GGIK on a total of2,560,000 IK problems uniformly distributed over all fivemanipulators. We compare GGIK to other learned IK baselines[1,4,19] that are trained specifically for each robot. GGIKachieves better or comparable accuracy to all baselines despitegeneralizing across multiple manipulator types.B. Ablation Study on the Equivariant Network ArchitectureWe conducted an ablation experiment to evaluate the im-portance of capturing the underlying E(n)equivariance ofthe distance geometry problem (Problem II) in our learningarchitecture. We compare the use of the EGNN network [ 16] tofour common and popular GNN layers that are not E(n)equiv-ariant: GRAPHsage [ 7], GAT [ 18], GCN [ 10] and MPNN [ 6].We match the number of parameters for each GNN architec-ture as closely as possible and keep all other experimentalparameters fixed. 
Out of the five different architectures thatwe compare, only the EGNN and MPNN output point setsthat can be successfully mapped to valid joint configurations.The equivariant EGNN model outperforms all other models interms of the ELBO value attained on a held-out test set.VI. C ONCLUSIONGGIK is a step towards learned “general” IK, that is,a solver (or initializer) that can provide multiple diversesolutions and can be used with any manipulator in a waythat complements or replaces numerical optimization. Thegraphical formulation of IK naturally leads to the use ofa GNN for learning, since the GNN can accept problemsfor arbitrary robots with different kinematic structures anddegrees of freedom. Our formulation also exposes the Eu-clidean equivariance of the problem, which we exploit byencoding it into the architecture of our learned model. Whileour architecture demonstrates a capacity for generalization andan ability to produce diverse solutions, GGIK outputs mayrequire post-processing via local optimization for applicationswith low error tolerances. As future work, we would liketo learn constrained distributions of robot configurations thataccount for obstacles in the task space and for self-collisions;obstacles can be easily incorporated in the distance-geometricformulation of IK [ 5,13].REFERENCES[1]Barrett Ames, Jeremy Morgan, and George Konidaris.IKFlow: Generating diverse inverse kinematics solutions.IEEE Intl. Conf. Robotics and Automation (ICRA) , 7(3):7177–7184, 2022.[2]A. Aristidou, J. Lasenby, Y . Chrysanthou, and A. Shamir.Inverse Kinematics Techniques in Computer Graphics:A Survey. Computer Graphics Forum , 37(6):35–58,September 2018.[3]Patrick Beeson and Barrett Ames. TRAC-IK: An open-source library for improved solving of generic inversekinematics. In 15th Intl. Conf. Humanoid Robots (Hu-manoids) , pages 928–935, 2015.[4]Raphael Bensadoun, Shir Gur, Nitsan Blau, and LiorWolf. Neural inverse kinematic. In Proc. 39th Intl.Conf. Machine Learning , volume 162 of Proceedings ofMachine Learning Research , pages 1787–1797, 2022.[5]Matthew Giamou, Filip Mari ́c, David M. Rosen, ValentinPeretroukhin, Nicholas Roy, Ivan Petrovi ́c, and JonathanKelly. Convex iteration for distance-geometric inversekinematics. IEEE Robot. Autom. Lett. , 7(2):1952–1959,2022.[6]Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley,Oriol Vinyals, and George E. Dahl. Neural messagepassing for quantum chemistry. In Proc. 34th Intl.Conf. Machine Learning , volume 70 of Proceedings ofMachine Learning Research , pages 1263–1272, 2017.[7]Will Hamilton, Zhitao Ying, and Jure Leskovec. Induc-tive representation learning on large graphs. Advances inNeural Information Processing Systems (NeurIPS) , 30,2017.[8]Chi-Kai Ho and Chung-Ta King. Selective inversekinematics: A novel approach to finding multiple so-lutions fast for high-DoF robotic. arXiv preprintarXiv:2202.07869 [cs.RO] , 2022.[9]Diederik P. Kingma and Max Welling. Auto-encodingvariational Bayes. In Yoshua Bengio and Yann LeCun,editors, Intl. Conf. Learning Representations (ICLR) ,2014.[10] Thomas N. Kipf and Max Welling. Semi-SupervisedClassification with Graph Convolutional Networks. InInt. Conf. Learning Representations , 2017.[11] Teguh Santoso Lembono, Emmanuel Pignat, JuliusJankowski, and Sylvain Calinon. Learning constraineddistributions of robot configurations with generative ad-versarial network. IEEE Robot. Autom. Lett. , 6(2):4233–4240, 2021.[12] Leo Liberti, Carlile Lavor, Nelson Maculan, and AntonioMucherino. 
Euclidean Distance Geometry and Applica-tions. SIAM Rev. , 56(1):3–69, January 2014.[13] Filip Mari ́c, Matthew Giamou, Adam W. Hall, SoroushKhoubyarian, Ivan Petrovi ́c, and Jonathan Kelly. Rie-mannian optimization for distance-geometric inversekinematics. IEEE Trans. Robotics , 38(3):1703–1722,2022.[14] J.M. Porta, L. Ros, F. Thomas, and C. Torras. A branch-and-prune solver for distance constraints. IEEE Trans.Robotics , 21:176–187, April 2005.[15] Hailin Ren and Pinhas Ben-Tzvi. Learning inverse kine-matics and dynamics of a robotic manipulator using gen-erative adversarial networks. Robotics and AutonomousSystems , 124:103386, 2020.[16] V ́ıctor Garcia Satorras, Emiel Hoogeboom, and MaxWelling. E (n) equivariant graph neural networks. InIntl. Conf. Machine Learning (ICML) , pages 9323–9332,2021.[17] Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learningstructured output representation using deep conditionalgenerative models. In Advances in Neural InformationProcessing Systems , volume 28, 2015.[18] Petar Velickovic, Guillem Cucurull, Arantxa Casanova,Adriana Romero, Pietro Lio, and Yoshua Bengio. Graphattention networks. In Intl. Conf. Learning Representa-tions (ICLR) , 2018.[19] Tim von Oehsen, Alexander Fabisch, Shivesh Kumar, andFrank Kirchner. Comparison of distal teacher learningwith numerical and analytical methods to solve inversekinematics for rigid-body mechanisms. arXiv preprintarXiv:2003.00225 [cs.RO] , 2020. |
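Referring back to the mixture-of-Gaussians prior in Eq. (4) and the sampling step of Algorithm 1, the following is a minimal sketch of drawing L latent samples per node from predicted mixture parameters; the tensor shapes and names are assumptions for illustration only.

```python
import torch

def sample_latents(pi, mu, sigma, L):
    """Draw L latent samples per node from a per-node Gaussian mixture.

    pi:    (N, K)    mixing coefficients, rows sum to 1
    mu:    (N, K, d) component means
    sigma: (N, K, d) component standard deviations (diagonal covariance)
    returns z: (L, N, d)
    """
    N, K, d = mu.shape
    comp = torch.distributions.Categorical(probs=pi).sample((L,))   # (L, N) component choices
    idx = comp.unsqueeze(-1).expand(L, N, d)                        # (L, N, d)
    mu_sel = torch.gather(mu.unsqueeze(0).expand(L, N, K, d), 2,
                          idx.unsqueeze(2)).squeeze(2)              # (L, N, d)
    sd_sel = torch.gather(sigma.unsqueeze(0).expand(L, N, K, d), 2,
                          idx.unsqueeze(2)).squeeze(2)
    return mu_sel + sd_sel * torch.randn(L, N, d)

# Example with arbitrary sizes: 8 nodes, 16 mixture components, 64-d latents.
pi = torch.softmax(torch.randn(8, 16), dim=-1)
z = sample_latents(pi, torch.randn(8, 16, 64), torch.rand(8, 16, 64), L=32)
print(z.shape)  # torch.Size([32, 8, 64])
```

The decoded point sets would then be mapped back to configurations and ranked by pose error, as in the last two steps of Algorithm 1.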
2ZRuyknGLy | MultiFacet: A Multi-Tasking Framework for Speech-to-SignLanguage GenerationMounika Kanakanti∗International Institute of InformationTechnologyHyderabad, Indiamounika.k@research.iiit.ac.inMounika.Kanakanti@mpi.nlShantanu SinghInternational Institute of InformationTechnologyHyderabad, Indiashantanu.singh@research.iiit.ac.inManish ShrivastavaInternational Institute of InformationTechnologyHyderabad, Indiam.shrivastava@research.iiit.ac.inABSTRACTSign language is a rich form of communication, uniquely conveyingmeaning through a combination of gestures, facial expressions, andbody movements. Existing research in sign language generationhas predominantly focused on text-to-sign pose generation, whilespeech-to-sign pose generation remains relatively underexplored.Speech-to-sign language generation models can facilitate effectivecommunication between the deaf and hearing communities. In thispaper, we propose an architecture that utilises prosodic informationfrom speech audio and semantic context from text to generate signpose sequences. In our approach, we adopt a multi-tasking strategythat involves an additional task of predicting Facial Action Units(FAUs). FAUs capture the intricate facial muscle movements thatplay a crucial role in conveying specific facial expressions duringsign language generation. We train our models on an existing IndianSign language dataset that contains sign language videos with audioand text translations. To evaluate our models, we report DynamicTime Warping (DTW) and Probability of Correct Keypoints (PCK)scores. We find that combining prosody and text as input, alongwith incorporating facial action unit prediction as an additionaltask, outperforms previous models in both DTW and PCK scores.We also discuss the challenges and limitations of speech-to-signpose generation models to encourage future research in this domain.We release our models, results and code to foster reproducibilityand encourage future research1.CCS CONCEPTS•Computing methodologies →Neural networks ;Learninglatent representations ;Computer vision ;Information extrac-tion.∗Also with Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands.1https://github.com/Mounika2405/MultiFacet-Speech-to-Sign.gitPermission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than theauthor(s) must be honored. Abstracting with credit is permitted. To copy otherwise, orrepublish, to post on servers or to redistribute to lists, requires prior specific permissionand/or a fee. Request permissions from permissions@acm.org.ICMI ’23 Companion, October 9–13, 2023, Paris, France©2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.ACM ISBN 979-8-4007-0321-8/23/10. . . $15.00https://doi.org/10.1145/3610661.3616550KEYWORDSspeech to sign language, indian sign language, prosody, pose gen-erationACM Reference Format:Mounika Kanakanti, Shantanu Singh, and Manish Shrivastava. 2023. Multi-Facet: A Multi-Tasking Framework for Speech-to-Sign Language Generation.InINTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI’23 Companion), October 9–13, 2023, Paris, France. ACM, New York, NY, USA,9 pages. 
https://doi.org/10.1145/3610661.3616550

1 INTRODUCTION
Sign language is a rich form of communication that seamlessly blends together the fluidity of hand movements and gestures, the expressiveness of facial expressions and head movements, and the subtle nuances of body language. It is this harmony of hand movements and expression that makes it complete and effective. According to the World Health Organization (WHO), over 1.5 billion people, which accounts for approximately 20% of the global population, live with hearing loss, underscoring the importance of accessibility in communication [15]. While the field of Natural Language Processing (NLP) has made remarkable progress in developing language technologies that simplify daily tasks, the advancement in technology to support sign language has not been as substantial [32]. Towards this end, automatic sign language translation and generation systems provide an efficient and accessible means of communication between deaf people and the hearing community.

Recent years have seen a surge of interest in sign language technologies, with researchers exploring various computer vision and deep learning approaches to tackle this complex task [17]. While many of these works utilize text or gloss as input for generation tasks, the area of speech-to-sign language generation remains relatively underexplored [17]. Gloss, often used to represent sign language, has been found to lack accuracy in capturing the complete linguistic and expressive aspects of sign language [29, 35]. A study on the Phoenix dataset [4] showed that a significant portion of the data contained linguistic elements not present in the gloss representation [35]. While text input can help generate semantic signs, incorporating prosodic information extracted from audio can provide more comprehensive data for a richer sign language output [7].

Inspired by the co-speech gesture generation literature [14], which shares similarities with sign language generation, we utilize audio along with text as input to generate sign pose sequences. In this paper, we introduce MultiFacet, an architecture that uses prosodic information derived from speech and semantic information sourced from text. This integrated data serves as the input for generating key points of both facial and hand movements. Furthermore, our approach includes the prediction of Facial Action Units (AUs) within a multi-tasking setup. We evaluate our model using Dynamic Time Warping (DTW) and Probability of Correct Keypoints (PCK) metrics against the existing Indian Sign Language dataset [11] and demonstrate the critical importance of prosody and facial action unit prediction for better sign language generation. In summary, our main contributions are as follows:
• We leverage prosody information from the audio and semantic context from the text for the generation of continuous sign pose sequences.
• We explore the importance of facial action unit prediction for generating hand and face key points in Indian Sign Language.
• We conduct ablation studies and extensively discuss the limitations of our work to inspire future research and advancements in this domain.

2 RELATED WORK
Sign Pose Generation Most of the works in sign language generation are based on text or gloss as inputs [17]. [20] generated continuous hand pose sequences using text as input.
While this is agreat step in the field, it is only a partial representation of sign lan-guage, as facial expressions and body language also play a criticalrole in conveying meaning [ 9,16]. Later works attempted to addressthis limitation by including both manual (hand movements) andnon-manual (facial expressions) features in the generation processbut still relied on text or gloss as input. [ 18] used adversarial trainingfor multichannel sign production with text as input. Furthermore,in another study, [ 21] represented sign sequences as skeletal graphstructures with gloss as an intermediate representation. [ 28] gen-erated key points for hands and face by concatenating embeddingoutputs from a text encoder and a gloss encoder. [ 29] first gener-ated Hamnosys notation from text, which was further converted tocontinuous sign pose sequences. These approaches made stridestowards incorporating non-manual features but still lacked the useof prosodic information as input corresponding to the non-manualfeatures in sign language. [ 19,25] generated photo-realistic signvideos using text as input. They first generated skeleton poses fromtext and then generated sign videos conditioned on these poses.To address this concern of loss of prosody in gloss representa-tion, [ 35] presented gloss enhancement strategies for introducingintensity modifiers in gloss annotations using Phoenix dataset [ 4].Intensity modifiers are the ones that quantify nouns, adjectivesor adverbs in a sentence ((e.g., very happy or little happy). Recentworks explored the use of speech Mel spectrogram inputs to gener-ate hand movements in Indian Sign Language [ 11]. While this ap-proach is a step in the right direction, generating hand movementsalone is insufficient to capture the full extent of sign language.Co-speech Gesture Generation Co-speech gesture genera-tion studies have shown the significance of using both speech andtext as input for generating semantically relevant and rythmic ges-tures [ 14]. [1,12,33] have proposed continuous gesture generationsystems using audio and text as input, further underscoring theimportance of multimodal information for generating meaningfulgestures.Non-Manual Recognition in Sign Language [26] presented3D-CNN based multimodal framework for recognition of gram-matical errors in continuous signing videos belonging to differentsentence types. The methodology they employed encompassedtwo primary stages. Initially, 3D-CNN networks were leveraged torecognise the grammatical elements from manual gestures, facialexpressions, and head movements. Subsequently, a sliding win-dow technique was adopted to establish correlations between thesemodalities, thereby facilitating the detection of grammatical errorsin the signing videos.In this paper, by incorporating prosody and non-manual fea-ture recognition, such as predicting Facial Action Units, we aim toimprove the accuracy and naturalness of sign language generation.3SPEECH TO SIGN LANGUAGE GENERATIONGiven audio and text inputs, our aim is to generate sequences of signposes denoted as S, which include both upper body and face key-points. To accomplish this, we adopt a multi-task learning approach,incorporating a speech encoder, a Facial Action Units decoder, anda sign pose decoder. The overall architecture is illustrated in Figure1.3.1 Input EmbeddingsTo facilitate the generation process, we extract two types of embed-dings from the input data: BERT embeddings for text and Tacotron2 GST[ 30] encodings for audio. 
We use the GST model provided byNVIDIA2which was pre-trained on train-clean-100 subset of Lib-riTTS dataset[ 34] to represent the expressive features in audio. TheBERT embeddings, denoted as Etext, capture the semantic informa-tion embedded within the text, allowing our model to understandthe linguistic context. We represent the input text as a sequenceof tokens{x1,x2,...,xW}, and BERT provides the correspondingembeddings{ex1,ex2,...,exW}with a dimensionality of 768.The Tacotron 2 GST encodings, denoted as Eaudio , extract bothlinguistic content and prosody information from the audio input.The GST model was pre-trained on LibriTTS dataset [ 34] withthe objective of learning a large range of acoustic expressiveness.We represent the audio input as a sequence of mel-spectrograms{m1,m2,...,mT}, where each mel-spectrogram has T×256dimen-sions. Tacotron 2 GST[ 30] provides the corresponding embeddings{em1,em2,...,emT}.3.2 FAUs PreprocessingAmongst various methods for denoting facial expressions, the Fa-cial Action Coding System (FACS) stands as a comprehensive andstandardized tool [ 31]. It has been meticulously designed to de-scribe and analyze these nonverbal cues by precisely identifyingdistinct facial muscle movements. Central to FACS are its actionunits (FAUs), a set of codes representing individual facial muscleactions, which, when combined, proficiently portray a diverse ar-ray of emotions and expressions. As a result of its efficacy, FACSfinds widespread application across various disciplines, includingpsychology, neuroscience, anthropology, and computer graphics,providing an objective and systematic means to categorize andcomprehend facial expressions.2https://github.com/NVIDIA/mellotron/tree/masterMultiFacet: A Multi-Tasking Framework for Speech-to-Sign Language Generation ICMI ’23 Companion, October 9–13, 2023, Paris, FranceFigure 1: The Architecture: We propose a novel architecture to generate sign pose sequences by utilising the prosodic informationfrom speech and semantic context from text. We also incorporate additional decoders to facilitate rich sign pose generation: (i)Facial Action Unit decoder and (ii) Cross modal decoder.While FACS is an index of facial expressions with an anatomicalbasis, it generally does not provide the degree of muscle activation.While there are modifiers that extend this coding system to accom-modate intensities as well, we don’t consider them in our study dueto limited resources and no clear consensus on their use.The use of FACS for sign language translation or generation isrelatively understudied [ 5,6,23]. One of the primary reasons for itslimited use is the costly annotation required for the existing signlanguage datasets. To overcome this issue, we propose using anexisting state-of-the-art model, ME-GraphAU [ 13], to predict theaction units for our chosen dataset and use it as weak-supervisionduring sign-language generation task. We encourage readers torefer to [ 13] for details related to architecture, training dataset andoutput format for the aforementioned model.The output of the chosen model is noisy and lacks temporal con-sistency since the prediction occurs on a per-frame basis. Trainingwith such an output would invariably lead to noisy supervision andpoor learning on the model’s part for the proposed task. 
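As a minimal sketch of obtaining the 768-dimensional per-token text embeddings E_text described in Section 3.1 (the specific BERT checkpoint below is an assumption, since the exact variant is not stated):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Hypothetical checkpoint choice; any BERT-base variant produces 768-d token embeddings.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def text_embeddings(sentence: str) -> torch.Tensor:
    """Return E_text: one 768-d embedding per token {x_1, ..., x_W}."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = bert(**inputs)
    return outputs.last_hidden_state.squeeze(0)  # (W, 768)

print(text_embeddings("a news sentence to be signed").shape)
```

The audio branch is analogous: mel-spectrogram frames are passed through the pretrained Tacotron 2 GST encoder to obtain E_audio.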
As such, we propose a pre-processing pipeline for reducing the noise using the following steps (sketched in code below):

• Threshold the output of the model, using the probabilities as confidence for each action unit, and remove any low-confidence predictions.
• For these pruned predictions, we use linear interpolation to estimate their new values.
• Finally, to reduce the remaining noise, we apply Hanning smoothing over each action unit and obtain the final output. We use a window length of 11, which corresponds to 0.5 seconds at the 24 FPS frame rate of our source videos.

We show an example of the original prediction and the output of each step of the above-mentioned pipeline in Figure 2. Figure 3 shows the ground truth facial action units extracted.

Figure 2: Illustration of the Facial Action Units (FAUs) preprocessing pipeline: thresholding using action unit probabilities, linear interpolation, and Hanning smoothing.
Figure 3: Representation of Ground Truth Facial Action Units, generated using Blender [3] for visualization purposes.
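To make the three cleanup steps concrete, here is a minimal sketch assuming the per-frame action unit probabilities are available as a NumPy array of shape (num_frames, num_AUs); the 0.5 confidence threshold is our own placeholder, not a value reported above.

```python
import numpy as np

def clean_au_predictions(au_probs, threshold=0.5, window=11):
    """Denoise per-frame facial action unit (AU) probabilities.

    au_probs: array of shape (num_frames, num_aus) with per-frame AU probabilities.
    Steps: (1) drop low-confidence predictions, (2) linearly interpolate the gaps,
    (3) smooth each AU track with a Hanning window (11 frames ~ 0.5 s at 24 FPS).
    """
    num_frames, num_aus = au_probs.shape
    frames = np.arange(num_frames)
    hann = np.hanning(window)
    hann /= hann.sum()                        # normalise so smoothing preserves scale
    cleaned = np.zeros_like(au_probs, dtype=float)

    for au in range(num_aus):
        track = au_probs[:, au].astype(float)
        keep = track >= threshold             # step 1: confidence thresholding
        if keep.sum() < 2:                    # nothing reliable for this AU
            continue
        # step 2: linear interpolation over the pruned (low-confidence) frames
        interp = np.interp(frames, frames[keep], track[keep])
        # step 3: Hanning smoothing (reflect-pad so the output keeps its length)
        padded = np.pad(interp, window // 2, mode="reflect")
        cleaned[:, au] = np.convolve(padded, hann, mode="valid")
    return cleaned
```

In practice the threshold would likely be tuned per action unit, but the structure of the pipeline stays the same.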
3.3 Model Components
The input embeddings E_text and E_audio are then passed to their respective encoders in our model:

1. Prosody Encoder: The transformer-based speech encoder, denoted as E_speech, processes the Tacotron 2 GST encodings E_audio to obtain intermediate representations H_speech. This can be expressed as:

H_{speech} = E_{speech}(E_{audio})

2. FAUs Decoder: We incorporate the FAUs prediction task as an additional objective to capture facial expressions. The FAUs decoder, denoted as D_FAUs, processes the Tacotron 2 GST encodings E_audio to predict the Facial Action Units, denoted as FAUs. This can be expressed as:

FAUs = D_{FAUs}(E_{audio})

Facial AUs is a widely used facial expression coding system that consists of a set of action units that correspond to different facial muscle movements. We use a transformer-based decoder [27] for this task and train it using cross-entropy loss:

L_{FAUs} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{i=1}^{M} y_{n,i} \log(p_{n,i})    (1)

where N is the number of training examples, M is the number of Facial Action Units, y_{n,i} is the ground-truth label for the i-th Facial Action Unit in the n-th example (either 0 or 1), and p_{n,i} is the predicted probability for the i-th Facial Action Unit in the n-th example.

3. Sign Pose Decoder: Our sign pose decoder, denoted as D_pose, is a transformer-based autoregressive decoder that takes the intermediate representations H_speech as input to generate the sequence of sign poses S. The keypoints for each frame in the sign pose sequence are represented as a 3D tensor, with dimensions num_frames × 85 × 3. The output of the decoder can be formulated as:

\hat{y}_{n,i} = D_{Pose}(H_{speech,n}, y_{n,0:i-1})    (2)

Note that during training, the decoder uses ground-truth poses as input for stability and faster convergence. During inference, the pose inputs to the decoder are its own predictions up to the given timestep. We use regression loss to train the sign pose decoder, given by:

L_{pose} = \frac{1}{N} \sum_{n=1}^{N} \sum_{i=1}^{85} \| y_{n,i} - \hat{y}_{n,i} \|^2    (3)

where N is the number of training examples, y_{n,i} is the ground-truth value of the i-th keypoint for the n-th example, and \hat{y}_{n,i} is the predicted value of the i-th keypoint for the n-th example.

4. Cross-Modal Discriminator: We use the same discriminator used by [11] to match the speech segments with corresponding pose sequences. The loss for the cross-modal discriminator can be defined as follows:

L_{GAN_G} = \frac{1}{N} \sum_{n=1}^{N} \log(1 - D_{\text{cross-modal}}(H_{speech,n}, \hat{y}_n))    (4)

L_{GAN_D} = -\frac{1}{N} \sum_{n=1}^{N} \left[ \log(D_{\text{cross-modal}}(H_{speech,n}, y_n)) + \log(1 - D_{\text{cross-modal}}(H_{speech,n}, \hat{y}_n)) \right]    (5)

where D_{cross-modal} is the cross-modal discriminator, H_{speech,n} is the intermediate representation for the n-th example obtained by the prosody encoder, and y_n and \hat{y}_n are the ground-truth and predicted pose sequences respectively. L_{GAN_D} and L_{GAN_G} are the standard binary cross-entropy losses used for discriminator and generator respectively.

3.4 Multi-Tasking Setup
We use a weighted sum of the losses from the individual decoders to compute the overall loss:

L_{total} = \lambda_{FAUs} \cdot L_{FAUs} + \lambda_{pose} \cdot L_{pose} + \lambda_{discriminator} \cdot L_{GAN_G}

where \lambda_{FAUs}, \lambda_{pose}, and \lambda_{discriminator} are hyperparameters that control the relative importance of the FAUs loss, pose loss, and discriminator loss, respectively. The weights for each task are chosen to balance the contribution. All the decoders are trained in a multitasking setup. The model is trained to minimize the multitasking loss L_{total} using gradient-based optimization techniques.

4 IMPLEMENTATION DETAILS
We set up our transformer model with two layers for both encoders and decoders, each equipped with eight attention heads. Both encoders and decoders use a hidden size of 512. We use the Adam optimiser with an initial learning rate of 0.001, which can be reduced if the training plateaus. We apply gradient clipping with a threshold of 5.0 and use a batch size of 32 for training efficiency. We incorporate Future Prediction as proposed by [20]. The training loss function includes L1 regularisation along with losses for specific components, each weighted accordingly. For the loss function, the values for \lambda_{Pose}, \lambda_{FAUs}, \lambda_{Discriminator} are 1, 0.001, 0.0001 respectively.
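To illustrate how the weighted objective and the optimisation settings above fit together, here is a minimal PyTorch-style sketch of one training step. The model, discriminator, and batch field names are our own placeholders; only the loss weights, learning rate, and clipping threshold are taken from Section 4, binary cross-entropy stands in for Eq. (1), and the discriminator is assumed to return raw logits.

```python
import torch
import torch.nn.functional as F

LAMBDA_POSE, LAMBDA_FAUS, LAMBDA_DISC = 1.0, 0.001, 0.0001   # weights from Section 4

def training_step(model, discriminator, optimizer, batch):
    """One optimisation step of the multi-task objective L_total."""
    h_speech = model.prosody_encoder(batch["gst_audio"])            # H_speech
    fau_logits = model.fau_decoder(batch["gst_audio"])              # FAU predictions
    pose_pred = model.pose_decoder(h_speech, batch["gt_poses"])     # teacher forcing

    # L_FAUs: cross-entropy over the action units (binary CE as a stand-in for Eq. 1)
    loss_faus = F.binary_cross_entropy_with_logits(fau_logits, batch["gt_faus"])
    # L_pose: regression loss over the 85 keypoints per frame (Eq. 3)
    loss_pose = F.mse_loss(pose_pred, batch["gt_poses"])
    # L_GAN_G: generator side of the cross-modal discriminator (Eq. 4)
    d_fake = discriminator(h_speech, pose_pred)
    loss_gan_g = torch.log(1.0 - torch.sigmoid(d_fake) + 1e-8).mean()

    loss_total = (LAMBDA_FAUS * loss_faus
                  + LAMBDA_POSE * loss_pose
                  + LAMBDA_DISC * loss_gan_g)

    optimizer.zero_grad()
    loss_total.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)  # clipping threshold 5.0
    optimizer.step()
    return loss_total.item()

# Optimiser as described above (Adam with an initial learning rate of 0.001):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```

The discriminator update (Eq. 5) would be a separate, analogous step on the discriminator's own parameters.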
5 EXPERIMENTS

5.1 Dataset
The dataset used in our study is the continuous Indian Sign Language dataset, which was released by [11]. This dataset contains sign videos along with corresponding audio and text transcription, covering various topics such as current affairs, sports, and world news. The dataset comprises 9137 videos and has a vocabulary size of 10k. To represent the sign videos in our analysis, we extracted 3D joint position keypoints using Mediapipe [8]. This process involved detecting 37 landmark points for the eyes, eyebrows, lips, and face outline, along with 6 landmark points for the shoulders, elbows, and hips. Additionally, each hand was represented with 21 landmark points, bringing the total to 85 keypoints for the upper body, hands, and face.

5.2 Baseline Models
Text2Sign: We adopt the progressive transformers introduced by [20] as the foundation of our approach. We extend their proposed architecture and train them on the Indian Sign Language Dataset with 3D keypoints for face and upper body.
Speech2Sign: [11] utilised mel spectrograms as input to generate sign pose sequences of hand movements. They incorporate a text decoder and a cross-modal discriminator for learning the correlation between speech and sign pose sequences. We extend this architecture to generate face and body keypoints and consider it as our baseline.

5.3 Evaluation Metrics
Dynamic Time Warping (DTW): DTW [10] is an evaluation metric for speech-to-sign-language generation models that assesses the alignment between the predicted sign language sequences and the ground truth sign language sequences. Let P = (p_1, p_2, ..., p_M) denote the predicted sign language sequence, where p_i represents the i-th pose in the predicted sequence, and M is the length of the predicted sequence. Similarly, let the ground truth sign language sequence be denoted as G = (g_1, g_2, ..., g_N), where g_i represents the i-th pose in the ground truth sequence, and N is the length of the ground truth sequence. DTW aims to find an optimal alignment between the sequences P and G by introducing a warping path W = (w_1, w_2, ..., w_K), where w_k = (i, j) denotes the alignment of p_i in the predicted sequence with g_j in the ground truth sequence. The warping path satisfies the conditions w_1 = (1, 1), w_K = (M, N), and w_k − w_{k−1} ∈ {(1, 0), (0, 1), (1, 1)}, allowing for insertions, deletions, and matches between the sequences. The objective of DTW is to minimize the accumulated cost along the warping path W, which is defined by a distance or similarity measure between the individual poses in the sequences. Let d(p_i, g_j) represent the distance between p_i and g_j in the pose space. The accumulated cost C(W) along the warping path W is given by:

C(W) = \sum_{k=1}^{K} d(p_{w_k}, g_{w_k})

To compute the final DTW score, we aim to find the optimal warping path W* that minimizes the accumulated cost C(W):

DTW(P, G) = \min_{W} C(W)

The DTW score provides a measure of the alignment between the predicted and ground truth sign language sequences, considering the temporal differences and variations in the movement patterns. A lower DTW score indicates a better alignment and higher similarity between the sequences.

Probability of Correct Keypoints (PCK): PCK [2, 24] is a widely used evaluation metric to assess the accuracy of pose estimation models. It measures the percentage of correctly predicted keypoints within a certain threshold distance compared to the ground truth keypoints. Let G = {g_1, g_2, ..., g_N} be the set of ground truth keypoints, and P = {p_1, p_2, ..., p_N} be the set of predicted keypoints. Each keypoint, g_i or p_i, consists of (x, y, z) coordinates representing the position of a particular body part, such as a hand or face. To compute the PCK score, we need to define a threshold distance δ. For each ground truth keypoint g_i, we check if there exists a corresponding predicted keypoint p_j within the threshold distance δ. If such a predicted keypoint exists, and its distance to the ground truth keypoint is less than or equal to δ, we consider it a correct prediction. Mathematically, the PCK score can be computed as follows:

PCK = \frac{1}{N} \sum_{i} \delta(g_i, p_i)

where N is the total number of keypoints, and \delta(g_i, p_i) is an indicator function defined as:

\delta(g_i, p_i) = \begin{cases} 1 & \text{if } \|g_i - p_i\| \le \delta \\ 0 & \text{otherwise} \end{cases}

Here, ||g_i − p_i|| represents the Euclidean distance between the ground truth keypoint g_i and the predicted keypoint p_i. The PCK score is then calculated as the average of the indicator values over all keypoints. It represents the percentage of keypoints that have been correctly predicted within the specified threshold distance δ. A higher PCK score indicates better accuracy and alignment between the predicted and ground truth keypoints. In the context of sign language generation models, PCK can be used to evaluate the quality of the generated sign language poses by comparing them to the ground truth poses. However, it is important to note that PCK only considers individual keypoints and does not capture the overall spatial or temporal coherence of the generated sign language sequences.
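For reference, a minimal NumPy sketch of both metrics, assuming each pose frame is flattened into a vector; the choice of Euclidean frame distance for d(p_i, g_j) and the PCK threshold value are left open above and are treated here as inputs.

```python
import numpy as np

def dtw_score(pred, gt):
    """DTW(P, G): minimal accumulated frame distance over all monotone warping paths.

    pred: (M, D) predicted poses; gt: (N, D) ground-truth poses, each frame flattened.
    """
    M, N = len(pred), len(gt)
    cost = np.full((M + 1, N + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            d = np.linalg.norm(pred[i - 1] - gt[j - 1])   # d(p_i, g_j)
            # steps (1,0), (0,1), (1,1) allow insertions, deletions, and matches
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[M, N]

def pck_score(pred, gt, threshold):
    """PCK: fraction of keypoints whose Euclidean error is within `threshold`.

    pred, gt: (num_keypoints, 3) arrays of (x, y, z) keypoints for one frame.
    """
    dists = np.linalg.norm(pred - gt, axis=-1)   # ||g_i - p_i||
    return float(np.mean(dists <= threshold))    # average of the indicator values
```

In practice both scores would be aggregated over all frames and test clips before being reported.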
Table 1: Comparison of Dynamic Time Warping (DTW) and Probability of Correct Keypoints (PCK) scores with baselines on dev and test sets. B+F indicates a model that predicts body+face keypoints. PE - Prosody Encoder; TE - Text Encoder.

Model                          DTW Score ↓   PCK ↑
Dev set
Text -> Sign [20]              19.55         0.61
Speech2Sign [11]               15.94         0.72
PE + TE -> Sign                16.1          0.74
PE + TE -> Sign + FAUs         13.37         0.79
Test set
Text -> Sign [20]              22.55         0.59
Speech2Sign [11]               14.08         0.78
PE + TE -> Sign                17.3          0.72
PE + TE -> Sign + FAUs         13.23         0.81

5.4 Results and insights
We report DTW [10] and Probability of Correct Keypoints scores on the Indian Sign Language dataset and compare them with the results of both the Text2Sign [20] and Speech2Sign [11] methods. From Table 1 we observe that our model performs significantly better than the existing Speech2Sign [11] method. Figure 4 shows sample qualitative results. An interesting observation from the provided sample results, as well as other instances in our evaluation, is that while our model encounters challenges in accurately capturing the precise positions of hands and facial features in specific frames, these representations exhibit a visual similarity to the target RGB frames. It is worth noting, however, that minor disparities in hand positions and facial expressions can convey substantially different meanings in sign language. Consequently, we refrain from drawing definitive conclusions from our qualitative assessments and defer such considerations to future research endeavors.

6 ABLATION ANALYSIS
To evaluate the contribution of each component in our proposed architecture, we conduct ablation studies on our model. Specifically, we perform experiments where we remove each component from the multitasking setup one by one and compare the results with the full model. Table 2 summarizes the results of our ablation studies. As can be seen, removing the FAUs decoder results in a drop in performance on both metrics. The results demonstrate the effectiveness of our multitasking approach in leveraging multiple modalities for sign language generation. However, we observe that the results are still close to the model that uses only the text encoder. In summary, our ablation studies demonstrate the effectiveness of our multitasking approach in leveraging multiple modalities for sign language generation.

Table 2: Comparison of ablation studies. PE - Prosody Encoder; TE - Text Encoder.

Model                            DTW Score ↓   PCK ↑
TE -> Sign                       13.82         0.81
TE -> Sign + FAUs                15.69         0.78
PE -> Sign                       17.16         0.73
PE -> Sign + FAUs                14.52         0.75
PE + TE -> Sign + FAUs (Ours)    13.23         0.81

7 LIMITATIONS & CHALLENGES
Evaluation Methods: Although our model has achieved state-of-the-art results based on DTW scores, it is essential to conduct human evaluation with expert sign language interpreters to ensure the quality and relevance of the generated sign language. DTW scores only assess the alignment between ground truth poses and predicted poses but do not measure the correlation with the input speech.
Correlating these scores with human evaluation ratings iscrucial for understanding the model’s performance in real-worldcommunication scenarios. Metrics that measure the coherence andsynchronization of other non-manual elements, such as body pos-ture, head movements, and eye gaze are also necessary [ 26]. There-fore, when designing a sign language generation model, accountingfor these linguistic elements and their dynamic interactions is es-sential to produce more accurate and culturally appropriate signlanguage outputs.Fine Movements: The current model successfully learns coarsehand movements but lacks the ability to capture fine movementsof fingers and facial parts (See Figure 5 in Appendix A)s. This lim-itation is attributed to the use of Mean Squared Error (MSE) loss,which penalizes larger movements more than fine movements. Toaddress this issue, alternative loss functions, such as a keypoint lossproposed by [ 22], can be explored. This loss involves a hand key-point discriminator pre-trained on 2D hand poses and may improvethe model’s capability to generate more accurate and intricate handmovements.More Linguistic Information: One significant challenge liesin handling the sequential nature of input speech or text, as op-posed to the simultaneous nature of sign language. Speech unfoldsin a linear manner, and sign language relies on the integrationof multiple components in parallel. Thus, capturing and mappingthese linguistic structures effectively requires specialized attention.Understanding how signers use space, directionality, and facialexpressions to indicate different grammatical constructs is crucialfor generating natural and contextually appropriate sign language.Currently, our model focuses primarily on generating hand and fa-cial movements, neglecting other crucial components. Future workshould explore incorporating non-manual markers, body language,and gaze direction into the generation process to enhance the natu-ralness and comprehensiveness of sign language communication.Errors in Skeleton Pose Extraction: One of the significantchallenges in sign language generation is accurately extracting theMultiFacet: A Multi-Tasking Framework for Speech-to-Sign Language Generation ICMI ’23 Companion, October 9–13, 2023, Paris, FranceFigure 4: Qualitative Results illustrating the input text, the original video, the ground truth pose, and the predicted pose.skeleton pose from the input video or speech. The skeleton poseserves as a crucial input to the model, representing the keypoint po-sitions of the signer’s hands, face, and body movements. Althoughadvanced pose estimation techniques like Mediapipe provide robustkeypoint predictions, there are inherent limitations and errors thatcan impact the overall performance of the sign language generationmodel. Sign language videos captured in real-world settings maycontain various forms of noise, occlusions, and artifacts. These im-perfections can lead to inaccuracies in the pose estimation process,resulting in incorrect keypoint positions. For instance, backgroundclutter, complex hand gestures, or fast movements may obscurethe hand keypoints, leading to incomplete or noisy pose represen-tations. Additionally, sign language involves intricate hand andfinger movements that can sometimes be challenging to discernaccurately (See Figure 6 in Appendix A ). The dynamic nature ofsign language requires precise identification of hand shapes, fingerpositions, and gestures. 
However, the inherent ambiguity in certainsigns or gestures can lead to misinterpretations and inaccuracies inthe extracted skeleton pose.Pose Representation: The representation of sign language askeypoint sequences in videos is abstract and results in the lossof some skeletal information. This may lead to some loss of fine-grained details in the generated sign language. Future researchcould explore alternative representations that preserve more intri-cate skeletal information for more accurate sign language genera-tion.Dataset Size and Variety: Our current dataset size and varietymight be limited, which could impact the model’s ability to capturethe full complexity and richness of sign language. Expanding thedataset or exploring low-resource training techniques is essentialto improve the model’s generalization and performance on diversesigning styles and linguistic patterns.Signer Style: Sign language relies on the signer’s individualstyle and preferences, which can significantly affect the model’sperformance. Investigating the impact of varying signer styles onthe model’s output and devising methods to adapt the model todifferent signing styles are critical for real-world applicability.In conclusion, while our model shows promising results in gener-ating sign language from speech, there are several limitations andchallenges that need to be addressed in future work.8 CONCLUSIONIn this paper, we introduced a multi-tasking approach, the Multi-Facet model, for generating sign language poses from input speechand text. Our model goes beyond just hand movements, also cap-turing facial expressions, resulting in a more comprehensive repre-sentation of sign language.To assess the effectiveness of our model, we conducted experi-ments on the Indian Sign Language dataset provided by [ 11]. Byincorporating a pre-trained prosody encoder and utilizing FacialAction Units, we achieved even better results, surpassing previousmethods. The potential applications of our approach extend beyondsign language communication.ICMI ’23 Companion, October 9–13, 2023, Paris, France Mounika Kanakanti, Shantanu Singh, and Manish ShrivastavaAlthough we achieved better results with the proposed approach,there is significant room for further advancements in several as-pects, including the datasets, methodologies, understanding of theintricate relationship between speech and sign language, and evalu-ation methods. We hope that our work will inspire further researchin this area and contribute to improving accessibility and inclusivityfor the deaf and hard-of-hearing community.9 ETHICAL CONSIDERATIONSIn our study, it is important to acknowledge that we have employeda limited dataset of Indian sign language videos, primarily sourcedfrom YouTube. While this dataset served as a valuable startingpoint for our investigation into speech-to-sign language generationmodels, we recognise its inherent limitations regarding representa-tiveness for the broader sign language community. It is essentialto emphasize that the models proposed in this paper are only toexplore the role of prosody in speech-sign language generationmodels and are not suitable for direct deployment due to their in-sufficient scope and potential biases. Moreover, we acknowledgethat a critical aspect, validation with signers, has not been fullyundertaken within the scope of this study. 
This is a significantlimitation that warrants further attention and validation in futureresearch endeavours.REFERENCES[1]Chaitanya Ahuja, Dong Won Lee, Ryo Ishii, and Louis-Philippe Morency. 2020.No Gestures Left Behind: Learning Relationships between Spoken Language andFreeform Gestures. In Findings of the Association for Computational Linguistics:EMNLP 2020 . Association for Computational Linguistics, Online, 1884–1895.https://doi.org/10.18653/v1/2020.findings-emnlp.170[2]Mykhaylo Andriluka, Leonid Pishchulin, Peter Gehler, and Bernt Schiele. 2014.2D Human Pose Estimation: New Benchmark and State of the Art Analysis. In2014 IEEE Conference on Computer Vision and Pattern Recognition . 3686–3693.https://doi.org/10.1109/CVPR.2014.471[3]Blender Foundation. 2023. Blender . https://www.blender.org/ Computer software.[4]Necati Camgoz, Simon Hadfield, Oscar Koller, Hermann Ney, and Richard Bowden.2018. Neural Sign Language Translation. https://doi.org/10.1109/CVPR.2018.00812[5]Emely Pujólli da Silva, Paula Dornhofer Paro Costa, Kate Mamhy Oliveira Ku-mada, and José Mario de Martino. 2021. Facial action unit detection methodologywith application in Brazilian sign language recognition. Pattern Analysis andApplications 25 (2021), 549 – 565. https://api.semanticscholar.org/CorpusID:239656376[6]Emely Pujólli da Silva, Kate Mamhy Oliveira Kumada, and Paula Dornhofer ParoCosta. 2021. Analysis of Facial Expressions in Brazilian Sign Language (Libras).European Scientific Journal, ESJ (2021). https://api.semanticscholar.org/CorpusID:237828197[7]Svetlana Dachkovsky and Wendy Sandler. 2009. Visual Intonation in the Prosodyof a Sign Language. Language and Speech 52, 2-3 (2009), 287–314. https://doi.org/10.1177/0023830909103175 arXiv:https://doi.org/10.1177/0023830909103175PMID: 19624033.[8]Ivan Grishchenko and Valentin Bazarevsky. 2020. MediaPipe Holistic — Simulta-neous Face, Hand and Pose Prediction, on Device . https://ai.googleblog.com/2020/12/mediapipe-holistic-simultaneous-face.html[9]Carlos Gussenhhoven and Aoju Chen. 2020. The Oxford Handbook of Lan-guage Prosody . Oxford University Press. https://doi.org/10.1093/oxfordhb/9780198832232.001.0001[10] Peter J. Huber. 1964. Robust Estimation of a Location Parameter. The Annalsof Mathematical Statistics 35, 1 (1964), 73 – 101. https://doi.org/10.1214/aoms/1177703732[11] Parul Kapoor, Rudrabha Mukhopadhyay, Sindhu Hegde, Vinay Namboodiri, andC.V. Jawahar. 2021. Towards Automatic Speech to Sign Language Generation.3700–3704. https://doi.org/10.21437/Interspeech.2021-1094[12] Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Eje Henter, Si-mon Alexandersson, Iolanda Leite, and Hedvig Kjellström. 2020. Gesticulator:A Framework for Semantically-Aware Speech-Driven Gesture Generation. InProceedings of the 2020 International Conference on Multimodal Interaction (VirtualEvent, Netherlands) (ICMI ’20) . Association for Computing Machinery, New York,NY, USA, 242–250. https://doi.org/10.1145/3382507.3418815[13] Cheng Luo, Siyang Song, Weicheng Xie, Linlin Shen, and Hatice Gunes. 2022.Learning Multi-dimensional Edge Feature-based AU Relation Graph for FacialAction Unit Recognition. In Proceedings of the Thirty-First International JointConference on Artificial Intelligence . International Joint Conferences on ArtificialIntelligence Organization. https://doi.org/10.24963/ijcai.2022/173[14] Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter,and Michael Neff. 2023. A Comprehensive Review of Data-Driven Co-SpeechGesture Generation. 
CoRR abs/2301.05339 (2023). https://doi.org/10.48550/arXiv.2301.05339 arXiv:2301.05339[15] World Health Organization. 2023. Hearing Loss. https://www.who.int/health-topics/hearing-loss#tab=tab_2 Accessed: 21-07-2023.[16] Roland Pfau and Josep Quer. 2010. Nonmanuals: their grammatical andprosodic roles . Cambridge University Press, 381–402. https://doi.org/10.1017/CBO9780511712203.018[17] Razieh Rastgoo, Kourosh Kiani, Sergio Escalera, Vassilis Athitsos, and MohammadSabokrou. 2022. All You Need In Sign Language Production. http://arxiv.org/abs/2201.01609 arXiv:2201.01609 [cs].[18] Ben Saunders, Necati Cihan Camgoz, and Richard Bowden. 2020. AdversarialTraining for Multi-Channel Sign Language Production. arXiv:2008.12405 [cs.CV][19] Ben Saunders, Necati Cihan Camgoz, and Richard Bowden. 2020. EverybodySign Now: Translating Spoken Language to Photo Realistic Sign Language Video.arXiv:2011.09846 [cs.CV][20] Ben Saunders, Necati Cihan Camgoz, and Richard Bowden. 2020. ProgressiveTransformers for End-to-End Sign Language Production. http://arxiv.org/abs/2004.14874 arXiv:2004.14874 [cs].[21] Ben Saunders, Necati Cihan Camgoz, and Richard Bowden. 2021. Skeletal GraphSelf-Attention: Embedding a Skeleton Inductive Bias into Sign Language Produc-tion. arXiv:2112.05277 [cs.CV][22] Ben Saunders, Necati Cihan Camgoz, and Richard Bowden. 2022. Signing at Scale:Learning to Co-Articulate Signs for Large-Scale Photo-Realistic Sign LanguageProduction. arXiv:2203.15354 [cs.CV][23] Emely Pujólli da Silva, Paula Dornhofer Paro Costa, Kate Mamhy Oliveira Ku-mada, and José Mario De Martino. 2020. SILFA: Sign Language Facial ActionDatabase for the Development of Assistive Technologies for the Deaf. In 202015th IEEE International Conference on Automatic Face and Gesture Recognition (FG2020) . 688–692. https://doi.org/10.1109/FG47880.2020.00059[24] T. Simon, H. Joo, I. Matthews, and Y. Sheikh. 2017. Hand Keypoint Detectionin Single Images Using Multiview Bootstrapping. In 2017 IEEE Conference onComputer Vision and Pattern Recognition (CVPR) . IEEE Computer Society, LosAlamitos, CA, USA, 4645–4653. https://doi.org/10.1109/CVPR.2017.494[25] Stephanie Stoll, Necati Cihan Camgoz, Simon Hadfield, and Richard Bowden.2020. Text2Sign: Towards Sign Language Production Using Neural MachineTranslation and Generative Adversarial Networks. Int. J. Comput. Vision 128, 4(apr 2020), 891–908. https://doi.org/10.1007/s11263-019-01281-2[26] Elahe Vahdani, Longlong Jing, Yingli Tian, and Matt Huenerfauth. 2020. Recogniz-ing American Sign Language Nonmanual Signal Grammar Errors in ContinuousVideos. http://arxiv.org/abs/2005.00253 arXiv:2005.00253 [cs].[27] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is AllYou Need. http://arxiv.org/abs/1706.03762 arXiv:1706.03762 [cs].[28] Carla Viegas, Mert İnan, Lorna Quandt, and Malihe Alikhani. 2022. IncludingFacial Expressions in Contextual Embeddings for Sign Language Generation.http://arxiv.org/abs/2202.05383 arXiv:2202.05383 [cs].[29] Harry Walsh, Ben Saunders, and Richard Bowden. 2022. Changing the Representa-tion: Examining Language Representation for Neural Sign Language Production.InProceedings of the 7th International Workshop on Sign Language Translationand Avatar Technology: The Junction of the Visual and the Textual: Challengesand Perspectives . European Language Resources Association, Marseille, France,117–124. 
https://aclanthology.org/2022.sltat-1.18[30] Yuxuan Wang, Daisy Stanton, Yu Zhang, RJ Skerry-Ryan, Eric Battenberg, JoelShor, Ying Xiao, Fei Ren, Ye Jia, and Rif A. Saurous. 2018. Style Tokens: Unsu-pervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis.arXiv:1803.09017 [cs.CL][31] Wikipedia contributors. 2023. Facial Action Coding System — Wikipedia, TheFree Encyclopedia. https://en.wikipedia.org/w/index.php?title=Facial_Action_Coding_System&oldid=1171456612. [Online; accessed 30-August-2023].[32] Kayo Yin, Amit Moryossef, Julie Hochgesang, Yoav Goldberg, and MaliheAlikhani. 2021. Including Signed Languages in Natural Language Process-ing. arXiv:2105.05222 [cs] (July 2021). http://arxiv.org/abs/2105.05222 arXiv:2105.05222.[33] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, JaehongKim, and Geehyuk Lee. 2020. Speech Gesture Generation from the TrimodalContext of Text, Audio, and Speaker Identity. ACM Trans. Graph. 39, 6, Article222 (nov 2020), 16 pages. https://doi.org/10.1145/3414685.3417838MultiFacet: A Multi-Tasking Framework for Speech-to-Sign Language Generation ICMI ’23 Companion, October 9–13, 2023, Paris, France[34] Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J. Weiss, Ye Jia, Zhifeng Chen,and Yonghui Wu. 2019. LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech. arXiv:1904.02882 [cs.SD][35] Mert İnan, Yang Zhong, Sabit Hassan, Lorna Quandt, and Malihe Alikhani. 2022.Modeling Intensification for Sign Language Generation: A Computational Ap-proach. (2022). https://doi.org/10.48550/ARXIV.2203.09679 Publisher: arXivVersion Number: 1.A QUALITATIVE RESULTSFigure 5: Sample result showing the model’s accurate handmovement prediction with inaccurate finger movements.Figure 6: Mediapipe Errors. The keypoints for the fourthframe in the first video and the sixth frame in the secondvideo are predicted incorrectly due to fast/blurry movementswhereas the keypoints for the third frame in the secondvideo are predicted incorrectly as it contains a complex handgesture. |
-OmGWX-wRM | Look What I Made It Do - The ModelIT Method for ManuallyModeling Nonverbal Behavior of Socially Interactive AgentsAnna Lea Reinwarth1, Tanja Schneeberger1, Fabrizio Nunnari1, Patrick Gebhard1, Uwe Altmann2,Janet Wessler11firstname_middlename.lastname@dfki.de, German Research Center for Artificial Intelligence (DFKI), SaarlandInformatics Campus, Saarbruecken, Germany2firstname.lastname@medicalschool-berlin.de, Medical School Berlin (MSB), Berlin, GermanyABSTRACTNonverbal behavior of socially interactive agents (SIAs) is often au-tomatically generated and identical across all users. This approach,though economic, might have counterproductive effects when de-signing applications for diverse and vulnerable populations. Also,it might negatively impact research validity and diminish the effec-tiveness of SIA-based interventions. This paper presents argumentsfor and proposes a method to model nonverbal behavior in SIAs.The ModelIT method enables researchers to ground the modellingof nonverbal behavior in psychological theories. It aims at estab-lishing a standardized and replicable method that promotes openscience practices and facilitates the creation of tailored SIAs. It isa step towards barrier-free and accessible SIA applications acrossdiverse populations. The necessity, guidelines, and limitations ofthe ModelIT method are thoroughly addressed.CCS CONCEPTS•Human-centered computing →HCI design and evaluationmethods ;Interaction design process and methods ;Accessi-bility design and evaluation methods .KEYWORDSSocially Interactive Agents, Nonverbal Behavior, Method, Replica-bility, Accessibility, Adult Attachment, Cultural DifferencesACM Reference Format:Anna Lea Reinwarth1, Tanja Schneeberger1, Fabrizio Nunnari1, PatrickGebhard1, Uwe Altmann2, Janet Wessler1. 2023. Look What I Made It Do -The ModelIT Method for Manually Modeling Nonverbal Behavior of SociallyInteractive Agents. In INTERNATIONAL CONFERENCE ON MULTIMODALINTERACTION (ICMI ’23 Companion), October 9–13, 2023, Paris, France. ACM,New York, NY, USA, 5 pages. https://doi.org/10.1145/3610661.36165491 INTRODUCTIONNonverbal behavior plays a fundamental role in human interaction,enriching and underlining communication [ 10]. While humans pri-marily engage with other humans, in recent years these interactionsinclude objects such as computers. In human-computer interaction,Permission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than theauthor(s) must be honored. Abstracting with credit is permitted. To copy otherwise, orrepublish, to post on servers or to redistribute to lists, requires prior specific permissionand/or a fee. Request permissions from permissions@acm.org.ICMI ’23 Companion, October 9–13, 2023, Paris, France©2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.ACM ISBN 979-8-4007-0321-8/23/10. . . $15.00https://doi.org/10.1145/3610661.3616549the study of nonverbal behavior has garnered increased attention,particularly when it involves embodied and humanoid socially in-teractive agents (SIAs). While for specialized interpersonal taskshumans primarily engage with other humans, SIAs are becomingmore and more capable of entering these spaces and performingthese tasks. 
In a world with a rising shortage of specialists, em-bodied SIAs are the future of health-care [ 15,27], teaching [ 6],coaching [ 35,36], and many more previously human-dominatedfields targeting diverse user groups with diverse needs. Vulnerablepopulations (e.g., people in in-patient facilities) make up a huge tar-get audience for SIA applications, such as therapy-accompanyingSIAs that help users identify and evaluate automatic thoughts [ 38].Though modeling of nonverbal SIA behavior has been an importantresearch focus for years [ 30], there is currently no standardizedapproach for how to model the nonverbal SIA behavior for userswith diverse needs (e.g., diverse and vulnerable populations). Theseneeds have to be identified, extracted, carefully operationalized,implemented, and validated to create SIAs that can successfullysupport their intended population. Therefore, this paper aims toestablish and advocate for a standardized approach — the ModelITmethod — for extracting nonverbal cues from literature and apply-ing these to SIA animation. The goal is to create specialized andoptimized SIAs that can confidently serve their intended purpose.It is a step towards barrier-free and equality-focused applications.The proposed ModelIT method seeks to minimize subjective biases,to ensure maximum objectivity, and to increase replicability whilealso considering current limitations of available technical tools.2 BACKGROUND2.1 Nonverbal Behavior in Human-Human-InteractionNonverbal behavior — including posture, movements, and facialexpressions [ 23] — serves several important functions in human-human interaction [ 24]. It is a fundamental part of building rapport[39], eliciting trust [ 18], and establishing how a person is perceived[9]. Nonverbal behavior has been studied among the general popu-lation for a long time, but in recent years research has also focusedon the many existing interpersonal differences, e.g. differences onthe BIG-5 personality traits [ 5,7,11] and identified specific needsof different populations.2.1.1 Clinical and neurodiverse populations. The display as well asthe interpretation of nonverbal cues differ greatly when focusingon clinical or neurodiverse populations. Such populations mighteven perceive nonverbal cues as negative which are generally re-garded and assessed as positive. For example, eye contact is usuallyICMI ’23 Companion, October 9–13, 2023, Paris, France Reinwarth et al.associated with social presence [ 8] and is therefore used to buildtrust and signify active listening to the interaction partner [ 12].However, autistic people can find direct eye contact uncomfortable[32]. Consequently, nonverbal behavior should be actively adaptedwhen interacting with an autistic person with that specific need tomake them feel more comfortable.2.1.2 Culturally diverse populations. Nonverbal behavior and itscommunicated meaning also varies significantly between differentcultures [ 21]. When those differences during interactions with in-dividuals from different cultural backgrounds are not considered,miscommunication is likely to occur. In some Asian countries likeChina, a head-shake is a nonverbal cue for "yes" and it is seen asrude to contradict another person [ 22]. In many central Europeancountries, a head-shake means "no" and contradicting is a normalpart of an interaction. 
Such differences can easily generate conflictwhen the interactants are not aware of them.2.2 Nonverbal Behavior inHuman-SIA-InteractionSocially Interactive Agents (SIAs) are virtually or physically em-bodied entities capable of communicating autonomously and em-pathetically with human users and other agents. They can use awide range of multi-modal behaviors [ 19]. Because humans tend tointeract similarly with machines as with other humans, the prin-ciples of human-human-interaction can be applied to human-SIAinteraction [31]. Users expect their SIA partner to act and react incertain ways and attribute human characteristics accordingly [ 41].Thus, the use of certain nonverbal behavior influences how SIAsare perceived by a user [ 14,33,41]. The modeling of nonverbal SIAbehavior has been an important research focus for years [ 30] and re-mains important with the technology becoming more sophisticated.The procedures of modeling nonverbal behavior of SIAs shouldtherefore also be periodically reconsidered and standardized.Automatic generation of nonverbal behavior has been com-monly used in the field of SIAs [ 33]. Many SIA systems utilizepre-programmed algorithms to automatically generate gestures,facial expressions, body movements, and other nonverbal cues.While this approach guarantees a consistent user-experience, itdoes not fully address the many interpersonal differences in humaninteraction. Those differences become more relevant the more SIAapplications exist for a broader spectrum of people.2.2.1 Clinical and neurodiverse populations. Eye contact is gener-ally associated with social presence [ 8]. Therefore SIAs are oftenmodeled to seek eye contact with their user to build trust. Whenautistic people interact with a SIA designed in that way, however,eye contact could even backfire — thus compromising the applica-tion’s effectiveness. It is therefore essential to model eye contactspecifically for the various applications of SIAs designed for autisticpeople [ 28]. This is only one use-case in which identical automati-cally generated nonverbal behavior can be counterproductive.2.2.2 Culturally diverse populations. Culturally varying nonverbalbehavior patterns must be considered when developing SIA appli-cations that are used interculturally. For example, a SIA-applicationdesigned for a North American user-base might completely lose itsfunctionality when implemented in a Syrian refugee program.2.3 Attachment StyleAn attachment style is an internal representation and pattern of therelationship dynamics with one’s close others [ 20,37]. While firstobserved and measured in early infancy [ 1], the concept of adultattachment [ 16] argues that such learned patterns are transferredinto adulthood. Various categorization systems for differentiatingspecific attachments styles are used by researchers. The most com-mon distinction amongst these systems is between a secure andan insecure attachment style [ 1]. Some systems use continual di-mensions to measure attachment style [ 4]: attachment anxiety andattachment avoidance. A person can score high or low on either,leading to four categories: secure (low anxiety/avoidance), preoc-cupied (high anxiety, low avoidance), dismissing (low anxiety, highavoidance), and fearful (high anxiety/avoidance). A person’s attach-ment style influences their nonverbal behavior in attachment re-lated situations [ 3]. 
Securely attached people show more nonverbalcloseness (laughing, touching, gazing, and smiling) than avoidantpeople during an interaction with their partner [ 40]. People withhigh attachment anxiety use nonverbal cues of anger in attachmentrelated situations [ 25] while people with high attachment avoid-ance employ distancing strategies by inhibiting nonverbal cues oftheir feelings [26, 34].3 THE MODELIT METHODThe ModelIT (Model it! Modeling nonverbal behavior from lITera-ture) method (Figure 1) is a method for standardized modeling ofnonverbal behavior in SIAs. It can be applied to a wide range ofuse-cases and used to define SIA characteristics and specific needsof diverse user populations. It consists of five steps that lead froma research level to an applied level: (1) literature review; (2) non-verbal behavior extraction; (3) nonverbal cue operationalization;(4) nonverbal cue implementation; and (5) validation. These fivesteps are a necessary modeling process because nonverbal cues arefrequently not defined in an applicable way in the existing liter-ature. Especially when designing a SIA for a specific population,most nonverbal behavior is extremely complex and only vaguely de-scribed over a vast quantity of research papers. For example, peoplewith high attachment anxiety regulate distress by seeking closeness[2]. This cannot be directly implemented into an animation but hasto undergo a modeling process to be actually usable.In our application, the nonverbal behavior of the SIA has beenmanually authored. However, given the possibility to express suchrules with a formalism that can be programmatically interpreted, itis straightforward to foresee an automatic behavior modeling byemploying a rule-based inference system [29].3.1 Literature ReviewThe first step involves conducting an in-depth review of existingliterature on nonverbal behavior, social interaction, and relevantfields of the examined use-case. The goal is to gather a compre-hensive understanding of different types of nonverbal behaviorexamined in research and to extract them from specific papers.There are many well-documented guidelines to find papers for lit-erature reviews and meta-analyses [17] which should be followedhere for extracting the literature about nonverbal behavior. Theapproach should be well documented for future replicability (e.g.,Look What I Made It Do - The ModelIT Method for Manually Modeling Nonverbal Behavior of Socially Interactive Agents ICMI ’23 Companion, October 9–13, 2023, Paris, FranceFigure 1: ModelIT methodused websites, search words, search date). This step provides thefoundational knowledge required for all subsequent steps.3.2 Nonverbal Behavior ExtractionAfter collecting all relevant papers, specific nonverbal behavior hasto be identified and extracted. This requires taking relevant quotesabout, e.g., facial expressions, body movements, gestures, and eyecontact directly from papers to preserve nuanced details as accu-rately as possible. The quotes are then categorized and describedin a spreadsheet with corresponding citations in a structured andstandardized manner.3.3 Nonverbal Cue OperationalizationAfter extracting nonverbal behavior, the next step is to operational-ize it into actionable nonverbal cues for the SIA. The challengehereby is to accurately depict the collected research findings. 
Thisincludes taking the research quotes, transforming abstract behaviorinto concrete movements and adding them to the spreadsheet.3.4 Nonverbal Cue ImplementationThe list of operationalized nonverbal cues can now be implementedfor the specific used animation system. The cues should be exam-ined one by one and individually translated into available anima-tions. This can be challenging because many existing SIA animationsystems only have predefined fixed animations. Those need to beindividually evaluated whether they actually convey the intendedmeaning. For example, an animation called "puzzled" is not neces-sarily categorized by users as puzzled. The end result should be aspreadsheet with directly usable animations and rules as how touse them. The actual animation process can now easily be donewhile following the predefined rules.3.5 ValidationThe last step consists of validating the resulting SIA behavior. Thisstep is crucial to actually test the intended effect of customized SIAs.Two separate types of validation should be considered: validationof the accuracy of the nonverbal behavior (experts) and validationof the functionality of the nonverbal behavior (target population).If resources are limited, validating if the nonverbal behavior fulfillsits function for the specific target population should be prioritized.Accuracy can then be validated informally and consensus-based byat least two experts.4 APPLICATION OF THE MODELIT METHODWe applied the ModelIT method to the modeling of nonverbalbehavior of SIAs representing people with different attachmentstyles. The following section does not show the documentationprocess exhaustively, but it exemplifies the ModelIT method stepby step and highlights its importance.4.1 Literature ReviewThe goal of the literature review was finding nonverbal behaviorof different attachment styles. The first step was implementing asearch strategy. Search terms were for example: "attachment stylenonverbal", "attachment presentation nonverbal", "attachment stylebehavior", "adult attachment behavior",... The used sites were "Webof Science" and "GoogleScholar". We decided to only use nonver-bal behavior of attachment style in adults and not children. Thechosen categorization system for the SIA was: secure, dismissing,and preoccupied. Importantly, we still wanted to utilize literaturereferring to every other attachment style classification system. Thisstrategy was adopted to extract as much information as possiblebecause literature on this specific topic is sparse. The classificationsystems are comparable and therefore convertible into each other,which was done in a later step. However, it is crucial to note that indifferent scenarios such comparability might not be given, thus ne-cessitating a comprehensive documentation of inclusion/exclusioncriteria.4.2 Nonverbal Behavior ExtractionThe result of our literature review was a list of papers with infor-mation about nonverbal behavior for the dimensions attachmentanxiety (high) and attachment avoidance (high/low) as well as thecategories secure, insecure generally, preoccupied, and dismissingattachment. We organized them into a spreadsheet and sorted thenonverbal behavior into this framework with direct quotes.4.3 Nonverbal Cue OperationalizationThe step from abstract nonverbal behavior to concrete movementsis very complex due to the nature of the preexisting literature. 
Non-verbal behavior is often ambivalent or not precisely defined andcannot be categorized into replicable movements — it needs to befurther operationalized and formalized. For example, dismissingattachment compared to secure is associated with less movementin general and less movement complexity [ 3]. Applying such find-ings to modeling SIA behavior shows the complexity. We defined abaseline behavior that this behavior varies from. Here, the baselineis the nonverbal behavior of secure attachment. Then we decidedwhich movements will be shown less, for example, full body, hand,head, or even every kind of movement. The concept of movementcomplexity is also ambivalent and could apply to many facets andtypes of movements, making further formalization necessary. WeICMI ’23 Companion, October 9–13, 2023, Paris, France Reinwarth et al.Table 1: Example: ModelIT spreadsheet for high Attachment AvoidanceReference Extraction Operationalization ApplicationMikulincer & Shaver (2005) [26] “blunted affect” smiles without teeth emot.smile (not emot.happy)neutral facial expression as baseline emot.bored as baselinelow frequency of facial expression changes emot.bored > 80 percent of the timeonly small movements sad03 (sigh, tiny head-shake, no full-body movement)Fraley & Shaver (1998) [13] “frequent avoidant behaviors” turning away lookto.right80.01looking away from the user lookat.07.01keeping distance hand movement number.handl.5avoiding eye contact eye contact < 50 percent of the timechose standard movements for secure attachment style and thensearched for similar movements with less intensity (e.g., "shruggingmotion" as a shoulder movement with simultaneous arm move-ments vs. shoulder movement only). Quantifying "intensity" whileworking with fixed animations can be challenging, and should relyon the consensus of several raters. Individuals with insecure at-tachment tend to show discrepancies between verbal content andnonverbal behavior [ 3]. Therefore, the content of the spoken textneeds to contradict the content of the nonverbal behavior whileconsidering intercultural differences: In a central European culture,a discrepancy is created when the SIA says "yes" while shaking theirhead. This effect vanishes for a Chinese SIA [ 22]. Further, for ourexample, individuals typically only express nonverbal behavior pat-terns of attachment style in specific attachment related situations,e.g. while taking about their early caregivers [ 3]. This is addressedby defining attachment related situations based on research andlabeling them in the action/interaction. The SIA then displays themodeled behavior only during those labeled sections.4.4 Nonverbal Cue ImplementationEvery operationalized nonverbal cue on the spreadsheet was matchedwith available animations in Vuppetmaster1– a tool for modelingSIA behavior. This resulted in a detailed and comprehensible spread-sheet used to animate videos of SIAs displaying nonverbal behaviorof secure, dismissing, or preoccupied attachment. Table 1 showsan excerpt from our spreadsheet, including the referenced paper,abstract concept, operationalization, and application in the form ofa Vuppetmaster command. See Figure 2 for examples.4.5 ValidationIt must be validated that e.g., dismissingly attached people actuallyshow the modeled nonverbal behavior. This should involve expertslike psychotherapists. 
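The rule-based encoding foreseen in Section 3 could, for instance, represent rows of the spreadsheet excerpted in Table 1 as machine-readable entries that are consulted for each labeled dialogue segment. The sketch below is our own illustration of such an encoding; the sources, extracted cues, operationalizations, and Vuppetmaster command names are taken from Table 1, while the data structure and selection function are hypothetical.

```python
# Each rule ties a literature-derived cue to its operationalization and an animation command.
ATTACHMENT_RULES = {
    "high_avoidance": [
        {"source": "Mikulincer & Shaver (2005)", "extraction": "blunted affect",
         "operationalization": "smiles without teeth", "command": "emot.smile"},
        {"source": "Mikulincer & Shaver (2005)", "extraction": "blunted affect",
         "operationalization": "neutral facial expression as baseline", "command": "emot.bored"},
        {"source": "Fraley & Shaver (1998)", "extraction": "frequent avoidant behaviors",
         "operationalization": "looking away from the user", "command": "lookat.07.01"},
        {"source": "Fraley & Shaver (1998)", "extraction": "frequent avoidant behaviors",
         "operationalization": "turning away", "command": "lookto.right80.01"},
    ],
}

def select_commands(attachment_style, attachment_related):
    """Return the animation commands to schedule for the current dialogue segment."""
    if not attachment_related:
        return []  # modeled cues are only displayed in labeled attachment-related situations
    return [rule["command"] for rule in ATTACHMENT_RULES.get(attachment_style, [])]
```

Frequency constraints such as "emot.bored > 80 percent of the time" or "eye contact < 50 percent of the time" would be enforced by a scheduler on top of this lookup.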
Then, the functionality of the SIAs must bevalidated and rated by people with the corresponding attachment.5 DISCUSSIONIn this paper, the five-step ModelIT method is proposed for model-ing nonverbal behavior in SIAs: (1) literature review; (2) nonverbalbehavior extraction; (3) nonverbal cue operationalization; (4) non-verbal cue implementation; and (5) validation. Scientific literaturefrequently lacks information about nonverbal behavior of specificpopulations. When available, it can be ambivalent and vague. There-fore, it has to be carefully examined and evaluated to operationalizeit into actionable cues. This should be done transparently whiledocumenting each modeling step and whether it is research- orconsensus-based. The ModelIT method gives researchers a strong1https://vuppetmaster.de/Figure 2: Examples of nonverbal Cues for a: idle behavior, b:secure attachment, c: preoccupied attachment, d: dismissingattachmenttheoretical foundation and transparent guide to ground their mod-eling of nonverbal behavior in psychological theories.5.1 Limitations and Future WorkWhile the ModelIT method is based on a thorough theoretical foun-dation, one goal for future work is to empirically compare it to au-tomatic nonverbal behavior generation. Our approach to modelingnonverbal behavior is more time-consuming than automatic genera-tion. But the output of the ModelIT method can be used to employ arule-based inference system for automatic behavior modeling [ 29].Additionally, the benefit for vulnerable and diverse populationsjustifies additional costs and time investment. Despite best effortsto produce a standardized procedure, the proposed method canstill (though less) be impacted by human biases. Especially whenthe team of researchers modeling the SIA is homogeneous (e.g.,concerning cultural background, gender identity). This must becounteracted with the proposed documentation to make humanerror at least traceable and therefore hopefully solvable.6 CONCLUSIONThere is a growing need for careful modeling of nonverbal SIAbehavior to further a barrier-free and intercultural world. Interper-sonal differences need to be considered in order to minimize biases.Moreover, transparent documentation is an essential step towardsopen science and accessible SIA applications. Therefore, this paperintroduced the ModelIT method for modeling appropriate nonver-bal behavior in SIAs, enhancing their ability to engage with specificuser populations that have been overlooked in the past.ACKNOWLEDGMENTSThis work is funded by the German Federal Ministry for Educationand Research (BMBF) within the UBIDENZ project (funding code13GW0568D). Thanks to Shailesh Mishra for making the TaylorSwift joke that turned into the title of this paper.Look What I Made It Do - The ModelIT Method for Manually Modeling Nonverbal Behavior of Socially Interactive Agents ICMI ’23 Companion, October 9–13, 2023, Paris, FranceREFERENCES[1]Mary D Salter Ainsworth, Mary Catherine Blehar, Everett Waters, andSally Nichols Wall. 1978. Patterns of attachment: A psychological study of thestrange situation . Lawrence Erlbaum.[2]Ravin Alaei, Germain Lévêque, Geoff MacDonald, and Nicholas O Rule. 2020.Accuracy and bias in first impressions of attachment style from faces. Journal ofPersonality 88, 5 (2020), 940–949.[3]Uwe Altmann, Catharina Friemann, Theresa S Frank, Mareike C Sittler, DésiréeSchoenherr, Sashi Singh, Susan Schurig, Bernhard Strauss, and Katja Petrowski.2021. 
|
k0lKzukz1E | 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100101102103104105106107108109110111112113114115116A Methodology for Evaluating Multimodal Referring ExpressionGeneration for Embodied Virtual AgentsAnonymous Author(s)∗ABSTRACTRobust use of definite descriptions in a situated space often involvesrecourse to both verbal and non-verbal modalities. For IVAs, vir-tual agents designed to interact with humans, the ability to bothrecognize and generate non-verbal and verbal behavior is a criticalcapability. To assess how well an IVA is able to deploy multimodalbehaviors, including language, gesture, and facial expressions, wepropose a methodology to evaluate the agent’s capacity to gener-ate object references in a situational context, using the domain ofmultimodal referring expressions as a use case. Our contributionsinclude: 1) developing an embodied platform to collect human refer-ring expressions while communicating with the IVA. 2) comparinghuman and machine-generated references in terms of evaluableproperties using subjective and objective metrics. 3) reporting pre-liminary results from trials that aimed to check whether the agentcan retrieve and disambiguate the object the human referred to,if the human has the ability to correct misunderstanding usinglanguage, deictic gesture, or both; and human ease of use whileinteracting with the agent.CCS CONCEPTS•Human-centered computing →HCI design and evaluationmethods ;•Computing methodologies →Natural languagegeneration .KEYWORDSEmbodied agents, non-verbal behaviours, multimodality, referringexpression generationACM Reference Format:Anonymous Author(s). 2023. A Methodology for Evaluating Multimodal Re-ferring Expression Generation for Embodied Virtual Agents. In Proceedingsof Make sure to enter the correct conference title from your rights confirma-tion emai (Conference acronym ’XX). ACM, New York, NY, USA, 10 pages.https://doi.org/XXXXXXX.XXXXXXX1 INTRODUCTIONRecent achievements in generative language modeling, of whichOpenAI’s ChatGPT is an exemplar, have demonstrated remarkableabilities in producing topically coherent, grammatically correct,and contextually appropriate text. Prior to the generative AI boom,language models such as BERT [ 10] and GPT-2 [ 54] achieved statePermission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than ACMmust be honored. Abstracting with credit is permitted. To copy otherwise, or republish,to post on servers or to redistribute to lists, requires prior specific permission and/or afee. Request permissions from permissions@acm.org.Conference acronym ’XX, June 03–05, 2018, Woodstock, NY©2023 Association for Computing Machinery.ACM ISBN 978-1-4503-XXXX-X/18/06. . . $15.00https://doi.org/XXXXXXX.XXXXXXXof the art results on various language processing tasks. It may betempting, therefore, to believe that language generation for con-versational agents (CAs) is a solved problem. However, a commoncritique of large language models (LLMs) is that they lack ground-ingorunderstanding . 
Bender and Koller [ 4] argue that learningonly from the textual form does not provide information about the“meaning” connecting utterance to communicative intent.Humans, meanwhile, communicate in multiple non-verbal modal-ities, and mix these fluently with verbal modalities. A telling exam-ple is the ability of a human to answer a question like “what am Ipointing at?” with appropriate situational context, which even amultimodal LLM like GPT-4 cannot. Given the recent developmentsin language modeling, we can expect the ability to fluently mix andmatch modalities to be a critical capability in the next generationof CAs. As interactive agents become more sophisticated, and seeand interpret both visual and linguistic context concurrently, userswill expect them to behave more like humans.Agent embodiment is one channel to provide information neededto enable CAs to understand language in context. If one modality(e.g., language) is not communicative, another modality (e.g., ges-ture) can be used to disambiguate or correct the failure. As objectsin a shared situated context provide anchors for the construction ofcommon ground between interlocutors [ 7,50,51], a valuable usecase to understand multimodal language use in context is multi-modal referring expressions (MREs) that exploit informationabout both object characteristics and locations [ 8]. It is thereforenecessary to come up with principled strategies to evaluate mixed-modality referring expression generation systems.In this paper, we propose a methodology to carefully evaluategeneration of multimodal referring expressions by a particular classof CAs, namely embodied interactive virtual agents (IVAs), withthe goal of aiding the development of IVAs that interact with hu-mans with symmetrical, bidirectional use of non-verbal and verbalbehavior. Our novel contributions are:•An embodied virtual agent testbed with an IVA who usesgesture and language [ 26,40] to elicit MREs from humans;•Establishing bidirectional and symmetric communicationbetween humans and IVAs using verbal and non-verbalbehavior synthesis;•Evaluation metrics thereof that apply to both humans andIVAs, combining qualitative and quantitative metrics;•Analysis of preliminary data gathered from interactionswith our test agent.2 RELATED WORKThe psycholinguistic literature shows the impact of deictic gestureon the successful communication of intent and reference for bothspeakers and hearers [ 17,41]. Nonetheless, much earlier work in thearea of referring expression (RE) generation has focused on linguis-tic description, such as relative and absolute properties of objects1117118119120121122123124125126127128129130131132133134135136137138139140141142143144145146147148149150151152153154155156157158159160161162163164165166167168169170171172173174Conference acronym ’XX, June 03–05, 2018, Woodstock, NY Anon.175176177178179180181182183184185186187188189190191192193194195196197198199200201202203204205206207208209210211212213214215216217218219220221222223224225226227228229230231232(e.g., size and color) [ 16,61], spatial references [ 12,32,37], and re-lational episodic descriptions [ 13]. Where non-verbal information,such as deictic gesture, is considered, much prior work focuses onRE comprehension rather than generation, e.g., [ 5,35,52,57], andadditionally typically lacks features related to agent embodiment[22,23]. Where generation is addressed [ 13], it is often separatedfrom comprehension. 
As such, we seek to build and evaluate modelsfor generating MREs that are fluent and clear, and symmetric andbidirectional in the context they exploit when compared to human-generated REs. Doing so requires developing evaluation metricsthat indicate when IVA-generated non-verbal behavior provides ameaningful boost in communicative capability compared to verbalbehavior only.Datasets. A number of datasets and corpora exist of human-generated descriptions of target objects in visual scenes, includingBishop [ 18], Drawer [ 63], GRE3D3 [ 64], TUNA [ 16], RS-VS [ 37],and recent corpora by Kunze et al. [ 32] and Doğan et al. [ 12]. OtherRE corpora collected for the purpose of training comprehensionmodels fall into three categories—verbal references only [ 6,9,20,39,42,45,67], gestures only [ 56,58,59], and embodied multimodalREs including language and gesture [30, 55].Metrics. Correspondence between human corpora and machinegenerated references can be measured either by automatic metricsor human judgments. Overlap in the properties of human and ma-chine descriptions can been computed according to Dice Coefficient[11], MASI [ 44], Levenshtein Distance [ 34], BLEU [ 43], ROUGE[36], CIDER [ 62], or METEOR [ 2]. Alternatively, human judgescan evaluate generated REs according to adequacy of reference ornaturalness. While adequacy is evaluated by object identificationtasks [ 12,13,15,32], naturalness is evaluated by (1) metrics such aserror rate, identification time, and reading time [ 3,29] or (2) humanranking of generated references for objects in a set of images orvideos [12, 30, 32].Prior work on embodied agents argues for the role of embodi-ment in representing the salient content of objects in a scene [ 49],in contributing to mutual understanding [ 25], and in evaluating theoutputs of interactive systems [ 31]. Relatedly, Kozierok et al. [ 21] ar-gue that evaluating multimodal interactions require a combinationof quantitative and qualitative criteria, particularly in task-basedsituations. We therefore present a task-oriented setting designed torequire the use of MREs, and a proposal for evaluating how non-verbal strategies complement verbal strategies for situated meaning[53].In the remainder of this paper, we will discuss the platform weuse to collect and generate MREs in a human-agent interaction(Sec. 3), specify the evaluation metrics we propose to use (Sec. 4),present preliminary results of initial data collected according to theproposed evaluation (Sec. 5), and discuss future directions (Sec. 6).3 METHODOLOGYFirst, we develop an interactive virtual agent system for an objectidentification task that interprets human language and simulatedgesture inputs, and responds with language and animated gestures.We then proposed metrics to address the fluency and clarity ofreferring expressions used. Since our goal is to create symmetric,bidirectional communication between humans and agents, thesemetrics may apply to either human or agent behaviors, and wecompare the use of verbal and non-verbal modalities. We thenanalyze preliminary data for indications of where human and agentuse of different modalities aids communication, for the purposes ofassessing the contribution of non-verbal behavior to the interaction.3.1 Interactive Virtual Agent (IVA) DevelopmentTheDiana system [ 26,47] was developed as a collaborative virtualagent who responds to instructions given via both live gesture andspeech and collaborates with humans in situated task-based inter-actions. 
We adapted the existing system into a standalone versionwhere human participants are presented with a sequence of 10scenes, each involving (1) ten equally sized target blocks randomlyplaced on a table that (in simulated units in the Unity-based envi-ronment) is approximately 1.6m wide. There are two of each colorof block: red, green, blue, pink, and yellow; and (2) two landmarkobjects ( plate andcup) available for use when describing the tar-get blocks. This setting requires the IVA to ask for disambiguationbased on factors like color and location if needed, and the humanto provide complex descriptions including verbal (e.g., relational,historical) references, non-verbal (e.g., deictic pointing) references,or ensemble. Diana initially asks a question, e.g., “Which objectshould we focus on?”, as shown in Fig. 1, without providing anyprior knowledge of what she understands, e.g., specific domainwords or actions. Participants are informed that they are able touse multiple input channels, e.g., automatically recognized speechand mouse-based deixis, to clearly express their intent. To replicatethe variability in pointing displayed in the Diana system with livegesture recognition, and the gesture-semantic notion of a pointingcone [24], the center of deixis fluctuates within a circle of radius±0.3m around the mouse location and the size of the deictic reticle(see Fig. 1) randomly fluctuates in size within a range of 14–186%of the default radius (17.32cm). This variability prevents users fromrelying on fully accurate pointing with the mouse as a method ofunambiguously indicating objects, and encorages the use of speechinput for object specification.Figure 1: Experimental Diana System: the purple circle indi-cates where the user is pointing. Without disambiguation,any object within the pointing circle is a potential candi-date for a deixis-only RE. Diana’s utterances both appear onscreen and are spoken aloud via TTS.2233234235236237238239240241242243244245246247248249250251252253254255256257258259260261262263264265266267268269270271272273274275276277278279280281282283284285286287288289290A Methodology for Evaluating Multimodal Referring Expression Generation for Embodied Virtual Agents Conference acronym ’XX, June 03–05, 2018, Woodstock, NY291292293294295296297298299300301302303304305306307308309310311312313314315316317318319320321322323324325326327328329330331332333334335336337338339340341342343344345346347348Table 1: Predicate logic format (PLF) transformation for co-gestural verbal REs (Att _RE: Attributive RE, Trans_RE: TransitiveRE, Rel_RE: Relational RE, Hist_RE: Historical RE, and Comp_RE: Compound RE).∗Numerals in brackets denote variablesthat must be assigned from prior conversational or non-verbal context (e.g., “it,” “there,” etc.).Speech Prompt PLF Verbal Non-Verbal RE TypePick up that red block grasp(that(red(block))) ✓ ✓ (Att_RE)Put this block to the right of the blue block put(this(block),right(the(blue(block)))) ✓ ✓ (Trans_RE)Grasp the green block beside the plate grasp(the(green(beside _adj(plate(block)))) ✓ - (Rel_RE)Lift the block you just put down lift(the(put_adj(block))) ✓ - (Hist_RE)Take this block and put it there take(this(block))+Put({0},{1})∗✓ ✓ (Comp_RE)Interpreting Verbal and Non-Verbal Expressions. Multimodal refer-ring expressions can be considered special cases of gesture utterancesas specified in [ 48], in that they contain a gestural component anda verbal component that must be unified for a complete interpreta-tion by either human or machine. 
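To make the pointing-cone simulation and the gesture-speech unification just described concrete, the following Python sketch reproduces the quoted parameters (a perturbation circle of radius 0.3m around the mouse position and a reticle rescaled to 14-186% of its 17.32cm default) over a simplified scene representation. The Block class, the ground_mre function and the colour-only verbal filter are illustrative assumptions for this sketch, not the actual Unity/VoxWorld implementation.

import math
import random
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    color: str
    x: float  # table coordinates in metres
    z: float

DEFAULT_RETICLE_R = 0.1732  # 17.32 cm default reticle radius

def jittered_deixis(mouse_x, mouse_z):
    # Perturb the deictic centre within a 0.3 m circle and rescale the
    # reticle to 14-186% of its default radius, as described above.
    angle = random.uniform(0.0, 2.0 * math.pi)
    dist = random.uniform(0.0, 0.3)
    centre_x = mouse_x + dist * math.cos(angle)
    centre_z = mouse_z + dist * math.sin(angle)
    radius = DEFAULT_RETICLE_R * random.uniform(0.14, 1.86)
    return centre_x, centre_z, radius

def ground_mre(scene, mouse_x, mouse_z, spoken_color=None):
    # Unify the two modalities: the jittered reticle yields a candidate set,
    # and a verbal attribute (here only colour) narrows it down.
    cx, cz, radius = jittered_deixis(mouse_x, mouse_z)
    candidates = [b for b in scene if math.hypot(b.x - cx, b.z - cz) <= radius]
    if spoken_color is not None:
        candidates = [b for b in candidates if b.color == spoken_color]
    return candidates  # more than one candidate -> ask for disambiguation

Because several equally sized blocks can fall inside the enlarged reticle, a deixis-only prompt often leaves more than one candidate, which is precisely the situation that pushes participants to add speech and the agent to ask clarification questions.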
In addition, MREs may be mixedwith unimodal REs in a discourse, but even unimodal REs may relyon meaning that was previously established in the discourse usingmultimodal communication. Therefore, our motivation for devel-oping a bidirectional evaluation scheme is to create methodologiesfor evaluating combined verbal and non-verbal behavior that applyequally well to human and IVA behaviors.We follow an analysis of the EGGNOG dataset, a collection ofhuman-human interactions in a Blocks World domain [ 65], whereinhuman-generated verbal REs are expected to fall into three complexcategories, potentially involving both verbal and non-verbal con-tent: Attributive REs , which describe object properties; RelationalREs, which describe objects in relation to each other; and Histor-ical REs , which describe objects already mentioned or interactedwith. All three of these may be aligned with deictic gesture, butin different ways. To replicate these exhibited interpretive capabil-ities, we first developed four main algorithms to interpret verbalREs: (1) <ParsingToPLF > recursively follows a set of rules, usingthe Stanford CoreNLP dependency tree [ 38] to compose linguis-tic constituents into a predicate logic format (PLF). Table 1 showsthe PLFs of different speech inputs and whether they need to beaccompanied by non-verbal information for a complete interpre-tation. Multimodal references are interpreted with respect to theVoxML modeling language [ 33,46] and the scene in the VoxWorldsimulation platform [ 27,28]. (2) The < RelationalRE > algorithmleverages spatial relations between objects that are tracked by theVoxWorld platform using calculi such as RCC-3D [ 1]. The inter-preter extracts mentioned objects, localizes the target relative toother objects, and acts upon it as shown by command #7 in Fig. 2.(3) The <HistoricalRE > algorithm processes those sub-predicatesthat indicate actions that have previously been taken in the dia-logue, e.g., in Fig. 2 #9, by extracting objects that were the subjectsof previous actions. (4) The < AttributiveRE > algorithm interpretsattributes of the object (in this setting, color or demonstratives).There are two subcategories of attributive REs: Transitive REs , anattributive RE embedded within a transitive clause (see Table 1);andCompound REs involving multiple events, as shown by Fig. 2 #1.A simple, non-transitive, non-compound attributive RE is shownin Fig. 2 #3. This utterance also shows how demonstratives mayinvolve the human deictically disambiguating the target. Deicticinputs can also be sequential pointing prompts with no speech, e.g.,Fig. 2 #13–16, in which the human points to BlueBlock then to theGreenBlock to instruct Diana to put the blue block on the greenblock.human: [1] Take the red block and put it to the left of the plate.diana: [2] Which red block?human: [3] This one. [ Points to a red block ]diana: [4] OK! [ Acts on the red block ]human: [5] Grasp this pink block and put it on the red block.[points to a pink block ]diana: [6] Do you mean the red block I just put down? [ Pointsto a red block ]human: [7] No, the red block behind the blue block.diana: [8] OK! [ Acts on the red block ]human: [9] Pick up the block that you just grasped.diana: [10] I’ve just grasped the pink block (that is) on thered block. Is this what you mean? [ points to a pink block ]human: [11] Yes.diana: [12] OK! 
[ Acts on the pink block ]human: [13] [ Points to a blue block ]diana: [14] [ Points to the same blue block ]human: [15] [ Points over the green block ]diana: [16] [ Puts the blue block on the green block ]Figure 2: Sample dialogue: the interaction from 1–12 is mul-timodal (co-gestural speech) and from 13–16 is unimodal(deictic gesture only).Generating Verbal and Non-Verbal Expressions. In addition to in-terpreting multimodal inputs, being able to generate non-verbalbehavior is essential for interactive agents to add social fluencyto the interaction [ 66]. Diana is able to generate speech via text-to-speech, deictic gesture via animation and inverse kinematicsexecuted on her body rig, and action by manipulating virtual ob-jects in the scene. (1) When the human indicates a block withoutsupplying an action to execute, Diana points to it, confirming un-derstanding of the RE with her own deictic RE, as shown in Fig.3. (2) She directly acts on all aforementioned verbal prompts (e.g.,multimodal commands in Fig. 2, #1–12) by either disambiguatingcandidate target objects or carrying out the requested action inthe virtual space. (3) She also acts on non-verbal prompts (e.g., uni-modal commands in Fig. 2 from 13-16) by performing the denotedactions after the human specifies the focus and target locations. (4)As shown in Fig. 4, she expresses emotions (e.g., confusion and joy),in response to human inputs, such as being confused when thereis an ambiguity in RE or action interpretation, or joy at havinginterpreted an input successfully. Appropriate generation, then,3349350351352353354355356357358359360361362363364365366367368369370371372373374375376377378379380381382383384385386387388389390391392393394395396397398399400401402403404405406Conference acronym ’XX, June 03–05, 2018, Woodstock, NY Anon.407408409410411412413414415416417418419420421422423424425426427428429430431432433434435436437438439440441442443444445446447448449450451452453454455456457458459460461462463464becomes a question of correctly generating the content of an ut-terance, movement through space of a gesture, or specific facialexpression at the right time, to serve a communicative purpose.Figure 3: Generating deictic gestures. Diana will respond towhat she interprets the RE as referring to by pointing toit, which can be used to assess the correctness of her objectgrounding depending on which object the human actuallyintended to reference.4 EVALUATIONWith the goal to enable bidirectional communication between ma-chines and humans using multimodal referring expressions as atestbed use case, specific evaluable properties must be enumeratedto demonstrate where a fully-symmetrical system is more success-ful than one that maintains communicative asymmetry between thetwo interlocutors. The key research question with evaluation is: dothe metrics used clearly establish whether both interlocutors are ableto extract the communicative intents of the others from their behav-ior?Therefore, good metrics will answer if the non-verbal behaviorgeneration methods used for an IVA is effectively contributing tothe human interlocutor’s understanding, as defined as the abilityto extract communicative intent from utterances and actions. 
Weconsider properties that are related to deictic and linguistic contextawareness, as used in the evaluation of human-machine collabo-ration [ 21], and propose quantitative and qualitative metrics thatassess the following properties of multimodal RE usage in a task-based environment: 1) efficient and collaborative task completion,2) software reliability and consistency, 3) ability of humans andmachines to understand diverse communications, and 4) agent con-tribution of meaningful content. The version of the Diana systemdescribed above is presented to human subjects to collect samples(a) (b)Figure 4: Diana’s facial expressions. (a) Confusion (e.g., undo-ing an action or responding to a negative acknowledgment).(b) Joy (e.g., welcoming users at the beginning of interactionsor responding to a positive acknowledgment).of bidirectional collaborations and evaluate successful multimodalcommunication strategies for RE generation using both loggedinteractions and human judgments.4.1 Human-Machine Collaboration DataCollectionDuring a single human-agent interaction session, the participantviews 10 scenes containing 10 randomly-placed target objects tobe referenced. Referencing is considered successful when Diana isable to ground the human’s MRE to the same object as the humanintends to describe. The IVA’s and participant’s utterances, non-verbal behavior, and actions are logged (e.g., Fig. 5) for analysis andfuture training and evaluating of multimodal referring expressiongeneration models.4.2 Evaluation MetricsTo evaluate the success of the IVA w.r.t. the key characteristics ofhuman-machine collaboration from Sec. 4, we define 19 metrics asfollows:(1) Multimodal Prompt Completion Efficiency (MPCE).(2) Linguistic Prompt Completion Efficiency (LPCE).The difference in target identification and the related task comple-tion times when using multimodal REs vs. verbal only REs indicatesthe increase in RE effectiveness when using multimodal generationvs. linguistic generation methods only.(3)Human-machine completion efficiency (HMCE): Time taken tocomplete the task. 
Since the task as a whole is normalized (anobject referencing with 10 scenes each containing 10 objects),completion time can be directly related to referring strategiesused by each interlocutor.(4)Machine Appropriate Response Success Rate (MARSR): Rateof IVA responses to human prompts that are not followed by anegative response (e.g., no, nevermind).(5)Proceed Without Reset (PWR): Rate of interactions that proceedwithout resets.(6)Machine Interpretation of Human Communication (MIHC):Rate of correctly executed prompts.(7)Machine Interpretation of Relational REs (MIRRE): Rate of cor-rectly executed relational prompts.(8)Machine Interpretation of Historical REs (MIHRE): Rate of cor-rectly executed historical prompts.(9)Human Interpretation Efficiency of Machine Communication(HIEMC): Time from generation of machine’s reference to targetidentification by human.(10) Agent Pointing Success Rate (APSR): Rate of agent successfullypointing out the target object.(11) Mutual Contribution Success Rate (MCSR): Difference betweennumber of verbose human turns and verbose agent turns (“ver-bose” being defined as a meaningful contribution beyond posi-tive or negative acknowledgement or disambiguatory question—in our MRE use case this typically means a distinct referringexpression).(12) Machine-generated referring expressions (MGRE): Rate of machine-generated referring expressions compared to total utterances/discourse moves.4465466467468469470471472473474475476477478479480481482483484485486487488489490491492493494495496497498499500501502503504505506507508509510511512513514515516517518519520521522A Methodology for Evaluating Multimodal Referring Expression Generation for Embodied Virtual Agents Conference acronym ’XX, June 03–05, 2018, Woodstock, NY523524525526527528529530531532533534535536537538539540541542543544545546547548549550551552553554555556557558559560561562563564565566567568569570571572573574575576577578579580(13) Recognition of Previously Mentioned Entities (RPME): Rateof previously mentioned entities grounded at the end of eachdiscourse move.(14) Machine Historical Referencing Success (MHRS): Rate of histor-ical references generated by the agent relative to total numberof generated REs.(15) Machine Relational Referencing Success (MRRS): Rate of re-lational references generated by the machine relative to totalnumber of generated REs.The above metrics 1–15 are all calculated directly from datalogged during human-agent interactions. The following metrics arecollected post facto from the judgments of 3rd-party evaluators (seeSec. 
6.1).(16) Machine Object Identification Success Rate (MOISR): Rate ofcorrectly identified objects (by machine).(17) Human Object Identification Success Rate (HOISR): Rate ofcorrectly identified objects (by humans).(18) Machine References Fluency Rate (MRFR): Rate of top-ratedmachine references according to 3rd-party human judgments.(19) Human References Fluency Rate (HRFR): Rate of the top-ratedhuman references according to 3rd-party human judgments.In this paper, we include preliminary results for the followingmetrics: Multimodal Prompt Completion Efficiency (MPCE), HumanInterpretation Efficiency of Machine Communication (HIEMC), andAgent Pointing Success Rate (APSR), in addition to the illustrationsof generated referring expressions by each of the IVA and sub-ject, IVA’s ability to disambiguate, human’s ability to correct IVA’smisunderstanding, the impact of deictic gesture on interlocutors’understanding, and IVA’s dialogue history.5 PRELIMINARY RESULTS5.1 Automated Quantitative EvaluationIn a preliminary study, constituting the complete 10-scene inter-action with a sample test subject, we logged 330 different humanreferring expressions, including 141 pointing-only references fortarget object identification, 141 pointing-only references for targetlocation identification, 33 multimodal REs, and 15 linguistic REs, asdepicted in Fig. 6 a. Linguistically, as shown in Fig. 6 c, 84% REs aretransitive attributive references (e.g., move the red block to the plate ).Similarly, we logged 330 different machine referring expressions,including 141 pointing-only REs to the referents, 174 multimodalREs, and 15 linguistic REs, as depicted in Fig. 6 b. Consequently, weused these logged data to obtain preliminary results regarding theease of agent disambiguation, human recognition of agent intentfrom verbal and non-verbal behavior, and overall interaction.In Fig. 5 a, interlocutors’ moves, including actions, speech, andgestures, are logged with their timestamps. We see that the hu-man started pointing to the focus object ( BlueBlock1 ) and movingit behind YellowBlock1 . Logs also include the positions of each, dis-tance from agent to each, and the agent’s action after pointing toeach of the two blocks. The human then used language only (“Pickup the yellow block”) to instruct Diana to pick up YellowBlock2 .This instruction required Diana ask for disambiguation: “Whichyellow block?”, as there are two yellow blocks in the scene. Todisambiguate, the human uses pointing, and the object, its position,(a)(b)Figure 5: (a) Trial sample of Diana’s ability to disambiguatethe target; (b) Trial sample of human’s ability to correct mis-understanding.and distance are logged, along with Diana’s action. This illustratesDiana’s capability to clearly disambiguate the object the humanreferenced and efficiently execute the human’s prompt as shown inFig. 7 aandb, which leads to bidirectional communicative efficiency,with both human and agent combining verbal and non-verbal be-havior. When Diana has a misunderstanding, the human can correctit using language, deictic gesture, or both (Fig. 5 b). Diana confirmsthat disambiguation was successful using deictic gesture to thecorrect object.In human-human interactions, pointing reduces cognitive load[17]. Similarly, this is observed with the IVA as shown in the con-tingency table, Table 2. 
The agent shows her understanding of thehuman’s intended meaning when providing a sequence of pointingREs or co-gestural speech (Multimodal REs) without asking for dis-ambiguation by pointing to the referents; nonetheless, using onlyspeech for communication requires the agent to ask for additionalinformation, i.e., gestures, to clearly identify the target and point toit as depicted in Fig. 7 c. We see that a relationship exists betweenthe modalities used and the level of ambiguity, such that use ofpointing significantly reduces the ambiguity level of the prompt(p-value <0.001using Fisher’s exact test [14]).In addition to language and deictic gesture, prior actions con-tribute to building speakers’ knowledge of descriptions of objectsas defined by Grice’s maxim of quantity [ 19]. Therefore, we inte-grated a dialogue history to the IVA. This stack stores all requestedactions along with target objects, and accomodates interpretationsof verbal, gestural, and multimodal inputs. Fig. 8 shows the numberof actions in the dialogue history by the end of each scene in thepreliminary data. These stored actions are available for use by both5581582583584585586587588589590591592593594595596597598599600601602603604605606607608609610611612613614615616617618619620621622623624625626627628629630631632633634635636637638Conference acronym ’XX, June 03–05, 2018, Woodstock, NY Anon.639640641642643644645646647648649650651652653654655656657658659660661662663664665666667668669670671672673674675676677678679680681682683684685686687688689690691692693694695696(a)(b)(c)Figure 6: Preliminary results on (a) Human generated REs (b)Diana generated REs: categories and quantity (c) Categoriesof human verbal REs.Table 2: Contingency table of human RE ambiguity andmodalities used: # ambiguous REs by modality typeModality Did Agent Disambiguate?No YesMultimodal RE 15 0Pointing Only RE 141 0Speech Only RE 0 33p-value <2.2e−16humans and the IVA to refer to objects that may have previouslybeen interacted with, as described in Sec. 3.1.Table 3 shows how the IVA’s dialogue history is constructed andrevisited to understand the human’s intents within a shared space.(a)(b)(c)Figure 7: (a) Human Interpretation Efficiency of MachineCommunication (Metric #3: HIEMC); (b) Multimodal PromptCompletion Efficiency (Metric #1: MPCE) by Diana; (c) AgentPointing Success Rate (Metric #10: APSR).After recognizing the human’s intent and executing the parsed-outprompt, the IVA pushes the action and referent (extracted from thePLF of the prompt) to two separate stacks (an actions stack anda referents stack) as shown by Table 3, #1–3. 
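A minimal Python sketch of such a history, assuming a flat (action, referent) string representation instead of full PLFs, is given below; DialogueHistory and resolve_historical are illustrative names for this sketch rather than the system's actual classes.

class DialogueHistory:
    # Two aligned stacks: executed actions and the referents they applied to,
    # a simplified stand-in for the PLF-derived entries of Table 3.
    def __init__(self):
        self.actions = []
        self.referents = []

    def push(self, action, referent):
        self.actions.append(action)
        self.referents.append(referent)

    def resolve_historical(self, past_action):
        # e.g. "grasp the block you just slid": scan the history from the
        # most recent entry and return the referent of the matching action.
        for act, ref in zip(reversed(self.actions), reversed(self.referents)):
            if act == past_action:
                return ref
        return None  # no match -> the agent asks for clarification

history = DialogueHistory()
history.push("put", "GreenBlock1")            # Table 3, #1
history.push("put", "RedBlock1")              # Table 3, #2
history.push("slide", "GreenBlock2")          # Table 3, #3
target = history.resolve_historical("slide")  # -> "GreenBlock2"
history.push("grasp", target)                 # Table 3, #4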
If the human uses amention of a previously executed action to indicate an object asin Table 3, #4 (“grasp the block you just slid”), the IVA visits thedialogue history to 1) retrieve the most recently referenced objectthat is relevant to the provided action (in this case, GreenBlock2 , asit satisfies the adj_slid(·)predicate), 2) push the new most recentaction and referent onto the stack for future retrieval if necessary.6697698699700701702703704705706707708709710711712713714715716717718719720721722723724725726727728729730731732733734735736737738739740741742743744745746747748749750751752753754A Methodology for Evaluating Multimodal Referring Expression Generation for Embodied Virtual Agents Conference acronym ’XX, June 03–05, 2018, Woodstock, NY755756757758759760761762763764765766767768769770771772773774775776777778779780781782783784785786787788789790791792793794795796797798799800801802803804805806807808809810811812Table 3: Sample of dialogue history, including previously mentioned actions and related objects after executing multimodal(co-gesure speech) or unimodal (speech only or pointing only) prompts.No. Modality PLF Actions Stack Referents Stack4 Speech Only grasp(the(adj_slid((block))) grasp put put GreenBlock2 RedBlock1 GreenBlock13 Multimodal slide(GreenBlock 2;left(the(plate))) slide put put GreenBlock2 RedBlock1 GreenBlock12 Pointing Only put(RedBlock 1;left(the(plate))) put put RedBlock1 GreenBlock11 Pointing Only put(GreenBlock 1;<0.5919505; 1.12487;−0.3801433 >) put GreenBlock1Figure 8: IVA’s dialogue history length at end of each scene.6 FUTURE EVALUATIONA larger study is preparation with a goal to collect data from roughly150 participants who use REs of different types and strategies whilecollaborating with Diana to perform the task described above. Eachparticipant views 10 scenes to refer to 10 randomly placed targetobjects, resulting in a total of 15,000 samples and recorded videos.Recorded video will consist of screen captures showing the humaninstructions as they are rendered in the scene, but direct video ofthe participants will not be collected. The gathered data will thenbe used to train generative models (e.g., fine-tuning an open-sourcelarge language model such as LLaMA [ 60] or similar) to producecontextually correct and situationally fluent REs that combine lan-guage and gesture. These REs will be evaluated according to themetrics discussed above, as well as human judgments as describedbelow.6.1 Human EvaluationTo evaluate the success of multimodal referring expression gen-eration (MREG) models, two human-based experiments will beconducted using crowdsourcing platforms such as Amazon Me-chanical Turk (AMT). We propose two primary criteria to assesshow generative modules imbued with situational awareness andthe ability to prompt non-verbal behavior could be compared withhumans’ generation capabilities. Criterion 1: how well the agent-generated strategies qualitatively compared to humans-generatedstrategies, as evaluated using a preference ordering method; Crite-rion 2: how well the agent-generated multimodal references quanti-tatively compared to humans-generated multimodal references, asevaluated using task completion. Fig. 9 shows the MREG evaluationframework including the design, participants and procedures.Figure 9: Crowdsourcing framework for evaluating multi-modal referring expression generation models.6.2 Study DesignHuman MREs will be selected from the data gathered according tothe strategy outlined in Sec. 4.1. 
These will be compared with REsgenerated by the virtual agent when driven by a generative modeltrained over the human data. A total of 2,800 videos (7 references ×10 blocks×20 configurations×2 agents—human and Diana) willbe collected. The 7 referencing strategies for each target object willuse pointing only once, speech only three times, and a multimodalensemble three times. This follows the pattern established for datacollection in Krishnaswamy and Putejovsky’s EMRE dataset [ 30]which allows for variability in the language used in linguistic ormultimodal REs. Videos will be used in a set of AMT human intelli-gence tasks (HITs), where each HIT will involve workers rating 28videos for both fluency and clarity, including 7 machine generatedREs and 7 human REs, for a total of 100 HITs. Each HIT will be com-pleted by 10 workers, for a total of 1,000 HITs and 28,000 individualjudgments (2,000 for each individual RE in the dataset). Recruited7813814815816817818819820821822823824825826827828829830831832833834835836837838839840841842843844845846847848849850851852853854855856857858859860861862863864865866867868869870Conference acronym ’XX, June 03–05, 2018, Woodstock, NY Anon.871872873874875876877878879880881882883884885886887888889890891892893894895896897898899900901902903904905906907908909910911912913914915916917918919920921922923924925926927928Figure 10: Each set in the HIT includes two tasks for quanti-tative and qualitative evaluation of human REs and IVA REs.workers will be fluent English speakers between 18 and 60 yearsold and be given 15 minutes for each task while being compensatedfor their time via the platform.Each HIT will require workers to evaluate 2 sets of 14 videosaccording to both the aforementioned criteria (Sec. 6.1). Each setwill contain 7 videos of human REs and 7 of machine-generated REs.Workers will be informed whether the descriptions are generatedby humans or by the embodied agent. As shown in Fig. 10, firstparticipants will be asked to rate the “fluency” of each descriptionin the video using a Likert-type scale (from 5—best—to 1—worst).Then they will be asked to locate the target object that is mentionedby the video, which will be compared to the actual object that wasintended to be referenced, as stored in the dataset. This assessesthe correctness of the referring expression: does a human listenercorrectly retrieve the object that was intended to be referenced,and how do verbal and non-verbal signals each contribute to theability to correctly retrieve the object from the referring expressionprovided?7 CONCLUSIONAs interactive agents become more widespread in everyday use,developers will need principled ways of evaluating their behavior.Modern generative large language models already demand newmethods of evaluation beyond metrics such as accuracy, precision,and recall on benchmark datasets. Factors such as fluency, reliabil-ity, correctability, and ease of use must be taken into account. Thisis doubly the case when non-linguistic modalities are involved, aswould be the case with embodied IVAs. In this paper, we proposeda quantitative and qualitative evaluation framework to assess thequality of generated multimodal referring expressions, includinglanguage, gesture, and actions grounded in a shared virtual environ-ment. 
We developed an instance of an IVA for an object referencingtask designed to elicit multimodal referring expressions from hu-man interlocutors and developed a set of metrics for evaluatingthe quality of referring expressions that apply equally to those pro-duced by both humans and humanoid IVAs using combined verbaland non-verbal information. We showed preliminary results fromnaive users of the experimental platform, and analyzed system out-puts based on a subset of our proposed metrics to showcase theirutility for evaluating the contribution of non-verbal informationtoward bidirectional interpretation and disambiguation of definitedescriptions of objects in context. We also detailed how our pre-liminary study will be expanded and scaled up. Our frameworktargets both timing and fluency of the interaction and proposesa set of qualitative and quantitative metrics that we hope will bebeneficial for researchers in the IVA and multimodal interactioncommunities to assess dialogue and behavior generation strategiesfor multimodal interaction systems.REFERENCES[1] Julia Albath, Jennifer L Leopold, Chaman L Sabharwal, and Anne M Maglia. 2010.RCC-3D: Qualitative Spatial Reasoning in 3D.. In CAINE . 74–79.[2]Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric forMT evaluation with improved correlation with human judgments. In Proceedingsof the acl workshop on intrinsic and extrinsic evaluation measures for machinetranslation and/or summarization . 65–72.[3] Anja Belz and Albert Gatt. 2008. Intrinsic vs. extrinsic evaluation measures forreferring expression generation. In Proceedings of ACL-08: HLT, Short Papers .197–200.[4] Emily M Bender and Alexander Koller. 2020. Climbing towards NLU: On meaning,form, and understanding in the age of data. In Proceedings of the 58th AnnualMeeting of the Association for Computational Linguistics . 5185–5198.8929930931932933934935936937938939940941942943944945946947948949950951952953954955956957958959960961962963964965966967968969970971972973974975976977978979980981982983984985986A Methodology for Evaluating Multimodal Referring Expression Generation for Embodied Virtual Agents Conference acronym ’XX, June 03–05, 2018, Woodstock, NY987988989990991992993994995996997998999100010011002100310041005100610071008100910101011101210131014101510161017101810191020102110221023102410251026102710281029103010311032103310341035103610371038103910401041104210431044[5] Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2019.Touchdown: Natural language navigation and spatial reasoning in visual streetenvironments. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition . 12538–12547.[6]Zhenfang Chen, Peng Wang, Lin Ma, Kwan-Yee K Wong, and Qi Wu. 2020.Cops-ref: A new dataset and task on compositional referring expression com-prehension. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition . 10086–10095.[7] Herbert H Clark, Robert Schreuder, and Samuel Buttrick. 1983. Common groundat the understanding of demonstrative reference. Journal of verbal learning andverbal behavior 22, 2 (1983), 245–258.[8] Robert Dale and Ehud Reiter. 1995. Computational interpretations of the Griceanmaxims in the generation of referring expressions. Cognitive science 19, 2 (1995),233–263.[9] Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle,and Aaron Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. 
In Proceedings of the IEEE Conference on Computer Vision andPattern Recognition . 5503–5512.[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT:Pre-training of Deep Bidirectional Transformers for Language Understanding.InProceedings of the 2019 Conference of the North American Chapter of the Asso-ciation for Computational Linguistics: Human Language Technologies, Volume 1(Long and Short Papers) . Association for Computational Linguistics, Minneapolis,Minnesota, 4171–4186. https://doi.org/10.18653/v1/N19-1423[11] Lee R Dice. 1945. Measures of the amount of ecologic association betweenspecies. Ecology 26, 3 (1945), 297–302.[12] Fethiye Irmak Doğan, Sinan Kalkan, and Iolanda Leite. 2019. Learning to generateunambiguous spatial referring expressions for real-world environments. In 2019IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) . IEEE,4992–4999.[13] Rui Fang, Malcolm Doering, and Joyce Y Chai. 2015. Embodied collaborativereferring expression generation in situated human-robot interaction. In Proceed-ings of the Tenth Annual ACM/IEEE International Conference on Human-RobotInteraction . 271–278.[14] Ronald Aylmer Fisher et al .1936. Statistical methods for research workers.Statistical methods for research workers. 6th Ed (1936).[15] Albert Gatt, Anja Belz, and Eric Kow. 2009. The TUNA-REG Challenge 2009:Overview and evaluation results. Association for Computational Linguistics.[16] Albert Gatt and Kees Van Deemter. 2007. Lexical choice and conceptual perspec-tive in the generation of plural referring expressions. Journal of Logic, Languageand Information 16, 4 (2007), 423–443.[17] Susan Goldin-Meadow. 1999. The role of gesture in communication and thinking.Trends in cognitive sciences 3, 11 (1999), 419–429.[18] Peter Gorniak and Deb Roy. 2004. Grounded semantic composition for visualscenes. Journal of Artificial Intelligence Research 21 (2004), 429–470.[19] Herbert P Grice. 1975. Logic and conversation. In Speech acts . Brill, 41–58.[20] Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014.Referitgame: Referring to objects in photographs of natural scenes. In Proceed-ings of the 2014 conference on empirical methods in natural language processing(EMNLP) . 787–798.[21] Robyn Kozierok, John Aberdeen, Cheryl Clark, Christopher Garay, Bradley Good-man, Tonia Korves, Lynette Hirschman, Patricia L McDermott, and Matthew WPeterson. 2021. Assessing open-ended human-computer collaboration systems:applying a hallmarks approach. Frontiers in artificial intelligence 4 (2021), 670009.[22] Emiel Krahmer and Ielka van der Sluis. 2003. A new model for generatingmultimodal referring expressions. In Proceedings of the ENLG , Vol. 3. 47–54.[23] Alfred Kranstedt, Stefan Kopp, and Ipke Wachsmuth. 2002. Murml: A multi-modal utterance representation markup language for conversational agents. InAAMAS’02 Workshop Embodied conversational agents-let’s specify and evaluatethem![24] Alfred Kranstedt, Andy Lücking, Thies Pfeiffer, Hannes Rieser, and IpkeWachsmuth. 2006. Deixis: How to determine demonstrated objects using apointing cone. In Gesture in Human-Computer Interaction and Simulation: 6thInternational Gesture Workshop, GW 2005, Berder Island, France, May 18-20, 2005,Revised Selected Papers 6 . Springer, 300–311.[25] Nikhil Krishnaswamy and Nada Alalyani. 2021. Embodied Multimodal Agents toBridge the Understanding Gap. In Proceedings of the First Workshop on BridgingHuman–Computer Interaction and Natural Language Processing . 
Association forComputational Linguistics, Online, 41–46. https://aclanthology.org/2021.hcinlp-1.7[26] Nikhil Krishnaswamy, Pradyumna Narayana, Rahul Bangar, Kyeongmin Rim,Dhruva Patil, David McNeely-White, Jaime Ruiz, Bruce Draper, Ross Beveridge,and James Pustejovsky. 2020. Diana’s World: A Situated Multimodal InteractiveAgent. In Proceedings of the AAAI Conference on Artificial Intelligence , Vol. 34.13618–13619.[27] Nikhil Krishnaswamy, William Pickard, Brittany Cates, Nathaniel Blanchard,and James Pustejovsky. 2022. The VoxWorld platform for multimodal embod-ied agents. In Proceedings of the Thirteenth Language Resources and EvaluationConference . 1529–1541.[28] Nikhil Krishnaswamy and James Pustejovsky. 2016. VoxSim: A Visual Plat-form for Modeling Motion Language. In Proceedings of COLING 2016, the 26thInternational Conference on Computational Linguistics: Technical Papers . ACL.[29] Nikhil Krishnaswamy and James Pustejovsky. 2018. An evaluation frameworkfor multimodal interaction. In Proceedings of the Eleventh International Conferenceon Language Resources and Evaluation (LREC 2018) .[30] Nikhil Krishnaswamy and James Pustejovsky. 2019. Generating a novel datasetof multimodal referring expressions. In Proceedings of the 13th InternationalConference on Computational Semantics-Short Papers . 44–51.[31] Nikhil Krishnaswamy and James Pustejovsky. 2021. The Role of Embodimentand Simulation in Evaluating HCI: Experiments and Evaluation. In InternationalConference on Human-Computer Interaction . 220–232.[32] Lars Kunze, Tom Williams, Nick Hawes, and Matthias Scheutz. 2017. Spatialreferring expression generation for hri: Algorithms and evaluation framework.In2017 AAAI Fall Symposium Series .[33] Kiyong Lee, Nikhil Krishnaswamy, and James Pustejovsky. 2023. An AbstractSpecification of VoxML as an Annotation Language. In Workshop on InteroperableSemantic Annotation (ISA-19) . 66.[34] Vladimir I Levenshtein et al .1966. Binary codes capable of correcting deletions,insertions, and reversals. In Soviet physics doklady , Vol. 10. Soviet Union, 707–710.[35] Xinghang Li, Di Guo, Huaping Liu, and Fuchun Sun. 2022. Reve-ce: Remoteembodied visual referring expression in continuous environment. IEEE Roboticsand Automation Letters 7, 2 (2022), 1494–1501.[36] Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summariesusing n-gram co-occurrence statistics. In Proceedings of the 2003 human lan-guage technology conference of the North American chapter of the association forcomputational linguistics . 150–157.[37] Aly Magassouba, Komei Sugiura, and Hisashi Kawai. 2020. Multimodal attentionbranch network for perspective-free sentence generation. In Conference on RobotLearning . PMLR, 76–85.[38] Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard,and David McClosky. 2014. The Stanford CoreNLP Natural Language Process-ing Toolkit. In Proceedings of 52nd Annual Meeting of the Association for Com-putational Linguistics: System Demonstrations . Association for ComputationalLinguistics, Baltimore, Maryland, 55–60. https://doi.org/10.3115/v1/P14-5010[39] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille,and Kevin Murphy. 2016. Generation and comprehension of unambiguous objectdescriptions. In Proceedings of the IEEE conference on computer vision and patternrecognition . 
11–20.[40] David G McNeely-White, Francisco R Ortega, J Ross Beveridge, Bruce A Draper,Rahul Bangar, Dhruva Patil, James Pustejovsky, Nikhil Krishnaswamy, Kyeong-min Rim, Jaime Ruiz, et al .2019. User-aware shared perception for embodiedagents. In 2019 IEEE International Conference on Humanized Computing andCommunication (HCC) . IEEE, 46–51.[41] David McNeill. 1985. So you think gestures are nonverbal? Psychological review92, 3 (1985), 350.[42] Alessandro Moschitti, Bo Pang, and Walter Daelemans. 2014. Proceedings ofthe 2014 Conference on Empirical Methods in Natural Language Processing(EMNLP). In Proceedings of the 2014 Conference on Empirical Methods in NaturalLanguage Processing (EMNLP) .[43] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: amethod for automatic evaluation of machine translation. In Proceedings of the40th annual meeting of the Association for Computational Linguistics . 311–318.[44] Rebecca Passonneau. 2006. Measuring agreement on set-valued items (MASI)for semantic and pragmatic annotation. (2006).[45] Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hock-enmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings ofthe IEEE international conference on computer vision . 2641–2649.[46] James Pustejovsky and Nikhil Krishnaswamy. 2016. VoxML: A VisualizationModeling Language. Proceedings of LREC (2016).[47] James Pustejovsky and Nikhil Krishnaswamy. 2020. Embodied human-computerinteractions through situated grounding. In Proceedings of the 20th ACM Interna-tional Conference on Intelligent Virtual Agents . 1–3.[48] James Pustejovsky and Nikhil Krishnaswamy. 2021. Embodied human computerinteraction. KI-Künstliche Intelligenz 35, 3-4 (2021), 307–327.[49] James Pustejovsky and Nikhil Krishnaswamy. 2022. Multimodal semanticsfor affordances and actions. In International Conference on Human-ComputerInteraction . Springer, 137–160.[50] James Pustejovsky, Nikhil Krishnaswamy, and Tuan Do. 2017. Object Embodi-ment in a Multimodal Simulation. In AAAI Spring Symposium: Interactive Multi-sensory Object Perception for Embodied Agents .[51] James Pustejovsky, Nikhil Krishnaswamy, Bruce Draper, Pradyumna Narayana,and Rahul Bangar. 2017. Creating common ground through multimodal sim-ulations. In Proceedings of the IWCS workshop on Foundations of Situated andMultimodal Communication .91045104610471048104910501051105210531054105510561057105810591060106110621063106410651066106710681069107010711072107310741075107610771078107910801081108210831084108510861087108810891090109110921093109410951096109710981099110011011102Conference acronym ’XX, June 03–05, 2018, Woodstock, NY Anon.1103110411051106110711081109111011111112111311141115111611171118111911201121112211231124112511261127112811291130113111321133113411351136113711381139114011411142114311441145114611471148114911501151115211531154115511561157115811591160[52] Yuankai Qi, Qi Wu, Peter Anderson, Xin Wang, William Yang Wang, ChunhuaShen, and Anton van den Hengel. 2020. Reverie: Remote embodied visual re-ferring expression in real indoor environments. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition . 9982–9991.[53] Francis Quek, David McNeill, Robert Bryll, Susan Duncan, Xin-Feng Ma, CemilKirbas, Karl E McCullough, and Rashid Ansari. 2002. Multimodal human dis-course: gesture and speech. 
ACM Transactions on Computer-Human Interaction(TOCHI) 9, 3 (2002), 171–193.[54] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, IlyaSutskever, et al .2019. Language models are unsupervised multitask learners.OpenAI blog 1, 8 (2019), 9.[55] Boris Schauerte and Gernot A Fink. 2010. Focusing computational visual at-tention in multi-modal human-robot interaction. In International conference onmultimodal interfaces and the workshop on machine learning for multimodalinteraction . 1–8.[56] Boris Schauerte, Jan Richarz, and Gernot A Fink. 2010. Saliency-based identi-fication and recognition of pointed-at objects. In 2010 IEEE/RSJ InternationalConference on Intelligent Robots and Systems . IEEE, 4638–4643.[57] Mohit Shridhar, Dixant Mittal, and David Hsu. 2020. INGRESS: Interactive visualgrounding of referring expressions. The International Journal of Robotics Research39, 2-3 (2020), 217–232.[58] Dadhichi Shukla, Ozgur Erkent, and Justus Piater. 2015. Probabilistic detection ofpointing directions for human-robot interaction. In 2015 international conferenceon digital image computing: techniques and applications (DICTA) . IEEE, 1–8.[59] Dadhichi Shukla, Özgür Erkent, and Justus Piater. 2016. A multi-view handgesture rgb-d dataset for human-robot interaction scenarios. In 2016 25th IEEEinternational symposium on robot and human interactive communication (RO-MAN) . IEEE, 1084–1091.[60] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-AnneLachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, FaisalAzhar, et al .2023. Llama: Open and efficient foundation language models. arXivpreprint arXiv:2302.13971 (2023).[61] Kees Van Deemter. 2006. Generating referring expressions that involve gradableproperties. Computational Linguistics 32, 2 (2006), 195–222.[62] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider:Consensus-based image description evaluation. In Proceedings of the IEEE confer-ence on computer vision and pattern recognition . 4566–4575.[63] Jette Viethen and Robert Dale. 2006. Algorithms for generating referring ex-pressions: do they do what people do?. In Proceedings of the fourth internationalnatural language generation conference . 63–70.[64] Jette Viethen and Robert Dale. 2008. The use of spatial relations in referringexpression generation. In Proceedings of the Fifth International Natural LanguageGeneration Conference . 59–67.[65] Isaac Wang, Mohtadi Ben Fraj, Pradyumna Narayana, Dhruva Patil, GururajMulay, Rahul Bangar, J Ross Beveridge, Bruce A Draper, and Jaime Ruiz. 2017.EGGNOG: A continuous, multi-modal data set of naturally occurring gestureswith ground truth labels. In 2017 12th IEEE International Conference on AutomaticFace & Gesture Recognition (FG 2017) . IEEE, 414–421.[66] Isaac Wang, Jesse Smith, and Jaime Ruiz. 2019. Exploring virtual agents foraugmented reality. In Proceedings of the 2019 CHI Conference on Human Factorsin Computing Systems . 1–12.[67] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg.2016. Modeling context in referring expressions. In European Conference onComputer Vision . Springer, 69–85.Received 21 July 202310 |
Uz5CrjdU_z | Towards the generation of synchronized and believablenon-verbal facial behaviors of a talking virtual agentAlice Delbosc∗†Davi, The HumanizersPuteaux, Francealice.delbosc@lis-lab.frMagalie OchsCNRS, LIS, Aix-Marseille UniversityMarseille, Francemagalie.ochs@lis-lab.frNicolas SabouretCNRS, LISN, Paris-Saclay UniversityOrsay, Francenicolas.sabouret@universite-paris-saclay.frBrian RavenetCNRS, LISN, Paris-Saclay UniversityOrsay, Francebrian.ravenet@limsi.frStéphane AyacheCNRS, LIS, Aix-Marseille UniversityMarseille, Francestephane.ayache@lis-lab.frABSTRACTThis paper introduces a new model to generate rhythmically rel-evant non-verbal facial behaviors for virtual agents while theyspeak. The model demonstrates perceived performance compara-ble to behaviors directly extracted from the data and replayed ona virtual agent, in terms of synchronization with speech and be-lievability. Interestingly, we found that training the model withtwo different sets of data, instead of one, did not necessarily im-prove its performance. The expressiveness of the people in thedataset and the shooting conditions are key elements. We alsoshow that employing an adversarial model, in which fabricatedfake examples are introduced during the training phase, increasesthe perception of synchronization with speech. A collection ofvideos demonstrating the results and code can be accessed at:https://github.com/aldelb/non_verbal_facial_animation.CCS CONCEPTS•Computing methodologies →Neural networks ;Animation .KEYWORDSNon-verbal behavior, behavior generation, embodied conversa-tional agent, neural networks, adversarial learning, encoder-decoderACM Reference Format:Alice Delbosc, Magalie Ochs, Nicolas Sabouret, Brian Ravenet, and StéphaneAyache. 2023. Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent. In INTERNATIONALCONFERENCE ON MULTIMODAL INTERACTION (ICMI ’23 Companion),October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 10 pages.https://doi.org/10.1145/3610661.3616547∗Also with CNRS, LIS, Aix-Marseille University.†Also with CNRS, LISN, Paris-Saclay University.Permission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than theauthor(s) must be honored. Abstracting with credit is permitted. To copy otherwise, orrepublish, to post on servers or to redistribute to lists, requires prior specific permissionand/or a fee. Request permissions from permissions@acm.org.Conference acronym ’XX, June 03–05, 2018, Woodstock, NY©2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.ACM ISBN 978-1-4503-XXXX-X/18/06. . . $15.00https://doi.org/10.1145/3610661.36165471 INTRODUCTIONInterest in virtual agents has grown in the last few years, theirapplications are multiplying in games or virtual environments,for instance in the medical domain [ 1,50]. However, these virtualagents are not yet widely used in practice, partly because of theirlack of natural interaction, which discourages user engagement [ 5].In order to address this issue, Cassell [8]propose to integratevarious natural modalities of human behavior into the virtual agent,including speech, facial expressions, hand gestures, body gestures,and more. 
Facial expressions, gestures, and gaze direction are ex-amples of non-verbal behavior, encompassing actions distinct fromspeech. While psychologists argue about the percentage of informa-tion non-verbally exchanged during an interaction [ 77], it is clearthat the non-verbal channel plays an important role in understand-ing human behavior.Several studies show that facial expressions, gaze direction, andhead movements are essential non-verbal behaviors that play acrucial role in conveying a speaker’s intentions and emotional state[49], and could even improve the way a virtual agent is perceivedin general [ 6,42]. Munhall et al . [46] also showed that the rhythmicbeat of head movements increases speech intelligibility. In the sameway, Tinwell et al . [63] showed that "uncanniness” is increasedfor a character with a perceived lack of facial expressions. In thispaper, we present a machine-learning based model to generate non-verbal facial behaviors that take into account facial expressions,head movements and gaze direction.The process of generating non-verbal behavior for a specificspeech can be approached from different angles, such as generatingnatural and believable behaviors, generating behaviors that arerhythmically synchronized with the speech, adapted to the intona-tion or appropriate to the semantic content of the speech. In thiswork, we chose to focus on generating rhythmically relevant andbelievable non-verbal behaviors for the virtual agent as he speaks.This involves creating a framework that generates non-verbal fea-tures that align with the rhythmic patterns of speech.The paper is organized as follows. We provide an overview ofexisting works in section 2, followed by the formulation of thelearning problem in section 3. After presenting the datasets usedand their processing methodologies in section 4, we describe theused architecture in section 5. In section 6 we present our researchConference acronym ’XX, June 03–05, 2018, Woodstock, NY Delbosc, et al.question and hypotheses. The section 7 is dedicated to our eval-uation methods and results. Finally, we conclude the paper andintroduce perspectives in section 8.2 RELATED WORKThe research works on behavior generation can be characterizedby various aspects, such as the adopted approach (rule-based ordata-driven), the dataset characteristics, the inputs, and outputsof the model, and more. To provide a structured overview of thestate-of-the-art, we organize it as follows: in section 2.1, we presentexamples of rule-based models; in section 2.2, we describe data-driven models including deep learning models; in section 2.3 wepresent the different possible input for the models and their impacton the generated behaviors; and in section 2.4, we discuss theoutput’s representation of the models.2.1 Rule-based approachesThe first approaches explored for the automatic generation of vir-tual characters’ behavior were based on sets of rules. The rulesdescribe the mapping of words or speech features to a facial ex-pression or gesture. One of the first works to explore the latentrelationship between speech and gesture to generate realistic ani-mation was Cassell et al . [9]with Animated Conversation . Kopp andWachsmuth [33] proposed a model-based approach for generatingcomplex multimodal utterances ( i.e., speech and gesture) from XMLspecifications.The development of new rule-based systems often required thedevelopment of a new domain-specific language (DSL). 
These DSLswere often incompatible with each other, even if the systems solvedsimilar or overlapping goals [ 49]. A group of researchers developeda unified language for generating multimodal behaviors for virtualagents, called behavior Markup Language (BML). BML has becomethe standard format for rule-based systems, and many other workshave followed using this format [43, 56].It is important to point out that these approaches focused onintention. They were highly effective in terms of communication,but not very natural, since they mainly inserted predefined ani-mations [ 49]. More recent research has therefore begun to exploredata-driven systems.2.2 Data-driven approachesData-driven approaches do not depend on experts in animationand linguistics. They learn the relationships between speech andmovements or facial expressions from data. They are born out ofproof of a strong correlation between an individual’s speech andher/his non-verbal behavior [ 31,44]. For example, Yehia et al . [70]and Honda [27] show that pitch is correlated with head motions.Mariooryad and Busso [42] proposed to replace rules with Dy-namic Bayesian Networks (DBN). In Chiu and Marsella [10], aGaussian Process Latent Variable Model (GPLVM) has been usedto learn a low-dimensional layer and select the most likely move-ments given the speech as input. Recently, Yang et al . [69] proposeda motion graph-based statistical system that generates gestures andother body movements for dyadic conversations. Hidden MarkovModels (HMM) were used to select the most likely body motion[39,43] or head motion [ 60] based on speech. However, these re-search works are still based on an animation dictionary, limiting thediversity of the generated movements. Moreover, in these models,there is only one motion sequence for an input audio signal. It sup-ports the hypothesis that the speech-to-motion correspondence isinjective, but the correspondence between acoustic speech featuresand non-verbal behavior is a “One-To-Many” problem [38].More recently, deep neural networks have demonstrated theirsuperiority in learning from large datasets by generating a sequenceof features for non-verbal behavior. The main objectives of thesedeep learning-based systems are the naturalness and the synchro-nization between audio and speech. For example, Kucherenko et al .[35] proposed an encoder-decoder speech to motion. However, thetraditional deterministic generative models employed in this ap-proach often suffer from a limitation: they tend to generate averagemotion representations [ 36]. To address this limitation, researchershave explored the integration of probabilistic components into theirgenerative models. Notably, popular probabilistic models such asGenerative Adversarial Networks (GANs) [ 18,58,62], VariationalAutoencoders (VAEs) [ 21,24,40], and diffusion models [ 12,13,73]have been employed.GANs [ 20] can be used to convert acoustic speech features intonon-verbal behaviors while preserving the diversity and multiplenature of the generated non-verbal behavior. However, GANs arereputed to be unstable and suffer from a specific problem called thecollapse mode. The collapse mode is a very common failure thatcauses the model to generate only one behavior. Numerous of workhas been done to improve their training [2, 45].In comparison with rule-based approaches, data-driven approacheshave made advancements in terms of naturalness. 
The generatedbehaviors played on virtual agent have shown a perceived natural-ness that, in certain work, surpasses the actual behaviors. However,several limitations persist, particularly concerning the perceivedappropriateness of these behaviors in relation to the accompanyingspeech, still quite far away from the ground truth [38].Moreover, even though numerous gestural properties can stillbe inferred from speech, the generated behaviors will unavoidablyoverlap with the audio and text channels [ 37]. That implies thatdata-driven approaches are significantly less communicative thanrule-based approaches.New architecture has recently begun to combine these two ap-proaches, in an attempt to take advantage of the benefits of bothwhile minimizing the drawbacks. For example, the work of Zhuanget al. [76] uses a transformer-based encoder-decoder for face ani-mation and a motion graph retrieval module for body animation.Another example is the work of Ferstl et al . [19] , who generatesparameters such as acceleration or velocity of motion from theaudio, before finding a corresponding motion in a database.As we chose to focus on the generation of behaviors that arerhythmically coherent and believable, regardless of semantic ap-propriateness, we chose a data-driven approach. Given the perfor-mance of GANs in the area of non-verbal behavior generation, weimplemented an adversarial model, more precisely a WassersteinGenerative Adversarial Network (WGAN).2.3 Inputs of the modelsInputs to motion generation models can take the form of audioinput [26, 34], textual input [4, 72], or both [17, 71].Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent Conference acronym ’XX, June 03–05, 2018, Woodstock, NYKucherenko et al . [37] showed that text and audio differ in theiruse, the time-aligned text helps predict gesture semantics, andprosodic audio features help predict gesture phase. Early deep learn-ing systems ignored semantics, taking only audio inputs. Theseapproaches using only audio can produce well-synchronized move-ments, which correspond to rhythmic gestures, but the absence oftext transcription implies that they will lack structure and context,such as semantic meaning [ 49]. More recent approaches attemptto integrate semantics to generate meaningful gestures, taking asinput text or audio and text.Other forms of input are used, such as non-linguistic modalities(e.g. interlocutor behavior) [ 28,48] or control input (e.g. style pa-rameters transmitted during model inference) [ 17,23]. The abilityto control body motion based on a specific input signal, such astheir emotional state or a social attitude, can significantly improvethe usability of the method [23].Since our objective in this work is limited to generating behaviorsthat are as rhythmically coherent and credible as possible, we willonly use audio as input for our model.2.4 Outputs of the modelsSpeech-driven facial animation is a process that automatically syn-thesizes speaking characters based on speech signals. The major-ity of work in this field, such as those presented above, creates acorrespondence between audio and behavioral features. Then thebehavioral features are used to animate a character, such as theGreta virtual agent. This is the approach we use in our work.Other systems directly generate face images, often from real in-dividuals, without relying on behavioral features. We are not goingto discuss these models in detail, as their outputs are very differentfrom ours. 
However, we note that the architectures employed arerelatively similar. Vougioukas et al . [65] , Zhou et al . [75] used atemporal GAN, and Kim and Ganapathi [32] used a VAE model.Works that generate behavioral features can generate them invarious categories. Many studies focus on the automatic generationof body movements [ 13,74]. Most head- and/or face-based methodsgenerate either facial animations or head movements exclusively.The generation of facial expressions and head movements poses dis-tinct challenges: head movements exhibit greater diversity acrossindividuals compared to facial expressions. However, it is importantto acknowledge that facial expressions and head movements are in-herently interconnected and synchronized with speech [ 9]. Habibieet al. [25] or Delbosc et al . [11] introduced an adversarial approachfor the automatic generation of facial expressions and head move-ments jointly. Drawing inspiration from these works, our researchfocuses on analyzing facial expressions and head movements in acombined manner, with a representation of facial expressions usingexplainable features, specifically facial action units.Body and head movements are generated using 3D coordinates,ensuring uniformity in their representation. However, the genera-tion of facial expressions offers a range of approaches. They can begenerated directly with the 3D coordinates of the face, like Karraset al. [30] or describe using a model, such as FLAME model [ 41] inJonell et al . [28] , FaceWarehouse model [ 7] in Pham et al . [54] orBasel Face ModelPaysan et al. [52].Our task requires a representation that allows the encoding ofthe facial gestures from video, the simulation of them on a virtualagent, and the possibility to manipulate the generated facial expres-sions. Therefore, we represent the facial expressions using actionunits (AUs) based on the well-known Facial Action Coding System(FACS)[ 15]. Thanks to this representation, we can use Openfaceto extract the facial expression from videos, play our generatedfacial expression on the Greta platform and, in the future, adapt thegenerated action units to express particular social attitude [ 14,64](see section 8). This is why we consider it particularly important torepresent facial expressions with AUs.3 PROBLEM FORMULATIONOur task can be formulated as follows: given a sequence of acousticspeech features Fspeech[0 :T]extracted from a specific segmentof speech input at regular intervals t, the task is to generate thesequence of corresponding behavior θbehavior[0 :T]that a virtualagent should perform while speaking.The sequence θbehavior[0 :T]consists of three components:θhead[0 :T],θgaze[0 :T], andθAU[0 :T], representing head move-ments, gaze orientation, and facial expressions, respectively. Thehead movements θhead[0 :T]and gaze orientation θgaze[0 :T]are specified using 3D coordinates, while the facial expressionsθAU[0 :T]are defined using action units (AUs) based on the Fa-cial Action Coding System (FACS) [ 15]. These notations will beconsistently employed throughout this article.After generating the behaviors, we evaluate them with bothobjective and subjective evaluations. To simulate the generatedbehaviors on a virtual agent, we use the Greta platform [ 53]. 
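To make the formulation above concrete, here is a minimal NumPy sketch of the interface a generator has to satisfy: a frame-aligned speech-feature sequence in, a behavior sequence out, split into head, gaze, and AU components. The 22-dimensional speech vector and 28-dimensional behavior vector anticipate the feature set described in section 4.2; the column ordering of the head/gaze/AU split, the segment length, and the placeholder generator output are assumptions made purely for illustration.

```python
# Minimal sketch of the input/output interface implied by the problem
# formulation; column ordering of the behavior split is hypothetical.
import numpy as np

def split_behavior(theta_behavior: np.ndarray):
    """Split a behavior sequence of shape (T, 28) into the three components
    used in this paper. The assumed layout is: 3 head values, 8 gaze values,
    17 action-unit intensities (ordering not specified by the authors)."""
    theta_head = theta_behavior[:, 0:3]
    theta_gaze = theta_behavior[:, 3:11]
    theta_au = theta_behavior[:, 11:28]
    return theta_head, theta_gaze, theta_au

T, d_speech, d_behavior = 100, 22, 28      # T frames per segment (illustrative)
f_speech = np.zeros((T, d_speech))         # F_speech[0:T]
theta_behavior = np.zeros((T, d_behavior)) # placeholder for G(F_speech)
theta_head, theta_gaze, theta_au = split_behavior(theta_behavior)
assert theta_head.shape == (T, 3) and theta_au.shape == (T, 17)
```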
Thisprocess of generating and evaluating the behaviors is visually de-scribed in figure 1.Figure 1: The generation and evaluation processCompared to the state of the art, the contributions of this workare: (1) a new adversarial model for speech-driven non-verbal fa-cial behavior generation, with facial behavior generation based onaction units; (2) a comparison between using a small amount ofsuitable data and a larger amount of data (adding less suitable data),to train our model; (3) an evaluation of the effects of adding newrelevant fake fabricated examples during the training phase of theadversarial model.4 FACIAL BEHAVIORS DATASETSAmong the main challenges linked to the generation of non-verbalbehaviors, the research community frequently highlights someissues. In particular, the difficulty of finding suitable training data.Various methods, which differ in terms of cost and time require-ments, are available for data collection. On one hand, there areexpensive and time-consuming approaches, such as employingmultiple cameras and motion capture systems. On the other hand,faster but less precise methods involve simple recording techniquesConference acronym ’XX, June 03–05, 2018, Woodstock, NY Delbosc, et al.combined with tools designed to extract the desired features di-rectly from videos. Even if datasets exist, they may be small, notcontain the required features, their quality can be insufficient, etc.In our specific task, we require dataset that emphasize facialrecordings. For anticipating our future work, we also need them tocontain interaction scenarios and various social attitudes. Withouta doubt, behaviors generated through data-driven approaches willinevitably be constrained by the data on which they are trained.For instance, when it comes to generating behaviors based on aparticular social attitude, the ability to generate “angry” behaviorwill rely on the presence of such behavior in the initial dataset. Weutilize two datasets in our research.4.1 Selected datasetsWe utilize the Trueness dataset [ 51], a newly created multimodalcorpus containing 18 interaction scenes on discrimination aware-ness in a forum theater. All interactions are in French. We chose itfor several reasons. Firstly, it contains scenes of interaction, sim-ulating conflicts, played by actors with different social attitudes(denial, aggressive, conciliatory). What’s more, the scenes are shotby actors who make sure they stay in the camera’s field of view, sothe camera only films the face and torso.For a larger amount of data, we employ additionally the Cheesedataset [ 55], selecting 10 interaction scenes involving free conver-sation of students, in French. We chose this dataset because it alsocontains interaction scenes. The difference with Trueness is thatthese aren’t actors, and they aren’t conflict scenes, so their behav-ior is less expressive. This dataset also differs in terms of shootingconditions, the students are located a little further away from thecamera and almost their entire bodies are filmed.For both dataset, each video is divided into two parts, repre-senting the perspectives of the first and second persons of theinteraction. We obtain approximately 3h40 of recording time forTrueness and approximately 5h of recording time for Cheese . Asthese are interaction scenes, we’ve made sure that both parts of thesame interaction belong to the same subset (train set or test set).We aim to investigate the impact of incorporating the seconddataset during training on the model’s performance. 
By utilizingboth datasets, the model will have access to a larger volume oftraining data. However, the Trueness dataset contains more expres-sive facial expressions, and the actors are filmed at closer angles.Consequently, throughout this article, we will refer to the otherdataset, Cheese , as having “farther-away shooting conditions” andbeing “less expressive”.To integrate these data sets into our models, we automaticallyextract behavioral features and acoustic speech features from theexisting videos using state-of-the-art tools, namely Openface [3]andOpenSmile [16].4.2 Features extraction and processingOpenface is a toolkit that detects automatically the head position,gaze orientation, and facial action units of a person on a video.Features are extracted at the frequency of 25 frames per second(25 fps). We consider the eye gaze position represented in worldcoordinates, the eye gaze direction in radians, the head rotationFigure 2: Extraction and processing of datain radians, and 17 facial action units in intensity from 1 to 51. Weobtain a total of 28 features characterizing the head, gaze, and facialmovements. These features, noted θbehavior∈R28, are used for thetraining and constitute the output of the generation model.OpenSmile is a toolbox that extracts the eGeMAPS audio featuresfrom speech. This tool extracts features at a frequency of 50 fps.To eliminate redundancy between acoustic speech features, weconducted a correlation study, and we finally kept seven spectraland frequency parameters2. For each of them, the first and secondderivatives are computed [ 25,68]. We also add a binary featurethat indicates whether the person is speaking or not (i.e. 1 for“speaking” and 0 for “listening”). In total, we consider 22 audiofeatures. The audio features extracted from the human speech arenotedFspeech∈R22.To ensure that our model learns from clean and plausible data, weneed to remove the frames that Openface has incorrectly processed.For example, frames where faces are obscured by a hand or hair, orwhere excessive head movements are done. Thanks to the visualanalysis of a few behaviors extracted with Openface and directlyreplayed on Greta , we identify outliers and the treatment required:•identification and deletion of outliers frames;•creation of transitions if frames have been deleted;•smoothing of features with a median filter with a windowsize of 7 to eliminate Openface noises;•centering of the head and gaze coordinates so that the virtualagent faces the user;•alignment of acoustic speech and behavioral features at 25fps.Finally, to enhance the model’s understanding of speaking andlistening behaviors and improve behavior synchronization withspeech, we set the coordinates of the head and gaze, and the inten-sity of the AUs at a constant when the protagonist is not speaking.These adjustments highlight the distinction between “speaking”and “listening” behaviors.The most widely used method for the generation of humanbehavior consists in working on short segments over a slidingwindow varying from a few seconds to several minutes dependingon the socio-emotional phenomena studied [ 47]. 
Inspired by thismethod, the videos in the dataset were cut into segments of 4seconds.1AU01, AU02, AU04, AU05, AU06, AU07, AU09, AU10, AU12, AU14, AU15, AU17,AU20, AU23, AU25, AU26, AU45.2alphaRatio, hammarbergIndex, mfcc1, mfcc2, mfcc3, F0semitoneFrom27.5Hz,logRelF0-H1-H2.Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent Conference acronym ’XX, June 03–05, 2018, Woodstock, NY5 FACIAL BEHAVIOR GENERATION MODELFollowing the research conducted during the state of the art, ourproposed model3adopts an adversarial encoder-decoder framework.It generates head movements and gaze as 3D coordinates and facialexpressions as AUs intensities. Unlike certain works [ 35,59], nosmoothing is applied to the output.To minimize the generation of highly improbable behaviors, weemploy a normalization step at the input of our models, coupledwith sigmoid activation layers in the model’s output. The normal-ization scales the input data between 0 and 1 and the sigmoid layerconstrains the generated data to fall within the range of valuesobserved in our training data. As a result, the generated data shouldclosely resemble real data while still allowing the generation ofnovel patterns.As we adopt an adversarial approach, the model work as a gamebetween two networks: a generator and a discriminator. Whilethe discriminator is optimized to recognize whether an input isgenerated by the generator or taken from the real data, the generatortries to fool the discriminator by learning how to generate data thatlooks like real data. Figure 3 illustrates our model.Figure 3: The overall architecture of our model5.1 The generatorThe generator generates data by sampling from a noise distributionZand acoustic speech features Fspeech[0...T]. The noise enables tokeep the randomness of the generated movements. To generate ournoise, we generate two random digits, and use these two values tocreate a noise of size 200, with transition digits that progressivelyfollow between the first and second digits. This allows us to createa gradual evolution from the first digit to the second, ensuring acertain cohesion in the noise generated for each sequence.The generator takes the form of a 1D encoder-decoder. It is anadaptation of the U-Net implementation [ 57] originally created for2D image segmentation. The encoder starts by learning a represen-tation of the acoustic speech features, then concatenates it withthe noise. It consists of five blocks, which we call DoubleConv . TheDoubleConv block is constituted of convolution 1D, batch normal-ization 1D, and Relu, and those twice. The convolutional layershave kernels of size 3 and dropout after each of them. The last 4blocks are followed by MaxPool. Then, three decoders are createdto generate believable behaviors.3https://github.com/aldelb/non_verbal_facial_animation.Each decoder is associated with a data type with different valueintervals: a decoder for head movements, a decoder for eye move-ments, and a decoder for AUs. They consist of four DoubleConvblocks and UpSampling after the first 4 blocks. As the decodersare symmetric with the encoder, it uses skip-connectivity withthe corresponding layers of the encoder. They end with a sigmoidactivation layer. 
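As a minimal sketch of this layout, the PyTorch code below implements the DoubleConv block as described (two 1D convolutions with kernel size 3, each followed by dropout, batch normalization, and ReLU) and a reduced encoder with one sigmoid-terminated decoder head per output modality. Channel widths, the dropout rate, the upsampling mode, and the head/gaze/AU output split are illustrative assumptions rather than the authors' settings, the sketch uses fewer levels than the five encoder blocks of the full model, and the noise input Z is omitted for brevity.

```python
# A non-authoritative sketch of the generator building blocks, assuming
# (batch, channels, time) tensors and an even number of frames T.
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """(Conv1d k=3 -> dropout -> BatchNorm1d -> ReLU) applied twice."""
    def __init__(self, in_ch, out_ch, p_drop=0.1):  # dropout rate assumed
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.Dropout(p_drop),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.Dropout(p_drop),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class GeneratorSketch(nn.Module):
    """Reduced encoder/decoder layout: speech encoding, MaxPool downsampling,
    upsampling with a skip connection, and one decoder per modality."""
    def __init__(self, speech_dim=22, out_dims=(3, 8, 17)):  # split assumed
        super().__init__()
        self.enc1 = DoubleConv(speech_dim, 32)
        self.enc2 = DoubleConv(32, 64)
        self.pool = nn.MaxPool1d(2)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.decoders = nn.ModuleList([
            nn.Sequential(DoubleConv(64 + 32, 32),
                          nn.Conv1d(32, d, kernel_size=1),
                          nn.Sigmoid())
            for d in out_dims
        ])

    def forward(self, speech):                        # speech: (B, 22, T)
        s1 = self.enc1(speech)                        # (B, 32, T)
        s2 = self.enc2(self.pool(s1))                 # (B, 64, T/2)
        feats = torch.cat([self.up(s2), s1], dim=1)   # skip connection
        return [dec(feats) for dec in self.decoders]  # head, gaze, AU streams

# Example: a 4-second segment at 25 fps gives T = 100 frames.
# outputs = GeneratorSketch()(torch.randn(8, 22, 100))
```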
Figure 4 illustrates this architecture.We supervise our generator Gwith the following loss function:LG=Lgaze+Lhead+LAULgaze,Lhead andLAUare the root mean square errors (RMSEs)of the gaze orientation, head movement, and AUs features.Lgaze=T−1∑︁t=0(θgaze[t]−ˆθgaze[t])2Lhead=T−1∑︁t=0(θhead[t]−ˆθhead[t])2LAU=T−1∑︁t=0(θAU[t]−ˆθAU[t])25.2 The discriminatorThe generator receives real examples from real data and fake ex-amples generated by the generator. Both the generator and thediscriminator receive acoustic speech features Fspeech[0...T]. Thediscriminator can thus measure if the behavior looks natural, butabove all if the behavior looks natural with respect to these acousticspeech features, and if the temporal alignment is respected.An important aspect of our architecture is that the discrimina-tor does not only receive real and fake generated examples. Wecreate a new data type, called Newbehavior .Newbehavior are fakeexamples designed to facilitate the learning of synchronization be-tween speech and behaviors. These examples associate acousticspeech features of a “speaking” person with behavior features of a“listening” person (and vice versa).The discriminator starts by learning a representation of theacoustic speech features and a representation of the behavioralfeatures. After concatenating these two representations, there arefourDoubleConv blocks and MaxPool after each block. We adddropout after the convolutional layers. It ends with a linear layerand a sigmoid activation layer. Figure 4 illustrates this architecture.5.3 Training detailsWe choose to implement a Wasserstein GAN [ 2] and, more specifi-cally, a Wasserstein GAN with gradient penalty [ 22]. GANs try toreplicate a probability distribution, this implementation uses a lossfunction that reflects the distance between the distribution of thedata generated and the distribution of the real data.We pose the adversarial loss function with the discriminator D:Ladv(G,D)=EFspeech[D(Fspeech,G(Z,Fspeech)]−EFspeech,θbehavior[D(Fspeech,θbehavior)]+λEˆx∼Pˆx[(||∇ ˆxD(ˆx)||2−1)2]The point ˆx, used to calculate the gradient norm, is any pointsampled between the distributions of the generated data and theConference acronym ’XX, June 03–05, 2018, Woodstock, NY Delbosc, et al.Figure 4: The detailed architecturereal data ˆx=tθbehavior+(1−t)Fbehavior with 0≤t≤1.As the original paper, we use λ=10We use Adam for training, with a learning rate of 10−4for thegenerator and 10−5for the discriminator. Our batch is size 32.Combining the adversarial loss with the direct supervisory loss,our objective is :L=LG+w.Ladv(G,D)With w set to 0.1 to ensure that each term is equally weighted.Based on this architecture, we would like to analyze two impor-tant aspects: the data considered during model training and theaddition of fake examples Newbehavior .6 RESEARCH QUESTIONS AND HYPOTHESESWe want to know which factors influence the model to obtain moreor less human-like behaviors and speech-matched behaviors. Wemake the following assumptions:H1The perception of speech/behavior synchronization will beimproved with the addition of our Newbehavior examplesduring training (section 5.2).H2The addition of Cheese during training, will improve theperception of believability.H3The addition of Cheese during training, will degrade theperception of synchronization.Our intuition behind the last two hypotheses is that the actors’dataset Trueness is more distant from everyday behavior than the“less expressive” dataset Cheese . 
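As a rough sketch of the objective described above, the code below implements a standard WGAN-GP gradient penalty with λ = 10 and a generator step combining the supervised squared-error terms of LG with the adversarial term weighted by w = 0.1, using the stated Adam learning rates. The call signatures of the generator and discriminator, the (batch, features, time) tensor layout, and the sign conventions follow the usual WGAN-GP formulation and are assumptions, not the authors' exact implementation.

```python
# Hedged sketch of the WGAN-GP training losses; network interfaces assumed.
import torch

def gradient_penalty(discriminator, speech, real_behavior, fake_behavior, lam=10.0):
    """Penalize deviation of the critic's gradient norm from 1 at points x_hat
    interpolated between real and generated behavior sequences."""
    b = real_behavior.size(0)
    t = torch.rand(b, 1, 1, device=real_behavior.device)
    x_hat = (t * real_behavior + (1.0 - t) * fake_behavior).requires_grad_(True)
    d_out = discriminator(speech, x_hat)
    grads = torch.autograd.grad(outputs=d_out.sum(), inputs=x_hat,
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()

def discriminator_step(discriminator, speech, real_behavior, fake_behavior):
    """Critic loss plus gradient penalty. The authors additionally feed
    mismatched 'Newbehavior' speech/behavior pairs as fakes; omitted here."""
    l_d = (discriminator(speech, fake_behavior).mean()
           - discriminator(speech, real_behavior).mean())
    return l_d + gradient_penalty(discriminator, speech, real_behavior, fake_behavior)

def generator_step(generator, discriminator, speech, noise, real_behavior, w=0.1):
    """Supervised reconstruction terms (L_G) plus the adversarial term."""
    fake = generator(noise, speech)
    # L_G: squared-error terms summed over time; the per-modality
    # (gaze / head / AU) split of the displayed formulas is glossed here.
    l_g = ((fake - real_behavior) ** 2).sum(dim=(1, 2)).mean()
    l_adv = -discriminator(speech, fake).mean()
    return l_g + w * l_adv

# Optimizers with the stated learning rates (other Adam settings assumed default):
# g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
# d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-5)
```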
On the other hand, the “farther-away shooting conditions” dataset Cheese is less suited to the gen-eration of facial behaviors (section 4.1). Based on our hypotheses,we will compare the following models:m1architecture presented in section 5, trained on Trueness dataset.m2m1model with the association of Trueness andCheese datasetsfor training.m3modelm1without our fake examples Newbehavior , duringmodel training.GTS “Ground Truth Simulated” are the extracted behavior fromthe data, directly simulated on the virtual agent. We use theterm “simulated” because the resulting videos are not exactlya replication of the human’s behavior, due to the limitationofOpenface andGreta (limited number of AUs for example).Videos of each condition can be found on YouTube4.7 EVALUATIONWe evaluate our models through both objective and subjective meth-ods. Objective evaluations are quantitative metrics, while subjectiveevaluation is done through user-perceptive study.Objective metrics are often inappropriate [ 38] and always insuffi-cient when it comes to comparing different architectures in behaviorgeneration. These metrics fail to capture the coherence between be-haviors and speech, as they primarily focus on statistical similarityto recorded motion rather than contextual appropriateness. Subjec-tive evaluations play a crucial role in assessing the complexity ofsocial communication. However, conducting subjective studies canbe time-consuming and complex, which is why objective metricsare employed to complement the evaluation process.Comparing results across different behavior-generation studiesis challenging due to the lack of a standardized baseline in the field.Different works often rely on disparate data sources for training4https://www.youtube.com/playlist?list=PLRyxHB7gYN-Cs127qTMJIR78fsQu_8tZQTowards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent Conference acronym ’XX, June 03–05, 2018, Woodstock, NYtheir models, the features generated differ according to their objec-tives, and the visual representations of the generated gestures alsovary with different avatars and software, thereby influencing theperception of the generated behavior [67].It is therefore important to note that, due to the unique natureof our task, especially the differences in output compared to previ-ous research works, direct performance comparisons with existingmodels are not applicable. For the time being, comparison withbehaviors extracted and replayed directly on virtual agents enablesus to get around some of these issues, we refer to it here as groundtruth orground truth simulated (GTS).7.1 Objective evaluationThe objective measures are based on algorithmic approaches andreturn quantitative values reflecting the performance of the model.We consider comparisons of distributions and measurements ofacceleration and jerk.Dynamic Time Warping: DTW is used to compare the distancebetween ground truth distributions and generated distributions. Wemeasure the distance for each generated feature and present theresults averaged over all features. The distribution closest to theground truth distribution is the one with the lowest value.Table 1: Distance between GTS and generated distributions –Average score (mean) and standard deviation (std).m1 m2 m3mean std mean std mean stdDTW 451.23 11.08 484.57 11.55 460.90 12.11Average acceleration and jerk: The second derivative of theposition is called acceleration, and the third time derivative ofthe position is called jerk. 
It is commonly used to quantify motionsmoothness [ 38]. A natural system should have average accelerationand jerk very similar to the ground truth. We calculate these twometrics for the first eye, the second eye, the head, and present theresults averaged over all of these features.Table 2: Acceleration (Acc.) and jerk – Average score (mean) andstandard deviation (std).GTS m1 m2 m3mean std mean std mean std mean stdAcc. 10.71 0.79 13.49 1.38 9.10 0.42 19.02 1.89Jerk 458.48 47.49 545.66 52.04 358.62 16.59 768.00 58.91These metrics were evaluated for all the videos with Truenesstest set. Tables 1 and 2 show the results, the closest numbers fromthe simulated ground truth are bold.In terms of acceleration and jerk. We note that m2is smootherthanGTS . According to our hypotheses, the perceived believabilityof the smoother model must be the best. For the distance betweenthe generated distributions and the ground truth distribution, them1model is the closest. The perception of the synchronization ofthe model with the closest distribution to the ground truth must besuperior to others.If objective metrics provide valuable insights, they have limi-tations and are not sufficient to assess the complexity of socialcommunication. We need to conduct subjective studies to confirmor refute our hypotheses (section 6).7.2 Subjective evaluationThe subjective measures are based on the evaluation of humanobservers. To select the appropriate evaluation criteria, we baseour subjective evaluation study on previous research [ 38,66]. Weevaluate two criteria through direct questions:o believability: how human-like do the behaviors appear?otemporal coordination: how well does the agent’s behaviormatch the speech? (In terms of rhythm and intonation)We randomly selected four videos from our Trueness test set,two with female voices and two with male voices. This selectionallowed us to demonstrate the flexibility of our models in generatingnon-verbal behaviors for different virtual agents on Greta .Following the recommendation of Wolfert et al . [66] , we optedfor a rating-based evaluation. In this method, participants assignratings to the generated behaviors in all conditions ( GTS,m1,m2,m3). Ratings rather than pairwise comparisons are recommendedwhen more than 3 conditions are under consideration, pairwisecomparisons tend to become unwieldy for 4 or more conditions.Figure 5: Interface of our subjective evaluation toolTo create the study, we developed an interface inspired by theworks of Jonell et al . [29] and Schoeffler et al . [61] . Through severalConference acronym ’XX, June 03–05, 2018, Woodstock, NY Delbosc, et al.videos, we ask participants to rate the behaviors of several virtualagents in terms of believability and coordination. We specified thatwhen we talked about behaviors, we referred to facial expressions,head movements, and gaze.We divided the evaluation of each criterion into separate partswith specific instruction pages. First, the evaluation of believability,in which the videos were silent, they did not include any audio.This allows videos to be rated only with the behaviors performedand not on their relationship to the speech. Secondly, the evaluationof temporal coordination, for which the videos are presented withsound corresponding to the virtual agent’s behaviors.Figure 5 provides an example of one of the pages used in theevaluation process. On each page, at the top and in bold, a questionis displayed, corresponding to the criterion being rated. 
Participantsmust watch 4 videos on the page (corresponding to each condition)and rate each video using the scales. The scale is from 0 (worst) to100 (best) and can be set by adjusting a slider for each video. Onany given page, the videos can be viewed as many times as theylike, but they can’t go back to the previous page.By considering the two evaluated criteria, the four selected se-quences of videos, and the four conditions, we obtained a total of32 videos to rate, each approximately of 30 seconds in duration.The whole evaluation takes about 20 minutes.Thirty persons, with a good level of French, recruited on socialnetworks, participated in our study (16 males and 14 females). Theaverage age of the participants is 28 years, with a standard deviationof 8.06. They viewed each of the videos, in a random order, andrated them on each of the criteria. Table 3 presents the results ofthis subjective evaluation for our three selected models and GTS.Table 3: Results of the perceptive study – Average score (mean)and standard deviation (std) for both coordination (Coo.) and Believ-ability (Bel.) on all 4 conditions.GTS m 1 m2 m3mean std mean std mean std mean stdCoo. 36.53 19.67 43.42 19.04 38.82 18.69 38.77 20.91Bel. 47.60 17.33 45.39 14.81 58.74 15.48 39.02 16.17Statistical analysis is conducted to assess significant differencesbetween the models. First, the normality of the data is assessedusing the Shapiro-Wilk test, which indicates that the data are froma normally distributed population. Therefore, a repeated measuresANOVA is performed.The results reveal the superiority of m1compared to m3in termsof synchronization ( p<.05) and also in terms of believability(p<.01). Our first hypothesis is significantly validated, and theaddition of our fake example during the training of our adversarialmodel improves the perception of speech/behavior synchronization.We can also observe the dominance of m2in terms of believabil-ity compare to m1 ( p<.01) but the superiority of m1in terms ofcoordination ( p<.05). Hypotheses two and three are also signifi-cantly validated. The addition of “less expressive” and “farther-awayshooting conditions” data increases the perception of believability,but reduces the perception of synchronization.Another interesting result is the comparison between m1andGTS . The differences are not significant, but m1tends to outperformGTS in terms of synchronization ( p=.067), an uncommon resultin the field of behavior generation. We hypothesize that setting“listening” behaviors to 0 and adding our fake examples greatlyimproves the perception of synchronization with speech.8 DISCUSSION AND FUTURE WORKWe presented a new approach for the generation of rhythmicallycoherent behavior during the speech of a virtual agent. Our modeldemonstrates perceived performance comparable to behaviors ex-tracted from data and replayed on a virtual agent, in terms of syn-chronization with speech and believability. This approach, basedon an adversarial model, is enriched with fake examples of our owncreation and trained on one or two datasets.We found that adding data during the training, doesn’t neces-sarily increase performance. The expressiveness of people withinthe dataset and shooting conditions are key elements. 
The additionof these data during training generates smoother movements, in-creasing the perceived believability of the generated behaviors butreducing the perception of synchronization with speech.The fake examples provided to the model reduce the distancebetween the distributions of generated data and ground truth data,enhancing the perception of synchronization and believability ofgenerated behaviors.These results should be interpreted cautiously, especially due topotential influences of our non-verbal behavior extraction and vi-sualization tools on participant perception in subjective evaluation.Given the subjective evaluation duration and complexity, we onlytested 4 randomly chosen sequences, unlikely to represent the fulldataset. Moreover, participant numbers might not unveil all notabledifferences between conditions, particularly comparing simulatedground truth and our model.This work is part of a larger project to generate socio-affectivenon-verbal behaviors during social interaction training. Severalperspectives are therefore on the horizon. After the generation ofrhythmically coherent behavior during speech, we aim to generatesemantically and contextually relevant non-verbal behaviors forthe virtual agent during speech. This entails associating specificbehaviors with the semantic content of the agent’s speech. By align-ing non-verbal behaviors with the intended meaning of the agent’sutterances, we will enhance the communicative effectiveness andexpressiveness of the virtual agent.To incorporate the socio-affective dimension and be able to sim-ulate different types of scenarios, we will introduce a constraintin the generation process, focusing on a particular social attitude.This step involves encoding the desired social attitude (aggressive-ness, consilience, or denial), and using it to guide the generation ofnon-verbal behaviors.After that, we will take into account the signals and behaviorsexhibited by the human interlocutor. This will enable the virtualagent to dynamically adjust its non-verbal behavior to match andengage with the interlocutor.Studies on behavior generation opens the way to agents capableof generating expressive behaviors from speech. A great opportu-nity in the field of training, where they can reproduce believablesituations in a safe environment, while ensuring user engagement.Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent Conference acronym ’XX, June 03–05, 2018, Woodstock, NYREFERENCES[1]Glenn Albright, Craig Bryan, Cyrille Adam, Jeremiah McMillan, and KristenShockley. 2018. Using virtual patient simulations to prepare primary health careprofessionals to conduct substance use and mental health screening and briefintervention. Journal of the American Psychiatric Nurses Association 24, 3 (2018),247–259.[2]Martin Arjovsky, Soumith Chintala, and Léon Bottou. 2017. Wasserstein gan.arXiv 2017. arXiv preprint arXiv:1701.07875 30, 4 (2017).[3]Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency. 2016. Openface:an open source facial behavior analysis toolkit. In 2016 IEEE Winter Conferenceon Applications of Computer Vision (WACV) . IEEE, 1–10.[4]Uttaran Bhattacharya, Nicholas Rewkowski, Abhishek Banerjee, Pooja Guhan,Aniket Bera, and Dinesh Manocha. 2021. Text2gestures: A transformer-basednetwork for generating emotive body gestures for virtual agents. In 2021 IEEEvirtual reality and 3D user interfaces (VR) . 
IEEE, 1–10.[5]Dario Bombari, Marianne Schmid Mast, Elena Canadas, and Manuel Bachmann.2015. Studying social interactions through immersive virtual environment tech-nology: virtues, pitfalls, and future challenges. Frontiers in psychology 6 (2015),869.[6]Carlos Busso, Zhigang Deng, Michael Grimm, Ulrich Neumann, and ShrikanthNarayanan. 2007. Rigid head motion in expressive speech animation: Analysisand synthesis. IEEE transactions on audio, speech, and language processing 15, 3(2007), 1075–1086.[7]Chen Cao, Yanlin Weng, Shun Zhou, Yiying Tong, and Kun Zhou. 2013. Faceware-house: A 3d facial expression database for visual computing. IEEE Transactionson Visualization and Computer Graphics 20, 3 (2013), 413–425.[8]Justine Cassell. 2000. Embodied conversational interface agents. Commun. ACM43, 4 (2000), 70–78.[9]Justine Cassell, Catherine Pelachaud, Norman Badler, Mark Steedman, BrettAchorn, Tripp Becket, Brett Douville, Scott Prevost, and Matthew Stone. 1994.Animated conversation: rule-based generation of facial expression, gesture &spoken intonation for multiple conversational agents. In Proceedings of the 21stannual conference on Computer graphics and interactive techniques . 413–420.[10] Chung-Cheng Chiu and Stacy Marsella. 2014. Gesture generation with low-dimensional embeddings. In Proceedings of the 2014 international conference onAutonomous agents and multi-agent systems . 781–788.[11] Alice Delbosc, Magalie Ochs, and Stéphane Ayache. 2022. Automatic facialexpressions, gaze direction and head movements generation of a virtual agent.InCompanion Publication of the 2022 International Conference on MultimodalInteraction . 79–88.[12] Chenpng Du, Qi Chen, Tianyu He, Xu Tan, Xie Chen, Kai Yu, Sheng Zhao,and Jiang Bian. 2023. DAE-Talker: High Fidelity Speech-Driven Talking FaceGeneration with Diffusion Autoencoder. arXiv preprint arXiv:2303.17550 (2023).[13] Yuming Du, Robin Kips, Albert Pumarola, Sebastian Starke, Ali Thabet, andArtsiom Sanakoyeu. 2023. Avatars grow legs: Generating smooth human motionfrom sparse tracking inputs with diffusion model. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition . 481–490.[14] Paul Ekman. 2002. Facial action coding system (FACS). A Human Face, Salt LakeCity (2002).[15] Paul Ekman and Wallace V Friesen. 1978. Facial action coding system. Environ-mental Psychology & Nonverbal Behavior (1978).[16] Florian Eyben, Martin Wöllmer, and Björn Schuller. 2010. Opensmile: the munichversatile and fast open-source audio feature extractor. In Proceedings of the 18thACM international conference on Multimedia . 1459–1462.[17] Mireille Fares, Catherine Pelachaud, and Nicolas Obin. 2023. Zero-shot styletransfer for gesture animation driven by text and speech using adversarial dis-entanglement of multimodal style encoding. Frontiers in Artificial Intelligence 6(2023), 1142997.[18] Ylva Ferstl, Michael Neff, and Rachel McDonnell. 2019. Multi-objective adversarialgesture generation. In Proceedings of the 12th ACM SIGGRAPH Conference onMotion, Interaction and Games . 1–10.[19] Ylva Ferstl, Michael Neff, and Rachel McDonnell. 2021. ExpressGesture: Ex-pressive gesture generation from speech through database matching. ComputerAnimation and Virtual Worlds 32, 3-4 (2021), e2016.[20] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley,Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarialnets. 
Advances in neural information processing systems 27 (2014).[21] David Greenwood, Stephen Laycock, and Iain Matthews. 2017. Predicting headpose from speech with a conditional variational autoencoder. ISCA.[22] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, andAaron C Courville. 2017. Improved training of wasserstein gans. Advancesin neural information processing systems 30 (2017).[23] Ikhsanul Habibie, Mohamed Elgharib, Kripasindhu Sarkar, Ahsan Abdullah,Simbarashe Nyatsanga, Michael Neff, and Christian Theobalt. 2022. A motionmatching-based framework for controllable gesture synthesis from speech. InACM SIGGRAPH 2022 Conference Proceedings . 1–9.[24] Ikhsanul Habibie, Daniel Holden, Jonathan Schwarz, Joe Yearsley, and TakuKomura. 2017. A recurrent variational autoencoder for human motion synthesis.InProceedings of the British Machine Vision Conference (BMVC) .[25] Ikhsanul Habibie, Weipeng Xu, Dushyant Mehta, Lingjie Liu, Hans-Peter Seidel,Gerard Pons-Moll, Mohamed Elgharib, and Christian Theobalt. 2021. Learningspeech-driven 3d conversational gestures from video. In Proceedings of the 21stACM International Conference on Intelligent Virtual Agents . 101–108.[26] Dai Hasegawa, Naoshi Kaneko, Shinichi Shirakawa, Hiroshi Sakuta, and KazuhikoSumi. 2018. Evaluation of speech-to-gesture generation using bi-directional LSTMnetwork. In Proceedings of the 18th International Conference on Intelligent VirtualAgents . 79–86.[27] Kiyoshi Honda. 2000. Interactions between vowel articulation and F0 control. InProceedings of Linguistics and Phonetics: Item Order in Language and Speech (LP98 (2000).[28] Patrik Jonell, Taras Kucherenko, Gustav Eje Henter, and Jonas Beskow. 2020. Let’sFace It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Ges-tures in Dyadic Settings. In Proceedings of the 20th ACM International Conferenceon Intelligent Virtual Agents . 1–8.[29] Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, and Gustav EjeHenter. 2021. HEMVIP: Human evaluation of multiple videos in parallel. InProceedings of the 2021 International Conference on Multimodal Interaction . 707–711.[30] Tero Karras, Timo Aila, Samuli Laine, Antti Herva, and Jaakko Lehtinen. 2017.Audio-driven facial animation by joint end-to-end learning of pose and emotion.ACM Transactions on Graphics (TOG) 36, 4 (2017), 1–12.[31] Adam Kendon. 2004. Gesture: Visible action as utterance . Cambridge UniversityPress.[32] Byung-Hak Kim and Varun Ganapathi. 2019. Lumi \erenet: Lecture video synthe-sis from audio. arXiv preprint arXiv:1907.02253 (2019).[33] Stefan Kopp and Ipke Wachsmuth. 2002. Model-based animation of co-verbalgesture. In Proceedings of Computer Animation 2002 (CA 2002) . IEEE, 252–257.[34] Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, and HedvigKjellström. 2019. Analyzing input and output representations for speech-drivengesture generation. In Proceedings of the 19th ACM International Conference onIntelligent Virtual Agents . 97–104.[35] Taras Kucherenko, Dai Hasegawa, Naoshi Kaneko, Gustav Eje Henter, and HedvigKjellström. 2021. Moving fast and slow: Analysis of representations and post-processing in speech-driven automatic gesture generation. International Journalof Human–Computer Interaction 37, 14 (2021), 1300–1316.[36] Taras Kucherenko, Patrik Jonell, Sanne Van Waveren, Gustav Eje Henter, SimonAlexandersson, Iolanda Leite, and Hedvig Kjellström. 2020. Gesticulator: A frame-work for semantically-aware speech-driven gesture generation. 
In Proceedings ofthe 2020 International Conference on Multimodal Interaction . 242–250.[37] Taras Kucherenko, Rajmund Nagy, Michael Neff, Hedvig Kjellström, and Gus-tav Eje Henter. 2021. Multimodal analysis of the predictability of hand-gestureproperties. arXiv preprint arXiv:2108.05762 (2021).[38] Taras Kucherenko, Pieter Wolfert, Youngwoo Yoon, Carla Viegas, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2023. Evaluating gesture-generationin a large-scale open challenge: The GENEA Challenge 2022. arXiv preprintarXiv:2303.08737 (2023).[39] Sergey Levine, Christian Theobalt, and Vladlen Koltun. 2009. Real-time prosody-driven synthesis of body language. In ACM SIGGRAPH Asia 2009 papers . 1–10.[40] Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, and LinchaoBao. 2021. Audio2Gestures: Generating Diverse Gestures from Speech Audiowith Conditional Variational Autoencoders. In Proceedings of the IEEE/CVF Inter-national Conference on Computer Vision . 11293–11302.[41] Tianye Li, Timo Bolkart, Michael. J. Black, Hao Li, and Javier Romero. 2017.Learning a model of facial shape and expression from 4D scans. ACM Transactionson Graphics, (Proc. SIGGRAPH Asia) 36, 6 (2017), 194:1–194:17. https://doi.org/10.1145/3130800.3130813[42] Soroosh Mariooryad and Carlos Busso. 2012. Generating human-like behaviorsusing joint, speech-driven models for conversational agents. IEEE Transactionson Audio, Speech, and Language Processing 20, 8 (2012), 2329–2340.[43] Stacy Marsella, Yuyu Xu, Margaux Lhommet, Andrew Feng, Stefan Scherer, andAri Shapiro. 2013. Virtual character performance from speech. In Proceedingsof the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation .25–35.[44] David McNeill. 2000. Language and gesture . Vol. 2. Cambridge University PressCambridge.[45] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. 2016. Unrolledgenerative adversarial networks. arXiv preprint arXiv:1611.02163 (2016).[46] Kevin G Munhall, Jeffery A Jones, Daniel E Callan, Takaaki Kuratate, and EricVatikiotis-Bateson. 2004. Visual prosody and speech intelligibility: Head move-ment improves auditory speech perception. Psychological science 15, 2 (2004),133–137.[47] Nora A Murphy and Judith A Hall. 2021. Capturing Behavior in Small Doses:A Review of Comparative Research in Evaluating Thin Slices for BehavioralMeasurement. Frontiers in psychology 12 (2021), 667326.Conference acronym ’XX, June 03–05, 2018, Woodstock, NY Delbosc, et al.[48] Tan Viet Tuyen Nguyen and Oya Celiktutan. 2022. Context-Aware Body Ges-ture Generation for Social Robots. In ICRA 2022 Workshop on Prediction andAnticipation Reasoning for Human-Robot Interaction .[49] Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter,and Michael Neff. 2023. A Comprehensive Review of Data-Driven Co-SpeechGesture Generation. arXiv preprint arXiv:2301.05339 (2023).[50] Magalie Ochs, Daniel Mestre, Grégoire De Montcheuil, Jean-Marie Pergandi,Jorane Saubesty, Evelyne Lombardo, Daniel Francon, and Philippe Blache. 2019.Training doctors’ social skills to break bad news: evaluation of the impact ofvirtual environment displays on the sense of presence. Journal on MultimodalUser Interfaces 13 (2019), 41–51.[51] Magalie Ochs, Jean-Marie Pergandi, Alain Ghio, Carine André, Patrick Sainton,Emmanuel Ayad, Auriane Boudin, and Roxane Bertrand. 2023. A forum theatercorpus for discrimination awareness. Frontiers in Computer Science 5 (2023),1081586.[52] Pascal Paysan, Reinhard Knothe, Brian Amberg, Sami Romdhani, and ThomasVetter. 
2009. A 3D face model for pose and illumination invariant face recognition.In2009 sixth IEEE international conference on advanced video and signal basedsurveillance . Ieee, 296–301.[53] Catherine Pelachaud. 2015. Greta: an interactive expressive embodied conversa-tional agent. In Proceedings of the 2015 International Conference on AutonomousAgents and Multiagent Systems . 5–5.[54] Hai Xuan Pham, Yuting Wang, and Vladimir Pavlovic. 2018. End-to-end learningfor 3d facial animation from speech. In Proceedings of the 20th ACM InternationalConference on Multimodal Interaction . 361–365.[55] Béatrice Priego-Valverde, Brigitte Bigi, and Mary Amoyal. 2022. CHEESE!: Corpus«CHEESE!». TIPA. Travaux interdisciplinaires sur la parole et le langage 38 (2022).[56] Brian Ravenet, Catherine Pelachaud, Chloé Clavel, and Stacy Marsella. 2018.Automating the production of communicative gestures in embodied characters.Frontiers in psychology 9 (2018), 1144.[57] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-net: Convolutionalnetworks for biomedical image segmentation. In International Conference onMedical image computing and computer-assisted intervention . Springer, 234–241.[58] Najmeh Sadoughi and Carlos Busso. 2018. Novel realizations of speech-drivenhead movements with generative adversarial networks. In 2018 IEEE InternationalConference on Acoustics, Speech and Signal Processing (ICASSP) . IEEE, 6169–6173.[59] Najmeh Sadoughi and Carlos Busso. 2019. Speech-driven animation with mean-ingful behaviors. Speech Communication 110 (2019), 90–100.[60] Mehmet E Sargin, Yucel Yemez, Engin Erzin, and Ahmet M Tekalp. 2008. Analysisof head gesture and prosody patterns for prosody-driven head-gesture animation.IEEE Transactions on Pattern Analysis and Machine Intelligence 30, 8 (2008), 1330–1345.[61] Michael Schoeffler, Sarah Bartoschek, Fabian-Robert Stöter, Marlene Roess, Su-sanne Westphal, Bernd Edler, and Jürgen Herre. 2018. webMUSHRA—A com-prehensive framework for web-based listening tests. Journal of Open ResearchSoftware 6, 1 (2018).[62] Kenta Takeuchi, Dai Hasegawa, Shinichi Shirakawa, Naoshi Kaneko, HiroshiSakuta, and Kazuhiko Sumi. 2017. Speech-to-gesture generation: A challengein deep learning approach with bi-directional LSTM. In Proceedings of the 5thInternational Conference on Human Agent Interaction . 365–369.[63] Angela Tinwell, Mark Grimshaw, Debbie Abdel Nabi, and Andrew Williams. 2011.Facial expression of emotion and perception of the Uncanny Valley in virtualcharacters. Computers in Human Behavior 27, 2 (2011), 741–749.[64] Michel François Valstar and Maja Pantic. 2006. Biologically vs. logic inspiredencoding of facial actions and emotions in video. In 2006 IEEE InternationalConference on Multimedia and Expo . IEEE, 325–328.[65] Konstantinos Vougioukas, Stavros Petridis, and Maja Pantic. 2020. Realisticspeech-driven facial animation with gans. International Journal of ComputerVision 128 (2020), 1398–1413.[66] Pieter Wolfert, Jeffrey M Girard, Taras Kucherenko, and Tony Belpaeme. 2021. Torate or not to rate: Investigating evaluation methods for generated co-speech ges-tures. In Proceedings of the 2021 International Conference on Multimodal Interaction .494–502.[67] Pieter Wolfert, Nicole Robinson, and Tony Belpaeme. 2022. A review of evalu-ation practices of gesture generation in embodied conversational agents. IEEETransactions on Human-Machine Systems 52, 3 (2022), 379–389.[68] Bowen Wu, Chaoran Liu, Carlos Toshinori Ishi, and Hiroshi Ishiguro. 
2021.Modeling the conditional distribution of co-speech upper body gesture jointlyusing conditional-GAN and unrolled-GAN. Electronics 10, 3 (2021), 228.[69] Yanzhe Yang, Jimei Yang, and Jessica Hodgins. 2020. Statistics-based MotionSynthesis for Social Conversations. In Computer Graphics Forum , Vol. 39. WileyOnline Library, 201–212.[70] Hani Yehia, Takaaki Kuratate, and Eric Vatikiotis-Bateson. 2000. Facial animationand head motion driven by speech acoustics. In 5th Seminar on Speech Production:Models and Data . Kloster Seeon, Germany, 265–268.[71] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim,and Geehyuk Lee. 2020. Speech gesture generation from the trimodal contextof text, audio, and speaker identity. ACM Transactions on Graphics (TOG) 39, 6(2020), 1–16.[72] Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and GeehyukLee. 2019. Robots learn social skills: End-to-end learning of co-speech gesturegeneration for humanoid robots. In 2019 International Conference on Robotics andAutomation (ICRA) . IEEE, 4303–4309.[73] Fan Zhang, Naye Ji, Fuxing Gao, and Yongping Li. 2023. DiffMotion: Speech-driven gesture synthesis using denoising diffusion model. In International Con-ference on Multimedia Modeling . Springer, 231–242.[74] Chi Zhou, Tengyue Bian, and Kang Chen. 2022. Gesturemaster: Graph-basedspeech-driven gesture generation. In Proceedings of the 2022 International Confer-ence on Multimodal Interaction . 764–770.[75] Hang Zhou, Yu Liu, Ziwei Liu, Ping Luo, and Xiaogang Wang. 2019. Talkingface generation by adversarially disentangled audio-visual representation. InProceedings of the AAAI conference on artificial intelligence , Vol. 33. 9299–9306.[76] Wenlin Zhuang, Jinwei Qi, Peng Zhang, Bang Zhang, and Ping Tan. 2022.Text/speech-driven full-body animation. arXiv preprint arXiv:2205.15573 (2022).[77] Goranka Zoric, Karlo Smid, and Igor S Pandzic. 2009. Towards facial gesturesgeneration by speech signal analysis using huge architecture. In MultimodalSignals: Cognitive and Algorithmic Issues: COST Action 2102 and euCognitionInternational School Vietri sul Mare, Italy, April 21-26, 2008 Revised Selected andInvited Papers . Springer, 112–120. |
oW4rUGjbMYg | The KCL-SAIR team’s entry to the GENEA Challenge 2023Exploring Role-based Gesture Generation in Dyadic Interactions:Listener vs. SpeakerViktor Schmuck, Nguyen Tan Viet Tuyen, Oya CeliktutanCentre for Robotics Research, Department of Engineering, King’s College LondonLondon, UK{viktor.schmuck;tan_viet_tuyen.nguyen;oya.celiktutan}@kcl.ac.ukABSTRACTThis paper presents the KCL-SAIR team’s contribution to the GE-NEA Challenge 2023. As this year’s challenge addressed gesturegeneration in a dyadic context instead of a monadic one, our aimwas to investigate how the previous state-of-the-art approach can beimproved to be more applicable for the generation of both speakerand listener behaviours. The presented solution investigates howtaking into account the conversational role of the target agent dur-ing training and inference time can influence the overall socialappropriateness of the resulting gesture generation system. Oursystem is evaluated qualitatively based on three factors, includinghuman likeness, appropriateness for agent speech, and appropriate-ness for interlocutor speech. Our results show that having separatemodels for listener and speaker behaviours could have potential,especially to generate better listener behaviour. However, the under-lying model structures between the speaker and listener behaviourshould be different, building on previous state-of-the-art monadicand dyadic solutions.CCS CONCEPTS•Human-centered computing →HCI theory, concepts andmodels ;Empirical studies in interaction design ;User studies .KEYWORDSdatasets, Tacotron2, gesture generation, dyadic interactionACM Reference Format:Viktor Schmuck, Nguyen Tan Viet Tuyen, Oya Celiktutan. 2023. The KCL-SAIR team’s entry to the GENEA Challenge 2023 Exploring Role-based Ges-ture Generation in Dyadic Interactions: Listener vs. Speaker. In Proceedingsof 25th ACM International Conference on Multimodal Interaction (ICMI’25).ACM, New York, NY, USA, 6 pages. https://doi.org/XXXXXXX.XXXXXXX1 INTRODUCTIONThe generation of non-verbal behaviours in order to accompanythe speech of both embodied agents and social robots enhancestheir perceived acceptance. Due to its importance, there has been aPermission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than ACMmust be honored. Abstracting with credit is permitted. To copy otherwise, or republish,to post on servers or to redistribute to lists, requires prior specific permission and/or afee. Request permissions from permissions@acm.org.ICMI’25, October 2023, Paris, France©2023 Association for Computing Machinery.ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. . . $15.00https://doi.org/XXXXXXX.XXXXXXXgrowing effort related to this line of research in the past years [1, 9,10, 13, 17, 23, 25]. Agents employing gestures during communicationallows them to add emphasis to the information they convey and toexpress their intentions or emotions. 
It is important to differentiatebetween monadic and dyadic settings when generating behaviours.In a monadic setting the agent exists alone, while in a dyadic one,its behaviour should be related to an interlocutor’s, as it participatesin a dynamic exchange taking turns speaking and listening.Previous work established a state-of-the-art approach for gen-erating gestures in a monadic setting based on an agent’s speechand text information [9]. To extend this method to a dyadic set-ting, the interlocutor’s verbal and non-verbal signals should also betaken into account. However, the listener and speaker behavioursof agents are significantly different [2, 6]; the listener is muchmore passive and occasionally mimics the speaker gestures with de-layed synchrony. Therefore, this problem could benefit from a splittraining approach, where gesture generation in a dyadic context isbroken down into listener and speaker behaviours.Motivated by the importance of gesture generation for both vir-tual and embodied agents and the stark difference between listenerand speaker behaviours in a dyadic context, this paper investigatesthe effect of training and employing multiple gesture generationmodels based on the speaker status of the agent. The qualitativeassessments of our contribution show that compared to the simpledyadic extension of previous state-of-the-art [9], this technique ison par with several model improvement based techniques and theprevious baseline.2 BACKGROUND AND PRIOR WORKIn recent years there has been a growing interest in the researcharea of co-speech gesture generation for virtual [3, 5, 27] and em-bodied [22, 24] agents. The approach to gesture generation can be di-vided into two groups: rule-based [8] and data-driven approach [13,24]. With the rule-based approach, the association of text or speechand gestures are pre-defined by a set of rules [8]. Consequently,this approach can only produce gestures in pre-designed contexts.With the data-driven approach, the relationship between gesturesand text or speech is captured by end-to-end learning frameworks.Several studies used an Encoder-Decoder [5] architectures, Gen-erative Adversarial Networks (GANs) [22, 26, 27] or ConditionalGANs (cGAN) which were designed with Convolutional NeuralNetworks [24].To foster the development of more appropriate gesture genera-tion, GENEA Challenge 2023 [14] provides a dataset and a platformto create and evaluate non-verbal behaviour generation solutions.ICMI’25, October 2023, Paris, France Schmuck, Tuyen, and CeliktutanThe organisers provided a refined and split dataset based on theTalking With Hands 16.2 M [16] data. Moreover, they provided abaseline model [9, 18] which was adapted from the monadic gesturegeneration winner of a previous year’s challenge.In dyadic interaction, an essential aspect of co-speech gestures isthe dynamic exchange of non-verbal signals between two partnersfor adapting to interacting social norms [15] and building a commonground [19]. As a result, the work presented in this paper will shedlight on this important aspect. Specifically, our solution describedbelow builds upon the baseline provided for the challenge [9] andinvestigates the effect of training separate speaker and listenergesture models. This approach is supported by the work of Alibaliet al. [2] and Binder [6] who explored the non-verbal behaviour ofspeakers and listeners in a conversation. Alibali et al. 
[2] state thatthe listener behaviour can be limited to back-channel feedbackssuch as nodding, saying "uh-huh", and occasional head movementindicating that something is not clear. Similarly, Binder [6] foundthat listeners also exhibit behavioural synchrony which plays asignificant role in the positive perception of conversation partners.Based on their research, due to the stark difference between thebehaviour expected from speakers and listeners, we believe thetraining of separate speaker and listener models is a promisingavenue.3 DATA AND DATA PROCESSINGThe solution presented in this paper is using the training and valida-tion sets of the Talking With Hands 16.2 M dataset presented by Leeet al.[16]. Using the same training and validation practices of themonadic motion generation solution proposed by Chang et al.[9],our solution utilises the speaker identity, text, audio, and motioninformation of the main-agent. In addition, the interlocutor’s text,audio, and motion information is also used in order to extend thebaseline to a dyadic setting.Following the preprocessing practices presented by Chang etal. [9], we produce a mel spectogram and MFCC features, as wellas audio prosody features such as audio intensity, pitch, and theirderivatives. To process text data, a FastText word embedding [7]is generated with 300 dimensions. As for the motion input data,we use the joint angle information provided in the dataset andextract information for 25 joints, 19 and 6 for the upper- and lower-body joints respectively. The joint angles were parameterised withexponential map [11]. Finger motion data was not used due to itsreliability in the dataset, and we also use a root position of the body,resulting in 26∗3=78features, 3 dimensions (i.e., 3D orientationinformation) for each joint information. This feature engineeringwas kept consistent with the one described by the state-of-the-artin order to provide a reliable comparison to the baseline methodof Chang et al.[9] and to observe the direct effect of our sampleselection method described below.4 METHODOur method is primarily based on the baseline method proposed byChang et al. [9]. This solution used a Tacotron2 [21] based archi-tecture that was aimed to align speech features with gestures. Thissequence-to-sequence approach was extended to use the interlocu-tor’s motion, audio, text, and speaker identity features as inputs toappropriate it to a dyadic context. Due to the increased input size,the original model’s [9] hyperparameters were individually tunedas described in the challenge description paper [14].Regarding our core contribution, we introduced the trainingof two separate models, constructed based on the baseline modelstructure. When training and validating the models, in one case,when selecting training samples, the speaker identity labelling ofthe agent was used to determine when the agent was speaking. Thisinformation was acquired from the dataset by concatenating thetext input with the speaker_id using the same sample generationpipeline as the IVI baseline did [9]. If the sampling window yieldeda non-zero sum of the resulting feature array, the agent was con-sidered speaking. If both the main agent and the interlocutor werespeaking, we consider the agent as ‘speaking’.Only training samples with speech were used to train a speakingmodel (SM) which was validated on samples where the agent wasspeaking. 
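To make this sample-selection rule concrete before turning to the listener model, the sketch below shows how sampling windows could be routed to the two training sets. It is a minimal illustration rather than the actual pipeline API of the baseline code: the function and field names (e.g. `window_role`, `w["speaker_id"]`) are assumptions.

```python
import numpy as np


def window_role(speaker_id_feats: np.ndarray) -> str:
    """Classify one sampling window of the main agent as 'speaker' or 'listener'.

    `speaker_id_feats` holds the speaker-identity features concatenated with the
    text input for this window (all zeros when the agent is silent). A non-zero
    sum marks the window as speech; windows where both parties talk are
    therefore still treated as 'speaker' windows.
    """
    return "speaker" if np.abs(speaker_id_feats).sum() > 0 else "listener"


def split_training_windows(windows):
    """Route windows to the speaking-model (SM) and listener-model (LM) sets."""
    sm, lm = [], []
    for w in windows:  # each w: dict of per-window features (layout assumed)
        (sm if window_role(w["speaker_id"]) == "speaker" else lm).append(w)
    return sm, lm
```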
Similarly, our second model was trained and validatedsolely on samples where the speaker identity indicated that theagent was listening, resulting in a trained listener model (LM). Inthe dataset, there are some instances when both the agent and theinterlocutor are speaking. These samples were used to train the SM,as the required gestures should still be appropriate for the agentproviding expressions to support its speech.The models were trained on the training set of the Talking WithHands 16.2M dataset [16] with the same hyperparameters as es-tablished in the challenge description paper [14], however, thebatch size was reduced to 32from the original 64due to computa-tional constraints. The full parameter list can be found at [18] inTacotron2/common/hparams_dyadic.py .Our models were trained until convergence, as stated in [18],around 20to30thousand epochs. SM converged after 28kiterations,while LM converged by the 30kmark.The training was performed on a Dell XPS 15, i9-13900H (14cores, up to 5.40GHz Turbo), 32GB RAM, NVIDIA GeForce RTX4070 - 8GB. Training and validation took around 16 and 18 hoursfor the SM and LM respectively.During inference, both models were loaded into the ‘gener-ate_all_gestures.py’ script provided by [18]. Outputs were gener-ated frame by frame, selecting SM or LM depending on the speakeridentity of the agent as described above. The resulting outputs wereconverted to joint angles utilising the built-in functions providedby the evaluation script.A representation of the training and testing of the proposedmodels can be seen in Figure 1. The source code of our solution,adapted from the [18] repository can be found at [20].5 EVALUATIONThe evaluation was performed with the other GENEA Challenge2023 submissions by the organisers as presented in [14]. The pro-vided test set was formatted the same way as the training andvalidation sets of the Talking With Hands 16.2 M dataset [16] withthe exception of the agent not having the motion samples for thissplit. The agent gestures were generated as described above, inSection 4.Due to the lack of ground truth data, features such as AveragePrecision Error (APE), difference in Acceleration, and Jerk were notThe KCL-SAIR team’s entry to the GENEA Challenge 2023 ICMI’25, October 2023, Paris, FranceFigure 1: A representation of how speaker identity -basedsampling was introduced to train and test two models: onetrained on speaker data; and one trained on listener data.measured. Instead, the resulting dyadic gestures were evaluatedwith regard to Human Likeness, Appropriateness for agent speech,and Appropriateness for the interlocutor in a large-scale crowd-sourced subjective evaluation. Human likeness measures whetherthe generated gesture resembles real human gestures. The appropri-ateness of the agent and interlocutor speech evaluations measurewhether the generated gestures look natural with regard to therespective speaker. In the following sections, they are also referredto as monadic anddyadic appropriateness . Notably, appropriatenessscores were measured by pairing the gestures generated for the cor-rect speech segments, but also by pairing and showing mismatchedspeech-gesture stimuli pairs to participants.For further details regarding the evaluation please refer to themain Challenge description paper [14].Human-likeness ratingNA SG SF SJ SL SE SH BD SD BM SI SK SA SB SC020406080100Figure 2: Box plot visualising the ratings distribution in thehuman-likeness study. 
Red bars are the median ratings (eachwith a 0.05confidence interval); yellow diamonds are meanratings (also with a 0.05confidence interval). Box edges are at25and 75percentiles, while whiskers cover 95%of all ratingsfor each condition. Conditions are ordered by descendingsample median rating.6 RESULTS AND DISCUSSIONThis section reports the three aspects of the qualitative evaluationperformed on our solution. In the following sections, the proposedsolutions will be labelled SA-SL , the baseline method’s [9] monadicversion is labelled BM, and the dyadic BD. Finally, the ground truthgestures recorded in the original dataset [16] are labelled NA(i.e.,natural). Our proposed solution is labelled SD.6.1 Human LikenessBased on the responses of 200participants, the median ratings be-tween different conditions were analysed based on Mann-WhitneyU tests, which is an unpaired non-parametric test. After acquiringthe p-values, they were adjusted for multiple comparisons with theHolm-Bonferroni method [12].The rating distribution of the human-likeness test and the sig-nificance of pairwise differences between conditions can be seen inFigure 2 and 3 respectively.Based on the results, only 12 condition pairs out of the overall105 were significantly different at α=0.05. Regarding our solution,its conditions were not different from other generated gesturesin the set of {BD, BM, SD, SH}. However, they were statisticallydifferent from the set of {SE, SJ, SL}. This means that our solutionachieved the same human likeness scores as SH, and the dyadicand monadic baselines. Finally, it was rated better with regard tohuman-likeness compared to SA, SB, SC, SI, and SK.Based on these results, specifically examining human-likeness,we can say that our proposed approach does not hinder performancecompared to the benchmarks. However, using speaker and listenermodels alone is not enough in a dyadic setting, as indicated by thesignificantly better-performing set of {SE, SF, SG, SJ, SL} models, andICMI’25, October 2023, Paris, France Schmuck, Tuyen, and Celiktutan...over condition x, in terms of human-likenessSignificant preference for condition y...NA SG SF SJ SL SE SH BD SD BM SI SK SA SB SCNASGSFSJSLSESHBDSDBMSISKSASBSCFigure 3: Significance of pairwise differences between con-ditions. White means that the condition listed on the y-axisrated significantly above the condition on the x-axis, blackmeans the opposite ( yrated below x), and grey means nostatistically significant difference at the level α=0.05afterHolm-Bonferroni correction [12]. Conditions are listed inthe same order as in Figure 2.the significantly higher mean and median human-likeness scoresof the ground truth.6.2 Appropriateness for agent speechThe appropriateness for agent speech (i.e., monadic appropriate-ness) was evaluated with 600participants who contributed 36rat-ings to this part of the study, with every condition receiving atleast 1766 scores. The scores represent a mean appropriatenessscore (MAS), which is calculated by converting user responses toa5-point scale ranging from −2to2. The MAS are shown in Ta-ble 1(a) and represented in Figure 4(a). Furthermore, similar to thehuman-likeness evaluation, the pairwise comparison of solutionscan be seen in Figure 5(a). To compare the performance of differentsolutions, Welch’s t-test, an unpaired statistical test was used. 
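As an illustration of the pairwise testing procedure, the sketch below applies the Mann-Whitney U test with Holm-Bonferroni correction used for the human-likeness ratings; the data layout and function name are assumptions, and the appropriateness studies substitute Welch's t-test together with the false-discovery-rate correction described next.

```python
from itertools import combinations

from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests


def pairwise_rating_tests(ratings_by_condition, alpha=0.05):
    """Pairwise Mann-Whitney U tests with Holm-Bonferroni correction.

    `ratings_by_condition` maps a condition label (e.g. 'SD', 'BM') to a list
    of human-likeness ratings. Returns the condition pairs that remain
    significantly different after correction.
    """
    pairs = list(combinations(ratings_by_condition, 2))
    pvals = [mannwhitneyu(ratings_by_condition[a],
                          ratings_by_condition[b],
                          alternative="two-sided").pvalue
             for a, b in pairs]
    reject, _, _, _ = multipletests(pvals, alpha=alpha, method="holm")
    return [pair for pair, significant in zip(pairs, reject) if significant]
```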
Tocorrect the test results for multiple comparisons a technique calledthe BH non-adaptive one-stage linear step-up procedure [4] wasused.Based on the results, our solution is statistically different fromchance level performance (see dashed line in Figure 4(a)). The natu-ral (NA) condition was significantly more appropriate comparedto all synthetic condiditons. Regarding the condition of our pro-posed solution (SD), it was significantly more appropriate than thecondition sets of {SC, SL} and {SA, SB, SH}. Moreover, it was notsignificantly different from the condition set of {BD, SE, SI, SK}. Theremaining 7 conditions and NA were found to be significantly moreappropriate than SD. As for preference comparison, SD was signif-icantly more preferred than SC and SL, and it was less preferredthan conditions NA, SG, and SJ.Furthermore, we can infer that our proposed solution can matchother conditions with regard to user preference when it comes tomonadic appropriateness. However, it fails to be distinguished fromBD. This might be due to BD being trained on all available samples,while SD is only trained for dyadic cases on samples where theagent is speaking. It could be that with an equal number of trainingsamples, its performance would show significant improvement.However, it seems approaches focusing on model improvementscan improve monadic appropriateness more reliably.6.3 Appropriateness for interlocutor speechThe appropriateness for interlocutor speech (i.e., dyadic appropri-ateness) was evaluated with 600participants who contributed 36ratings to this part of the study, with every condition receiving atleast 993scores. Just as in the case of the monadic appropriatenessevaluation, the scores are mean appropriateness scores and arecalculated as described in Section 6.2. The mean appropriatenessscores are shown in Table 1(b) and represented in Figure 4(b). Thepairwise comparison of solutions can be seen in Figure 5(b). Thecomparative analysis and correction for multiple condition com-parisons were performed the same way as presented in Section 6.2.The results show that our condition (SD), with 7 other conditions,{SE, SF, SI, BM, SJ, SC, SK}, is not significantly different from achance level performance (see dashed line in Figure 4(b)). As forthe pairwise comparison, once again NA was significantly moreappropriate than other conditions. Consequently, while our solutionwas significantly less appropriate than NA, it was significantly moreappropriate than condition SH and on par with all other conditions.It can be observed that regarding dyadic appropriateness, nu-merous conditions failed to be significantly different from a chancelevel score and, when compared to each other, they performedwithout significant difference. Regarding our solution, this meansthat despite addressing the problem in two predicting models, thegenerated listener behaviour was not improved compared to otherapproaches.7 CONCLUSIONS AND TAKEAWAYSThis work presented an approach targeting a dyadic gesture gener-ation problem utilising a Tacotron2-based solution. Based on thedifferent behaviours an agent is expected to exhibit while speak-ing contrary to when it is listening, we investigated the effect oftraining separate models for solving this task. Our solution usedthe dyadic version of the model proposed by Chang et al. [9] andwas trained on the speaking and listening samples of the TalkingWith Hands 16.2 M dataset [16]. 
Based on the GENEA Challenge2023 [14] evaluation metrics, it did not perform significantly dif-ferently from the dyadic baseline and a few other conditions withregard to human-likeness, and monadic and dyadic behaviour ap-propriateness.We see as a possible improvement the individual tuning of hy-perparameters of the dyadic baseline model for the two distinctmodels we wish to produce. We believe that revising the input fea-tures of the two models would also be worthwhile. We base this onthe observation that the monadic baseline (notably not using inter-locutor features) performed better in the monadic appropriateness,and similarly, the dyadic baseline (using all features) performedbetter in the dyadic appropriateness evaluations. Perhaps if ourproposed models would reflect these changes in features, or refinethe current model structures based on the validation set, it couldThe KCL-SAIR team’s entry to the GENEA Challenge 2023 ICMI’25, October 2023, Paris, France(a)Monadic appropriatenessCondi-MASPref. Raw response counttion matched 2 1 0−1−2 SumNA 0.81±0.06 73.6% 755 452 185 217 157 1766SG 0.39±0.07 61.8% 531 486 201 330 259 1807SJ 0.27±0.06 58.4% 338 521 391 401 155 1806BM 0.20±0.05 56.6% 269 559 390 451 139 1808SF 0.20±0.06 55.8% 397 483 261 421 249 1811SK 0.18±0.06 55.6% 370 491 283 406 252 1802SI 0.16±0.06 55.5% 283 547 342 428 202 1802SE 0.16±0.05 54.9% 221 525 489 453 117 1805BD 0.14±0.06 54.8% 310 505 357 422 220 1814SD 0.14±0.06 55.0% 252 561 350 459 175 1797SB 0.13±0.06 55.0% 320 508 339 386 262 1815SA 0.11±0.06 53.6% 238 495 438 444 162 1777SH 0.09±0.07 52.9% 384 438 258 393 325 1798SL 0.05±0.05 51.7% 200 522 432 491 170 1815SC−0.02±0.04 49.1% 72 284 1057 314 76 1803(b)Dyadic appropriatenessCondi-MASPref. Raw response counttion matched 2 1 0−1−2 SumNA 0.63±0.08 67.9% 367 272 98 189 88 1014SA 0.09±0.06 53.5% 77 243 444 194 55 1013BD 0.07±0.06 53.0% 74 274 374 229 59 1010SB 0.07±0.08 51.8% 156 262 206 263 119 1006SL 0.07±0.06 53.4% 52 267 439 204 47 1009SE 0.05±0.07 51.8% 89 305 263 284 73 1014SF 0.04±0.06 50.9% 94 208 419 208 76 1005SI 0.04±0.08 50.9% 147 269 193 269 129 1007SD 0.02±0.07 52.2% 85 307 278 241 106 1017BM−0.01±0.06 49.9% 55 212 470 206 63 1006SJ−0.03±0.05 49.1% 31 157 617 168 39 1012SC−0.03±0.05 49.1% 34 183 541 190 45 993SK−0.06±0.09 47.4% 200 227 111 276 205 1019SG−0.09±0.08 46.7% 140 252 163 293 167 1015SH−0.21±0.07 44.0% 55 237 308 270 144 1014Table 1: Summary statistics of user-study responses from both appropriateness studies (a - monadic; b - dyadic), with confidenceintervals for the mean appropriateness score (MAS) at the level α=0.05; “Pref. matched” identifies how often test-takerspreferred matched motion in terms of appropriateness after splitting ties. Conditions are ordered by MAS.(a)NA SG SJBM SFSK SISEBD SD SBSASH SLSC0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatched (b)NA SABD SB SLSESF SISDBM SJSCSKSGSH0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatchedFigure 4: Bar plots visualising the response distribution in the appropriateness studies (a - monadic; b - dyadic). 
The blue bar(bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied (“Theyare equal”) responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each barbeing proportional to the fraction of responses in each category. Lighter colours correspond to slight preference, and darkercolours to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit thecurrent axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score.achieve better results. Finally, we hypothesize that a conditionalGAN-based (cGAN) model could improve our models’ performance.Consequently, we will benchmark its performance on this dataset,and perform ablations for the splitting on the speaker and listenermodels. This line of thought forms the basis of our planned futurework in relation to the GENEA Challenge and its dataset.ACKNOWLEDGMENTThis work was supported by the European Union project SERMASand EPSRC project LISI (EP/V010875/1).REFERENCES[1] Hyemin Ahn, Timothy Ha, Yunho Choi, Hwiyeon Yoo, and Songhwai Oh.2018. Text2action: generative adversarial synthesis from language to action. In2018 IEEE International Conference on Robotics and Automation (ICRA) . IEEE,5915–5920.[2] Martha W Alibali, Dana C Heath, and Heather J Myers. 2001. Effects of visibilitybetween speaker and listener on gesture production: some gestures are meantto be seen. Journal of Memory and Language , 44, 2, 169–188.[3] Tenglong Ao, Qingzhe Gao, Yuke Lou, Baoquan Chen, and Libin Liu. 2022.Rhythmic gesticulator: rhythm-aware co-speech gesture synthesis with hi-erarchical neural embeddings. ACM Transactions on Graphics (TOG) , 41, 6,1–19.[4] Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate:a practical and powerful approach to multiple testing. Journal of the Royalstatistical society: series B (Methodological) , 57, 1, 289–300.ICMI’25, October 2023, Paris, France Schmuck, Tuyen, and Celiktutan(a)NA SG SJBM SFSK SISEBD SDSBSASH SLSC...over condition x, in terms of appropriateness to speechNASGSJBMSFSKSISEBDSDSBSASHSLSCSignificant preference for condition y... (b)NA SABD SBSLSESF SISDBM SJSCSKSGSH...over condition x, in terms of appropriateness to interlocutorNASABDSBSLSESFSISDBMSJSCSKSGSHSignificant preference for condition y...Figure 5: Significant differences between conditions in the two appropriateness studies. White means the condition listed onthey-axis achieved an MAS significantly above the condition on the x-axis, black means the opposite ( yscored below x), andgrey means no statistically significant difference at level α=0.05after correction for the false discovery rate. Conditions usethe same order as the corresponding subfigures in Figure 4.[5] Uttaran Bhattacharya, Elizabeth Childs, Nicholas Rewkowski, and DineshManocha. 2021. Speech2affectivegestures: synthesizing co-speech gestureswith generative adversarial affective expression learning. In Proceedings of the29th ACM International Conference on Multimedia , 2027–2036.[6] Jens F Binder. 2023. Establishing conversational engagement and being effec-tive: the role of body movement in mediated communication. Acta Psychologica ,233, 103840.[7] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017.Enriching word vectors with subword information. 
Transactions of the associa-tion for computational linguistics , 5, 135–146.[8] Justine Cassell, Hannes Högni Vilhjálmsson, and Timothy Bickmore. 2001. Beat:the behavior expression animation toolkit. In Proceedings of the 28th annualconference on Computer graphics and interactive techniques , 477–486.[9] Che-Jui Chang, Sen Zhang, and Mubbasir Kapadia. 2022. The ivi lab entry tothe genea challenge 2022–a tacotron2 based method for co-speech gesturegeneration with locality-constraint attention mechanism. In Proceedings of the2022 International Conference on Multimodal Interaction , 784–789.[10] Will Feng, Anitha Kannan, Georgia Gkioxari, and C Lawrence Zitnick. 2017.Learn2smile: learning non-verbal interaction through observation. In 2017IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) .IEEE, 4131–4138.[11] F Sebastian Grassia. 1998. Practical parameterization of rotations using theexponential map. Journal of graphics tools , 3, 3, 29–48.[12] Sture Holm. 1979. A simple sequentially rejective multiple test procedure.Scandinavian journal of statistics , 65–70.[13] Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, and Hed-vig Kjellström. 2019. Analyzing input and output representations for speech-driven gesture generation. In Proceedings of the 19th ACM International Confer-ence on Intelligent Virtual Agents , 97–104.[14] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, TeodorNikolov, Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge2023: A large-scale evaluation of gesture generation models in monadic anddyadic settings. In Proceedings of the ACM International Conference on Multi-modal Interaction (ICMI ’23). ACM.[15] Jessica L Lakin, Valerie E Jefferis, Clara Michelle Cheng, and Tanya L Char-trand. 2003. The chameleon effect as social glue: evidence for the evolutionarysignificance of nonconscious mimicry. Journal of nonverbal behavior , 27, 145–162.[16] Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha S Srini-vasa, and Yaser Sheikh. 2019. Talking with hands 16.2 m: a large-scale datasetof synchronized body-finger motion and audio for conversational motion anal-ysis and synthesis. In Proceedings of the IEEE/CVF International Conference onComputer Vision , 763–772.[17] Yu Liu, Gelareh Mohammadi, Yang Song, and Wafa Johal. 2021. Speech-basedgesture generation for robots and embodied agents: a scoping review. In Pro-ceedings of the 9th International Conference on Human-Agent Interaction , 31–38.[18] [SW] Rajmund Nagy, Thanh Hoang-Minh, and Youngwoo Yoon, GENEA 2023baselines 2023. url: https://github.com/genea-workshop/2023_ivi_baseline/tree/fe7827bc95f8a1123f26c25c0fe0173ae8d8ee51, vcs: https://github.com/genea-workshop/2023_ivi_baseline/tree/fe7827bc95f8a1123f26c25c0fe0173ae8d8ee51.[19] Lior Noy, Erez Dekel, and Uri Alon. 2011. The mirror game as a paradigm forstudying the dynamics of two people improvising motion together. Proceedingsof the National Academy of Sciences , 108, 52, 20947–20952.[20] [SW] Viktor Schmuck and Nguyen Tan Viet Tuyen, GENEA 2023 KCL-SAIRsubmission 2023. url: https://github.com/d4rkspir1t/2023_GENEA_Challenge_KCL-SAIR, vcs: https://github.com/d4rkspir1t/2023_GENEA_Challenge_KCL-SAIR.[21] Jonathan Shen et al. 2018. Natural tts synthesis by conditioning wavenet onmel spectrogram predictions. In 2018 IEEE international conference on acoustics,speech and signal processing (ICASSP) . IEEE, 4779–4783.[22] Nguyen Tan Viet Tuyen and Oya Celiktutan. 2022. 
Agree or disagree? Generating body gestures from affective contextual cues during dyadic interactions. In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 1542–1547.
[23] Nguyen Tan Viet Tuyen and Oya Celiktutan. 2022. Context-aware human behaviour forecasting in dyadic interactions. In Understanding Social Behavior in Dyadic and Small Group Interactions. PMLR, 88–106.
[24] Nguyen Tan Viet Tuyen, Armagan Elibol, and Nak Young Chong. 2020. Conditional generative adversarial network for generating communicative robot gestures. In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 201–207.
[25] Pieter Wolfert, Nicole Robinson, and Tony Belpaeme. 2022. A review of evaluation practices of gesture generation in embodied conversational agents. IEEE Transactions on Human-Machine Systems, 52, 3, 379–389.
[26] Bowen Wu, Chaoran Liu, Carlos Toshinori Ishi, and Hiroshi Ishiguro. 2021. Modeling the conditional distribution of co-speech upper body gesture jointly using conditional-GAN and unrolled-GAN. Electronics, 10, 3, 228.
[27] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and Geehyuk Lee. 2020. Speech gesture generation from the trimodal context of text, audio, and speaker identity. ACM Transactions on Graphics (TOG), 39, 6, 1–16.
Received 14 July 2023; revised 11 August 2023; accepted 09 August 2023 |
CMivR3x5fpC | Gesture Motion Graphs for Few-Shot Speech-Driven GestureReenactmentZeyu Zhaozhaozeyu2019@ia.ac.cnInstitute of Automation, ChineseAcademy of SciencesUniversity of Chinese Academy ofSciencesBeijing, ChinaNan Gaogao.nan@ia.ac.cnInstitute of Automation, ChineseAcademy of SciencesBeijing, ChinaZhi Zeng∗zhi.zeng@bupt.edu.cnBeijing University of Posts andTelecommunicationsBeijing, ChinaGuixuan Zhangguixuan.zhang@ia.ac.cnInstitute of Automation, ChineseAcademy of SciencesBeijing, ChinaJie Liujie.liu@ia.ac.cnInstitute of Automation, ChineseAcademy of SciencesBeijing, ChinaShuwu Zhangshuwu.zhang@bupt.edu.cnBeijing University of Posts andTelecommunicationsBeijing, ChinaFigure 1: Given a group of short reference speech gesture sequence, audio, and text, a gesture motion graph is constructedand ready to be searched when a group of test speech gesture audio and text is provided, for rhythmic and semantic gesturereenactment.ABSTRACTThis paper presents the CASIA-GO entry to the Generation andEvaluation of Non-verbal Behaviour for Embedded Agents (GE-NEA) Challenge 2023. The system is originally designed for few-shot scenarios such as generating gestures with the style of any in-the-wild target speaker from short speech samples. Given a groupof reference speech data including gesture sequences, audio, andtext, it first constructs a gesture motion graph that describes thesoft gesture units and interframe continuity inside the speech, which∗Corresponding author.Permission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full cita-tion on the first page. Copyrights for components of this work owned by others thanthe author(s) must be honored. Abstracting with credit is permitted. To copy other-wise, or republish, to post on servers or to redistribute to lists, requires prior specificpermission and/or a fee. Request permissions from permissions@acm.org.ICMI ’23, October 9–13, 2023, Paris, France© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.ACM ISBN 979-8-4007-0055-2/23/10...$15.00https://doi.org/10.1145/3577190.3616118is ready to be used for new rhythmic and semantic gesture reen-actment by pathfinding when test audio and text are provided. Werandomly choose one clip from the training data for one test clipto simulate a few-shot scenario and provide compatible results forsubjective evaluations. Despite the 0.25% average utilization of thewhole training set for each clip in the test set and the 17.5% to-tal utilization of the training set for the whole test set, the systemsucceeds in providing valid results and ranks in the top 1/3 in theappropriateness for agent speech evaluation.CCS CONCEPTS•Human-centered computing →Human computer interac-tion (HCI) ; •Computing methodologies →Animation.KEYWORDSspeech-driven gesture generation, motion graph, few-shotICMI ’23, October 9–13, 2023, Paris, France Zhao et al.ACM Reference Format:Zeyu Zhao, Nan Gao, Zhi Zeng, Guixuan Zhang, Jie Liu, and Shuwu Zhang.2023. Gesture Motion Graphs for Few-Shot Speech-Driven Gesture Reen-actment. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERAC-TION (ICMI ’23), October 9–13, 2023, Paris, France. ACM, New York, NY,USA, 7pages. 
https://doi.org/10.1145/3577190.36161181 INTRODUCTIONGenerating co-speech gestures that convey rich non-verbal infor-mation remains challenging due to the indeterministic nature ofthe task. The one-to-many mapping between the modalities, alongwith other difficulties such as the lack of high-quality large-scaledatasets and standardized evaluating protocols, makes it difficultto design and evaluate models for speech-driven gesture genera-tion. In recent years, data-driven methods have attracted the in-terest of many researchers in the field. However, most of thesemethods require training on large-scale datasets. How to producegestures in common scenarios where training data are insufficient,such as reenacting gestures with new styles naturally encoded invery few recorded gesture samples of an in-the-wild target humanperformer, is rarely discussed.In this paper, we try to address this problem by designing a sys-tem that can explicitly locate key positions of rhythmic and se-mantic events in the sequences to form basic units of gestures anddescribe the continuity relationships inside. Part of that is com-ing from the commonly agreed observation [ 1,23] that while mostco-speech gestures are in synchronization with the rhythm of thevoice, some gestures are more relevant to the actual meaning ofthe words or sentences. The other part is that it should be able toproduce new gesture units that break the natural continuity rela-tionships between units for good diversity performance. Inspiredby [23], we find that motion graphs and related searching algo-rithms are most suitable for this task. With the gesture sequence,audio, and text of a reference speech and the audio and text of anytest speech, the main idea is to construct a motion graph that de-scribes the soft gesture units and continuity relationships insidethe reference speech and search the graph for new paths of ges-ture frames given the test speech, as shown in Figure 1. Numerousmodifications and improvements such as new pruning strategies,feature-based initialization, and fallback measures, can be made tothe framework to enable compatibility with pure gesture data in-stead of video frames. These are proved to be the key factors forthe feasibility, performance, and robustness of the system.To gain better knowledge of how well the results produced bythe system can be, we participate in this year’s GENEA Challengeto evaluate our results reenacted from few-shot data and comparethose with results from other systems that utilize large-scale data.To do this, we simulate a few-shot scenario by randomly choosingone clip in the provided training set as the whole reference speechfor each clip in the test set, regardless of any speaker identity. Foreach test speech, the system only utilizes 0.25% of the whole train-ing set on average. In such a way, the system utilizes 17.5% of thewhole training set for the whole test set. Despite the low utiliza-tion of the training data, The system succeeds in producing high-quality gestures for the test set and achieves good performance inthe challenge.2 RELATED WORKSLarge-scale data-driven methods are becoming exceedingly popu-lar in recent years for speech-driven data generation tasks [ 15], tak-ing over rule-based methods [ 14] or probabilistic modeling meth-ods [ 10]. Basic deep learning models show great capabilities of en-coding input data and generating new gestures [ 3,20]. 
New ar-chitectural designs that fit the specific properties of the task suchas skeleton hierarchies or gesture categories are proposed to im-prove the performance of gesture generation [ 1,13]. New gener-ative models can also be utilized as backbones of the generationnetworks [ 19,24].The mixed usage of matching-based and learning-based meth-ods can also be seen in numerous works to bypass limitations ofdeep learning models [ 4,18]. Motion graphs are proposed to gen-erate controllable animation from pre-recorded motion [ 5] and arecommonly used in gesture-related tasks such as retrieval and cre-ation [ 6,16]. For speech-driven data generation, they can be uti-lized by defining each graph node as the feature of a sequence ofgestures [ 22], or defining each node as a video frame [ 23]. Inspiredby these works, we find motion graphs are suitable for our task fortheir inter-frame relationship description capabilities, regardlessof the presence of learning-based modules. Thus, we design motiongraphs for reenacting gestures from few-shot reference gesture se-quences instead of large-scale data or video frames.3 DATA PROCESSINGThe dataset provided by the challenge organizers this year [ 7] isderived from the Talking With Hands data [ 9]. Gesture sequences,audio, text, and speaker labels of both the main agent and the in-terlocutor are included in the dataset, making it a dyadic datasetcompared to the monadic dataset last year. As mentioned above,our system does not utilize all training data provided. Instead, weuse the training set to simulate a few-shot scenario where only asmall amount of data is available as reference speech. For the testset, only the audio and text data of the main agent in the test clipsare utilized by the system. For each clip, only one clip in the train-ing set is randomly chosen as the reference speech, of which onlythe gesture, audio, and text data of the main agent are utilized bythe system. Other data including anything relevant to the inter-locutor, the speaker labels, and the validation set are ignored bythe system.The data are preprocessed using the utilities provided by [ 2], in-cluding converting between Euler angle and exponential map ro-tation representation, selecting the 25 joints on upper and lowerbody excluding the fingers, and aligning the text to gesture frames.Since the system can work with gestures with any skeleton defi-nition, the skeletons used inside the system are in both exponen-tial map rotation representation and position representation. Thewords in the text are pre-converted to integer indices. Due to thepoor quality of the hand tracking and some significant flickeringon the body, we have to add 19 clips in the training set to the ran-dom selection blacklist, lock the yaw and pitch rotation of the 4wrist-related joints, and apply the Savitzky-Golay filter with a win-dows length of 15 and polynomial order of 3 on the roll rotation ofthe 4 wrist-related joints.Gesture Motion Graphs for Few-Shot Speech-Driven Gesture Reenactment ICMI ’23, October 9–13, 2023, Paris, France4 METHODThe gesture motion graph is a graph structure that can be usedto represent the continuity relationships between frames in a ges-ture sequence regardless of the length or the skeleton definitionof the sequence, as shown in Figure 2. Following [ 23], each nodein the graph represents a frame in the gesture sequence, and eachdirected edge between two nodes indicates the distance betweenthe two frames is small enough for the transition to be consideredcontinuous. 
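As a rough illustration of this data structure, the sketch below builds the bare node-and-edge skeleton with `networkx`; the helper name and node attributes are assumptions, and the zero-weight edges between naturally consecutive frames anticipate the continuity analysis described below.

```python
import networkx as nx


def build_graph_skeleton(poses):
    """Bare gesture motion graph: one node per pose frame of the reference clip.

    `poses` is any per-frame representation, e.g. a (T, J, 3) array of joint
    positions. Key nodes (onset/keyword/break) and edges for unnaturally
    continuous transitions are added by the detection and continuity steps.
    """
    g = nx.DiGraph()
    for t in range(len(poses)):
        g.add_node(t, kind="regular")     # relabelled later as onset/keyword/break
    for t in range(len(poses) - 1):
        g.add_edge(t, t + 1, weight=0.0)  # naturally continuous neighbours
    return g
```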
Given a reference gesture sequence and its correspond-ing speech audio and text, we can construct its gesture motiongraph by detecting key nodes that non-uniquely split the gesturesequence into subsequences of soft gesture units and analyzing thecontinuity relationships between frames to find edges for unnatu-rally continuous frames. When we need to reenact a new test ges-ture sequence from its speech audio and text, we can split the testsequence into subsequences using the positions of the same kindsof key frames detected in the test speech and use a pathfinding al-gorithm to find the optimal paths of nodes in the graph correspond-ing to every test subsequence. Then a new gesture sequence that isrhythmically matched to the input speech audio and semanticallyrelevant to the input text can be reenacted by concatenating andblending the gesture frames along the paths. Due to random oper-ations in some fallback measures, the system may produce slightlydifferent results at some parts for the same input.Figure 2: A sample gesture motion graph with zoomed viewsof examples of a) a regular node, b) an onset node, c) a key-word node, d) a break node, e) a natural edge, and f) an un-natural edge.4.1 Graph Construction4.1.1 Key node detection. After adding all frames in the gesturesequence as regular nodes into the graph, we first perform onsetdetection on the reference speech audio to find onset nodes in thegesture motion graph. The onsets are located at the backtrackedpeaks of the audio’s spectral flux viewed as the onset strength [ 12],aligned to the gesture frames. Filtering on the onset strength cancontrol the number of output onsets, which further controls thelength of soft gesture units used for reenactment. Then we per-form the keyword detection on the reference speech text to markkeyword nodes in the gesture motion graph. With the input textaligned to the frames, each word is checked to see if it belongs to alist of keywords (see [ 23]). If a subsequence of one or more repeat-ing keywords is found in the text, the node corresponding to thefirst frame of this subsequence is then marked as a keyword nodewith that keyword. Also, there might be interruptions inside thespeech when e.g. the speech is a composition of multiple discon-tinuous segments. Any frame that is not continuous with the nextframe is marked as a break node .4.1.2 Continuity analysis. We first directly add directed edges tothe graph with zero weights for the frames that are naturally con-tinuous. Then we traverse every pair of different non-continuousframes as “left” and “right” frames pl,prand calculate their dis-tance. Here, the distance between two gesture frames, or poses, isdefined to be the weighted sum of the Euclidean distance of thejoint positions and the Euclidean distance of the joint velocities:dpose(pl,pr)=λpos∥pl−pr∥2+λvel∥vl−vr∥2,where the velocities vl,vrcan be calculated by differentiating thecurrent and previous frames, and λpos, λvelare the weights of thetwo terms. For every left frame, a dynamic threshold for continuityis defined to be the mean distance between the left frame and itsfollowing (up to) lcnframes. This threshold is used to filter out theright frames with distances that are too large to be considered con-tinuous frames. After filtering, every remaining right frame adds acandidate directed edge to a list (not to the motion graph) withits pose distance to the left frame as the weight. 
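A minimal sketch of this distance and of the dynamic-threshold candidate test is given below; variable and function names are assumptions, and the pruning described next is omitted.

```python
import numpy as np


def pose_distance(p_l, v_l, p_r, v_r, lam_pos=1.0, lam_vel=1.0):
    """Weighted sum of position and velocity distances between two pose frames."""
    return lam_pos * np.linalg.norm(p_l - p_r) + lam_vel * np.linalg.norm(v_l - v_r)


def candidate_edges_from(left, poses, vels, l_cn=5, lam_pos=1.0, lam_vel=1.0):
    """Candidate transitions out of frame `left` under the dynamic threshold.

    `poses` and `vels` are (T, D) arrays of flattened joint positions and
    frame-difference velocities. The threshold is the mean distance from `left`
    to its following (up to) l_cn frames; any non-consecutive frame closer than
    that becomes a candidate edge weighted by its pose distance.
    """
    T = len(poses)
    d = np.array([pose_distance(poses[left], vels[left], poses[r], vels[r],
                                lam_pos, lam_vel) for r in range(T)])
    lookahead = d[left + 1:left + 1 + l_cn]
    if lookahead.size == 0:
        return []
    threshold = lookahead.mean()
    return [(left, r, float(d[r])) for r in range(T)
            if r not in (left, left + 1) and d[r] < threshold]
```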
However, this criterion of continuity can produce a large number of neighbored right frames for a left frame and frequently generates short loops in the graph. Thus, we perform two pruning operations to reduce the number of candidate edges. For each left frame, the first strategy is, for a continuous sequence of up to lpn right frames in the candidate list, we only reserve the first one and remove the others. The second strategy is, for the remaining right frames, one is removed if another edge, that starts in the lpn-neighbor of one frame and ends in the lpn-neighbor of the other frame, already exists in the graph. After the pruning, we add all candidate edges to the graph and move on to the next left frame.

4.2 Pathfinding

4.2.1 Beam search. The core of the path-finding algorithm is a parallelized greedy breadth-first search algorithm known as the beam search [8] for each test subsequence. Given the target path length lsub, the termination criteria for paths, and lnpaths initial starting nodes, the beam search algorithm outputs lnpaths paths with top-lnpaths minimum costs that have different lengths. These lnpaths paths are initially one-node paths with only the given starting nodes. As shown in Figure 3, at each iteration, we initialize an empty watch list and check if the lnpaths paths are already terminated. All terminated paths are directly added to the watch list, and all unterminated paths are expanded by appending the children of the last node. If the last node of a path has multiple children, it should be split into multiple paths each with a child appended, which are then all added to the watch list as well. Then, we calculate the costs of all watched paths and select those with top-lnpaths minimum costs, which are then set to be the new lnpaths paths. Here, the cost of a path P is defined as the sum of the weights of the edges along the path, penalized by the difference between the lengths of this path lpath and the test subsequence lsub:

$$c_{\mathrm{path}}(P) = \lambda_w \left( \sum_{i=p_1}^{p_{l_{\mathrm{path}}-1}} w_{i,i+1} \right) + \lambda_{\mathrm{len}} \left| 1 - \frac{l_{\mathrm{path}}}{l_{\mathrm{sub}}} \right|,$$

where wi,j is the weight of the edge (pi, pj), and λw, λlen are the weights of the two terms. The algorithm repeats these steps and breaks when the maximum length of searching is reached or all lnpaths paths are accepted (see appendix). Finally, the accepted path with the lowest cost is chosen for the current test subsequence.

Figure 3: An example of two iterations of the beam search process. Each iteration expands all children nodes of the last nodes of the presented paths. The expanded paths are then sorted and selected according to their costs. Terminated paths are in green.

4.2.2 Conditional termination. For each test subsequence, we set the termination criteria independently based on various considerations. Normally, if the test subsequence ends at a keyword frame, the paths should terminate at any keyword node in the graph with the exact same keyword to produce semantic gestures. Otherwise, the paths should terminate at any onset or break node in the graph to produce rhythmic gestures. If no accepted path is found after the beam search is forcibly stopped, we should re-initialize the starting nodes and retry searching. Fallback measures (see appendix) can also be designed to guarantee that the beam search can stop with at least one accepted path in most cases.
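A condensed sketch of this search loop is given below; it omits the distinct-length requirement, retries, and the fallback measures, and helper names such as `is_terminal` are assumptions layered on the graph skeleton sketched earlier.

```python
def path_cost(g, path, l_sub, lam_w=1.0, lam_len=1.0):
    """Sum of edge weights along the path plus a penalty on the length mismatch."""
    w = sum(g.edges[u, v]["weight"] for u, v in zip(path, path[1:]))
    return lam_w * w + lam_len * abs(1.0 - len(path) / l_sub)


def beam_search(g, start_nodes, l_sub, is_terminal, n_paths=20, max_iters=200):
    """Greedy breadth-first (beam) search over the gesture motion graph.

    `is_terminal(node)` encodes the termination criteria of Sec. 4.2.2
    (a keyword node with the right keyword, or any onset/break node).
    Returns the accepted path with the lowest cost, or None if the beam
    never terminates within `max_iters` iterations.
    """
    beam = [[n] for n in start_nodes]
    for _ in range(max_iters):
        watch = []
        for path in beam:
            if is_terminal(path[-1]) and len(path) > 1:  # keep terminated paths as-is
                watch.append(path)
            else:                                        # expand, one copy per child
                watch.extend(path + [child] for child in g.successors(path[-1]))
        if not watch:
            break
        beam = sorted(watch, key=lambda p: path_cost(g, p, l_sub))[:n_paths]
        if all(is_terminal(p[-1]) and len(p) > 1 for p in beam):
            break
    accepted = [p for p in beam if is_terminal(p[-1]) and len(p) > 1]
    return min(accepted, key=lambda p: path_cost(g, p, l_sub)) if accepted else None
```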
If no retry is needed,the beam search of the next subsequence will take the subsequentnodes of the ending nodes as initial starting nodes, which keepsthe reenacted gestures as naturally continuous as possible.4.2.3 Feature-based initialization. For starting node initialization,a method based on key node features is designed for the beamsearch to increase the possibility of finding a path that costs less.The feature of a key node fis defined to be a list of lengths of thelfeattrailing natural subsequences split by any key node, ignoringthe unnatural edges:fi={fi−fi−1, fi+1−fi, . . . , f i+lfeat−1−fi+lfeat−2},where fjis the frame number of the key node with the index 1≤j≤lkin the ordered list of all lkkey nodes, fj=0when j=0,and fj=flkwhen j>lk. For a test subsequence, we calculate thefeature distance between the starting key node ktand each keynode in the graph km:dfeat(kt, km)=λfull∥wfull⊙ (ft−fm)∥2+λfirst1−fm,1ft,1+λoccom,where wfull∈ [0,1]lfeatdefines the weight for each element of thefeature, f·,1represents the first element of the feature, ⊙is thesymbol of element-wise multiplication, omis the occurrence countof the key node kmalready accepted in paths for the whole testspeech, and λfull, λfirst, λoccare the weights of the two terms. Thetop- lnpaths key nodes with minimum distances are selected to bethe initial starting nodes. Fallback measures (see appendix) guar-antee that there always are lnpaths starting nodes initialized forsearching after retries.4.2.4 Blending. After the beam search for every test subsequence,we obtain a list of paths of pose frames in the gesture motion graph.As shown in Figure 4, we design a blending mechanism to smooththe transition between paths, as they are most likely to be discon-tinuous. For two paths that are needed to be concatenated, we callthe last (up to) lblend frames of the first one left path Pland the first(up to) lblend frames of the second one right path Pr. We generate apath of new gestures for the concatenated left and right paths Pc:Pc=(1−wblend) ⊙ ( Pl⊕ ({Pr,1} ×min(lr, lblend)))+ wblend ⊙ (({ Pl,ll} ×min(ll, lblend)) ⊕Pr),where wblend is the weight vector, ⊕is the symbol of concatena-tion, ×is the symbol of repeating all elements in a vector, P·,iis thei-th node in a path, and ll, lrare the lengths of left and right paths.The weight vector can be generated by linear, sigmoid, or otherfunctions that map evenly-placed values to the range of (0,1). Forskeletons defined as exponential map rotations of the joints, wecan also convert those to quaternions and use spherical linear in-terpolation (SLERP) to blend the rotations, instead of using directweighted sum.5 EVALUATIONSTo evaluate the effectiveness of the system, we generate resultsusing the mentioned data and method with the following configu-ration: λpos=λvel=1,lcn=5,lpn=10,lnpaths =20,λw=λlen=1,lfeat=10,λfull=λfirst=1,λocc=0.5,wfull={1,0.5,0.5,0.2,0.2,0.2,0.1,0.1,0.1,0.1}, and minimum onset strength threshold 5.Gesture Motion Graphs for Few-Shot Speech-Driven Gesture Reenactment ICMI ’23, October 9–13, 2023, Paris, FranceFigure 4: An example of the blending process. The green andblack paths are blended to form a blue path (left), which isthen blended with another black path to form a red path(right).5.1 Subjective EvaluationThe generated results in Euler angle rotation representation (con-verted from exponential map) are submitted to the challenge orga-nizers and evaluated by the human evaluators recruited from sixEnglish-speaking countries [ 7]. 
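One possible conversion for this submission step is sketched below using SciPy; the exponential map is the axis-angle (rotation-vector) form expected by `Rotation.from_rotvec`, while the Euler channel order is an assumption that has to match the target BVH skeleton (the challenge utilities of [2] provide their own conversion routines).

```python
import numpy as np
from scipy.spatial.transform import Rotation


def expmap_to_euler(expmap: np.ndarray, order: str = "ZXY", degrees: bool = True):
    """Convert per-joint exponential-map rotations to Euler angles.

    `expmap` has shape (T, J, 3); each 3-vector is an axis-angle rotation
    vector. The Euler order is a placeholder -- it must match the channel
    order declared for each joint in the output skeleton.
    """
    T, J, _ = expmap.shape
    rot = Rotation.from_rotvec(expmap.reshape(-1, 3))
    return rot.as_euler(order, degrees=degrees).reshape(T, J, 3)
```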
Three aspects of the generated re-sults are evaluated and released to the participants, including thehuman-likeness, the appropriateness for agent speech, and the ap-propriateness for the interlocutor. We do not discuss the last onesince it assumes that the systems are interlocutor aware, which isnot the case for our system. No objective evaluation result is avail-able to the participants. Videos used in this evaluation are availableathttps://zenodo.org/record/8211449 .5.1.1 Appropriateness for agent speech evaluation. As mentioned,to simulate a few-shot scenario, for each test clip (minimum 60seconds, maximum 77 seconds, 62.4 seconds on average), only onetraining clip is randomly chosen as the reference speech. For the 70given test clips, 70 different training clips are finally chosen. Eachchosen training clip (minimum 60 seconds, maximum 427 seconds,170.2 seconds on average) only constitutes a tiny portion (mini-mum 0.088%, maximum 0.627%, 0.25% on average) of the wholetraining set (68069.9 seconds). For the whole test set, only 17.5%of the training data are utilized to produce the results. Despite thelow utilization of the training set, the results generated by our sys-tem (labeled SK) got a good mean appropriateness score (MAS) of0.18±0.06, ranking fourth among the 12 participants (top 1/3). Thefull results can be found in Table 1and Figure 5. This shows that thesystem is able to produce high-quality results that are comparablewith systems utilizing large-scale datasets.NA SG SJBM SFSK SISEBD SD SBSASH SLSC0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatchedFigure 5: Bar plots visualising the response distribution inthe appropriateness for agent speech study [ 7].Table 1: Appropriateness for agent speech [ 7]Condi-MASPref. Raw response counttion matched 2 1 0 −1−2 SumNA 0.81±0.06 73.6% 755 452 185 217 157 1766SG 0.39±0.07 61.8% 531 486 201 330 259 1807SJ 0.27±0.06 58.4% 338 521 391 401 155 1806BM 0.20±0.05 56.6% 269 559 390 451 139 1808SF 0.20±0.06 55.8% 397 483 261 421 249 1811SK 0.18±0.06 55.6% 370 491 283 406 252 1802SI 0.16±0.06 55.5% 283 547 342 428 202 1802SE 0.16±0.05 54.9% 221 525 489 453 117 1805BD 0.14±0.06 54.8% 310 505 357 422 220 1814SD 0.14±0.06 55.0% 252 561 350 459 175 1797SB 0.13±0.06 55.0% 320 508 339 386 262 1815SA 0.11±0.06 53.6% 238 495 438 444 162 1777SH 0.09±0.07 52.9% 384 438 258 393 325 1798SL 0.05±0.05 51.7% 200 522 432 491 170 1815SC −0.02±0.04 49.1% 72 284 1057 314 76 18035.1.2 Human-likeness. However, our system did not get a satis-fying median score ( 37∈ [35,40]) in the human-likeness evalua-tion, ranking ninth among the 12 participants. Since our systemreenacts new gestures from the raw gesture frames of the refer-ence gesture sequence, the quality of the results is heavily affectedby the quality and the length of the reference data. Flickering orother defects existing in the naturally continuous frames and thelower-than-needed training data utilization can be possible causesof the low ratings given by the evaluators. Also, the blending pro-cess can only guarantee smooth transitions between paths. If toomany transitions occur in a very short time span, it may give theevaluators some non-humanlike impression. 
In a word, increasingthe quality of the reference speech data and using more trainingdata as reference speeches may give a better score in this evalua-tion.5.2 Ablation StudyPruning strategies, feature-based initialization, fallback measures,and other new designs for the gesture motion graph are key fac-tors for the feasibility, performance, and robustness of the system.To justify this, we also conduct ablation studies using the resultsin joint position representation. We evaluate our system in threesetups on three objective metrics. The weak detection setup re-moves proper filtering measures in onset detection (with minimumonset strength threshold 0). The weak pruning setup degradespruning operations in continuity analysis ( lpn=1). The weak ini-tialization setup initializes random starting nodes in the beamsearch algorithm. The first metric is for motion synchronization(Syn) [17], which calculates the differences between velocity mag-nitudes of the generated and ground truth gestures at each frame.Note that the results of such distance comparisons cannot accu-rately measure the quality of the generated gestures. The secondmetric is a score for beat consistency (BC) [11] that measuresthe beat correlation between gestures and speech audio by calcu-lating the mean distance between the audio onsets and the nearestICMI ’23, October 9–13, 2023, Paris, France Zhao et al.Table 2: Ablation study resultsSetup Syn↓ BC↑ Div↑ #FailureWeak Detection 0.61393 0.021577 0.06101 0Weak Pruning 0.57947 0.022278 0.05795 0Weak Initialization 0.58290 0.021982 0.06639 0No Term. Fallback - - - 51Full 0.57866 0.022087 0.07461 0peaks of angle change rate. The third metric is for gesture diver-sity (Div) [21]. It calculates the ratio of large angular changes of ve-locities between frames and uses that to indicate the frequency ofmotion changes. Finally, another no termination fallback setupthat disables all termination fallback measures is added and thenumber of failures (stuck in infinite loops) during pathfindingis counted to demonstrate the necessity of these measures. We seein Table 2that although weak setups sometimes produce gestureswith a better rhythmic score, they perform much worse in velocitysimilarity to ground truth or gesture diversity. Moreover, the sys-tem fails 51 times out of 70 (73%) without the fallback measures,showing that these designs are necessary for the graph to workwith few-shot gesture data.6 CONCLUSIONIn this work, we propose a system for reenacting gestures in few-shot scenarios where very few reference samples are available basedon gesture motion graphs. The input reference gesture and speechdata are analyzed and a gesture motion graph with descriptions ofthe interframe continuity and key rhythmic and semantic events isconstructed. Given the test speech, a path of blended pose framescan be searched from the gesture motion graph to form a new se-quence of reenacted gestures. The evaluations show that the sys-tem can generate high-quality results comparable with methodsdesigned for large-scale data, and the new designs succeed in pro-viding robust performance for the system.Nevertheless, this system has its limitations in multiple aspects.For example, although the requirement for data size is reduced, thereference data still need to be high quality for reenactment. Also,the construction and search processes are manually designed basedon human prior knowledge with some of the thresholds that needto be tuned manually. 
We can explore learning-based methods thatcan enhance the mechanisms of key node detection, path cost, etc.ACKNOWLEDGMENTSThis work was supported by the National Key R&D Program ofChina (2022YFF0901902).REFERENCES[1]Tenglong Ao, Qingzhe Gao, Yuke Lou, Baoquan Chen, and Libin Liu. 2022.Rhythmic gesticulator: Rhythm-aware co-speech gesture synthesis with hier-archical neural embeddings. ACM Transactions on Graphics (TOG) 41, 6 (2022),1–19.[2]Che-Jui Chang, Sen Zhang, and Mubbasir Kapadia. 2022. The IVI Lab entry tothe GENEA Challenge 2022–A Tacotron2 based method for co-speech gesturegeneration with locality-constraint attention mechanism. In Proceedings of the2022 International Conference on Multimodal Interaction . 784–789.[3]Shiry Ginosar, Amir Bar, Gefen Kohavi, Caroline Chan, Andrew Owens, andJitendra Malik. 2019. Learning individual styles of conversational gesture. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recogni-tion. Long Beach, CA, USA, 3497–3506.[4]Ikhsanul Habibie, Mohamed Elgharib, Kripasindhu Sarkar, Ahsan Abdullah, Sim-barashe Nyatsanga, Michael Neff, and Christian Theobalt. 2022. A motionmatching-based framework for controllable gesture synthesis from speech. InACM SIGGRAPH 2022 Conference Proceedings . 1–9.[5]Lucas Kovar, Michael Gleicher, and Frédéric Pighin. 2008. Motion graphs. InACM SIGGRAPH 2008 classes . 1–10.[6]Björn Krüger, Jochen Tautges, Andreas Weber, and Arno Zinke. 2010. Fast localand global similarity searches in large motion capture databases.. In Symposiumon Computer Animation . Citeseer, 1–10.[7]Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, TeodorNikolov, Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge2023: A large-scale evaluation of gesture generation models in monadic anddyadic settings. In Proceedings of the ACM International Conference on Multi-modal Interaction (ICMI ’23) . ACM.[8]Abhishek Kumar, Shankar Vembu, Aditya Krishna Menon, and Charles Elkan.2013. Beam search algorithms for multilabel learning. Machine learning 92(2013), 65–89.[9]Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha S Srinivasa,and Yaser Sheikh. 2019. Talking with hands 16.2 m: A large-scale dataset of syn-chronized body-finger motion and audio for conversational motion analysis andsynthesis. In Proceedings of the IEEE/CVF International Conference on ComputerVision . 763–772.[10] Sergey Levine, Philipp Krähenbühl, Sebastian Thrun, and Vladlen Koltun. 2010.Gesture Controllers. ACM Trans. Graph. 29, 4, Article 124 (jul 2010), 11 pages.https://doi.org/10.1145/1778765.1778861[11] Buyu Li, Yongchi Zhao, Shi Zhelun, and Lu Sheng. 2022. Danceformer: Musicconditioned 3d dance generation with parametric motion transformer. In Pro-ceedings of the AAAI Conference on Artificial Intelligence , Vol. 36. 1272–1279.[12] Librosa Development Team. 2023. librosa.onset.onset_detect - librosa 0.10.1devdocumentation. https://librosa.org/doc/main/generated/librosa.onset.onset_detect.html#librosa.onset.onset_detect[13] Xian Liu, Qianyi Wu, Hang Zhou, Yinghao Xu, Rui Qian, Xinyi Lin, XiaoweiZhou, Wayne Wu, Bo Dai, and Bolei Zhou. 2022. Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition . 10462–10472.[14] Stacy Marsella, Yuyu Xu, Margaux Lhommet, Andrew Feng, Stefan Scherer, andAri Shapiro. 2013. Virtual Character Performance from Speech. 
In Proceedingsof the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation(Anaheim, California) (SCA ’13) . Association for Computing Machinery, NewYork, NY, USA, 25–35. https://doi.org/10.1145/2485895.2485900[15] Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter,and Michael Neff. 2023. A Comprehensive Review of Data-Driven Co-SpeechGesture Generation. arXiv preprint arXiv:2301.05339 (2023).[16] Alla Safonova and Jessica K Hodgins. 2007. Construction and optimal search ofinterpolated motion graphs. In ACM SIGGRAPH 2007 papers . 106–es.[17] Jing Xu, Wei Zhang, Yalong Bai, Qibin Sun, and Tao Mei. 2022. Freeform BodyMotion Generation from Speech. arXiv preprint arXiv:2203.02291 (2022), 1–10.[18] Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao,and Haolin Zhuang. 2023. QPGesture: Quantization-Based and Phase-GuidedMotion Matching for Natural Speech-Driven Gesture Generation. In Proceedingsof the IEEE/CVF Conference on Computer Vision and Pattern Recognition . 2321–2330.[19] Sheng Ye, Yu-Hui Wen, Yanan Sun, Ying He, Ziyang Zhang, Yaoyuan Wang, Wei-hua He, and Yong-Jin Liu. 2022. Audio-Driven Stylized Gesture Generation withFlow-Based Model. In Computer Vision–ECCV 2022: 17th European Conference,Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part V . Springer, 712–728.[20] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, JaehongKim, and Geehyuk Lee. 2020. Speech Gesture Generation from the TrimodalContext of Text, Audio, and Speaker Identity. ACM Trans. Graph. 39, 6, Article222 (nov 2020), 16 pages. https://doi.org/10.1145/3414685.3417838[21] Zeyu Zhao, Nan Gao, Zhi Zeng, and Shuwu Zhang. 2022. Generating DiverseGestures from Speech Using Memory Networks as Dynamic Dictionaries. In2022 International Conference on Culture-Oriented Science and Technology (CoST) .163–168. https://doi.org/10.1109/CoST57098.2022.00042[22] Chi Zhou, Tengyue Bian, and Kang Chen. 2022. GestureMaster: Graph-basedspeech-driven gesture generation. In Proceedings of the 2022 International Con-ference on Multimodal Interaction . 764–770.[23] Yang Zhou, Jimei Yang, Dingzeyu Li, Jun Saito, Deepali Aneja, and EvangelosKalogerakis. 2022. Audio-driven neural gesture reenactment with video motiongraphs. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition . 3418–3428.[24] Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, and Lequan Yu.2023. Taming Diffusion Models for Audio-Driven Co-Speech Gesture Gener-ation. arXiv preprint arXiv:2303.09119 (2023).Gesture Motion Graphs for Few-Shot Speech-Driven Gesture Reenactment ICMI ’23, October 9–13, 2023, Paris, FranceA METHOD DETAILSA.1 PathfindingA.1.1 Termination Fallback Measures. If no matching keyword nodeis found after multiple retries, the stopping nodes should fall backon onset or break nodes. Also, if the shortest subsequence in thegraph is still much longer than the target length, it is difficult forany output path to be considered accepted, in which case the num-ber of retries keeps increasing endlessly. This can be solved by ran-domly discarding some stopping nodes gradually to destructivelylengthen the subsequences. If this operation does not stop the num-ber of retries from endlessly increasing, that means the longest sub-sequence is still much shorter than the target length. 
In this case, we can return a path with the target length by repeating the last node of the previous search result and terminating the search for the current test subsequence.

A.1.2 Path Acceptance. A path is considered accepted when it is terminated and its length is between 0.9 and 1.1 times the length of the test subsequence. The accepted path with the minimum cost is selected as the search result if any exists, and it is then resampled evenly if its length l_path is not equal to the target length l_sub.

A.1.3 Initialization Fallback Measures. On each retry, the last top key nodes are discarded and the next top-l_npaths key nodes are selected. If no sufficient key node is available, we can randomly select l_npaths arbitrary nodes as a fallback measure. |
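To make the path acceptance rule of A.1.2 concrete, here is a minimal sketch, assuming a candidate path is represented as a list of node indices together with a cost and a termination flag (these data structures and names are ours, not the authors'):

```python
def select_accepted_path(candidates, target_len):
    """candidates: list of (path, cost, terminated) tuples, where path is a list of node indices."""
    accepted = [(path, cost) for path, cost, terminated in candidates
                if terminated and 0.9 * target_len <= len(path) <= 1.1 * target_len]
    if not accepted:
        return None  # the caller retries, possibly applying the fallback measures described above
    best, _ = min(accepted, key=lambda pair: pair[1])       # minimum-cost accepted path
    if len(best) != target_len:                             # resample evenly to the target length
        idx = [round(i * (len(best) - 1) / max(target_len - 1, 1)) for i in range(target_len)]
        best = [best[i] for i in idx]
    return best
```

The even resampling here is a simple nearest-index scheme; the paper does not specify the exact interpolation used.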
xPQcKA56N4j | Discrete Diffusion for Co-Speech Gesture SynthesisAnkur ChemburkarShuhong LuAndrew Fengchemburk@usc.edushuhongl@usc.edufeng@ict.usc.eduInstitute for Creative Technologies, University of Southern CaliforniaLos Angeles, California, United StatesABSTRACTIn this paper, we describe the gesture synthesis system we devel-oped for our entry to the GENEA Challenge 2023. One challenge inlearning the co-speech gesture model is that there may be multipleviable gesture motions for the same speech utterance. Thereforecompared to a deterministic regression model, a probabilistic modelwill be preferred to handle the one-to-many mapping problem.Our system utilizes the vector-quantized variational autoencoder(VQ-VAE) and discrete diffusion as the framework for predictingco-speech gestures. Since the gesture motions are produced viasampling the discrete gesture tokens using the discrete diffusionprocess, the method is able to produce diverse gestures given thesame speech input. Based on the user evaluation results, we furtherdiscuss about the strength and limitations of our system, and pro-vide the lessons learned when developing and tuning the system.The subjective evaluation results show that our method ranks inthe middle for human-likeness among all submitted entries. In thethe speech appropriateness evaluations, our method has prefer-ences of 55.4% for matched agent gesture and 51.1% for matchedinterlocutor gestures. Overall, we demonstrated the potential ofdiscrete diffusion models in gesture generation.CCS CONCEPTS•Computing methodologies →Intelligent agents ;Animation ;Neural networks .KEYWORDSgesture synthesis, computer animation, neural networksACM Reference Format:Ankur Chemburkar, Shuhong Lu, and Andrew Feng. 2023. Discrete Diffusionfor Co-Speech Gesture Synthesis. In INTERNATIONAL CONFERENCE ONMULTIMODAL INTERACTION (ICMI ’23 Companion), October 9–13, 2023,Paris, France. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3610661.3616556Permission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than theauthor(s) must be honored. Abstracting with credit is permitted. To copy otherwise, orrepublish, to post on servers or to redistribute to lists, requires prior specific permissionand/or a fee. Request permissions from permissions@acm.org.ICMI ’23 Companion, October 9–13, 2023, Paris, France©2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.ACM ISBN 978-8-4007-0321-8/23/10. . . $15.00https://doi.org/10.1145/3610661.36165561 INTRODUCTIONCo-speech gesture synthesis is an important capability for drivingvirtual character movements in conversational interactions withhuman users. It plays an essential role in augmenting the virtualhuman with non-verbal behaviors that mimic actual human commu-nications in addition to speech lip-syncing animations. However, itis not trivial to synthesize gesture motions that are both human-likeand correspond well to the speech input.In general, the process of gesture generation from speech tomotion is a non-deterministic one-to-many mapping, which indi-cates that multiple gestures could correspond to the same speechinput to convey a similar meaning. 
For example, a left-hand beat, aright-hand beat, or a beat involving hands will all be appropriaterepresentations of a beat motion corresponding to an utterance.Therefore instead of using deterministic models [ 13,40,41] to pre-dict gestures, the recent methods utilized the probablistic frame-works [ 2,23] by sampling the latent space to accommodate thenon-deterministic natures of gesture synthesis.For the GENEA challenge [ 21], we have developed our gesturesynthesis system based on vector-quantized variational autoen-coder (VQ-VAE) and denoising diffusion probabilistic models. Weassume that by utilizing the discrete tokens, the gesture synthesisproblem could be regarded as token sampling based on the pre-dicted logits. This allows gestures that are far apart in the motionspace to be still mapped to the same input utterance. By leveragingthe disentanglement of information in the latent space of VQ-VAE,the system gains the potential for controllable gesture synthesis.The diffusion methods have been adapted successfully for variousapplications including image and motion synthesis [ 10,35,44]. Themotivation for our system is to utilize these recent developments ingenerative models for gesture synthesis. One more insight for em-ploying the diffusion process is that diffusion models are inherentlyrobust to noise and uncertainty in the data. We aim to reduce jit-tering results generated by many previous methods. Diffusion caneffectively denoise corrupted inputs by stepping backward throughthe diffusion process, aiding in data recovery and reconstructiontasks. Specifically, we first learn the discrete latent codes from theinput motions using VQ-VAE. These codes are then used by thediscrete denoising diffusion probabilistic models (D3PM) to learnthe denoise process. By learning the denoising model in the discretelatent space, the method is able to leverage the synthesis strengthfrom the diffusion process while also greatly reducing the compu-tational costs by requiring much fewer diffusion steps to converge.After predicting the discrete codes, the model then reconstructsICMI ’23 Companion, October 9–13, 2023, Paris, France Chemburkar et al.the gesture motions through the decoder of VQ-VAE. From thesynthesis results, we found that the method is able to produce di-verse gestures with good motion dynamics. A demonstration videoshowcasing our results can be accessed by visiting the providedlink: here."2 BACKGROUND2.1 Co-Speech Gesture SynthesisIn the realm of speech gesture synthesis, traditional rule-basedapproaches have relied on manually created sets of gesture units,employing predefined rules and heuristics to generate gesturesbased on linguistic and contextual information [ 5,19,25]. Someapproaches have attempted to extract gesture units from train-ing speech-gesture pairs [ 12,16]. 
However, these methods havestruggled in accurately estimating gesture attributes and effectivelyforming units, thereby impacting the final quality of results.In contrast, learning-based approaches have emerged, whereincertain methods utilize speech-gesture pair data to train end-to-end models that directly predict co-speech gestures, treating thetask as a regression problem from speech to gestures [ 6,14,20,40].However, a significant challenge arises when a single speech inputcorresponds to multiple variants of gestures, as the regression modeltends to average the gesture poses, resulting in inferior outcomes.This challenge is commonly referred to as the one-to-many mappingfrom speech to gestures issue.Recent advancements have approached gesture synthesis in aprobabilistic framework, enabling the generation of multiple ges-ture sequences from a single speech input through latent spacesampling [ 1,2,7,23,24,27]. Nonetheless, as the length of thesequence increases, the process of generating data sequentiallybecomes time-consuming, and the dependency information is lostas each element relies on the previously generated ones [29].Based on the aforementioned points, we propose our model thatcombines the VQ-VAE and diffusion techniques to tackle thesechallenges and enhance the synthesis of speech gestures.2.2 Discrete Latent Space LearningA VAE (Variational Autoencoder) is a type of generative model thatlearns a compressed representation of input data by mapping it toa lower-dimensional latent space, typically modeled as a Gaussiandistribution, using an encoder. In the case of VQ-VAE, the latentspace is discretized into a finite set of codebooks [ 36]. This allowsfor the encoding of original gestures into small, trainable data unitsusing vector quantization. Recent model design and training tech-niques have been focusing on improvements for learning the latentspace reconstructions. For instance, Jukebox [ 9] trained separateVQ-VAEs on data with different resolutions by hierarchically down-sampling the input data. RQ-VAE [ 30] reduces the reconstructionerrors by recursively quantizing the feature maps using a fixed-sizecodebook.One known issue in VQ-VAE is codebook collapse [ 30], wheremultiple embeddings in the codebook collapse and become identicalor nearly identical during training. This collapse leads to a loss ofdiversity in learned representations and can adversely affect modelperformance and generation quality. Several techniques have beenproposed to mitigate codebook collapse, including re-initializingunused codes to random vectors during each training iteration [ 9],normalizing mean squared error (MSE) for reconstruction [ 39], andupdating codebook embeddings with exponential moving averages[30].VQ-VAE method typically utilizes autoregressive transformersto learn a probability distribution over the latent space during thegenerative stage. However, autoregressive models often strugglewith capturing long-range dependencies in the data, as each el-ement’s conditioning is limited to the previous elements. In thiswork, we instead applied discrete diffusion to enlarge the samplingwindow size without negatively affecting the performance of thegenerated sequences.2.3 Denoising Diffusion Probabilistic ModelsDiffusion models have emerged as a prominent approach in imagesynthesis and motion generation, showcasing their ability to gen-erate complex and realistic results. 
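Returning briefly to the codebook-collapse mitigations listed in Section 2.2, the exponential-moving-average codebook update of [30] can be sketched as follows; this is a generic illustration rather than code from any of the cited systems:

```python
import torch

def ema_codebook_update(codebook, cluster_size, embed_sum, encodings, flat_inputs,
                        decay=0.99, eps=1e-5):
    """One EMA update step for a VQ-VAE codebook.
    codebook: (K, h) embeddings; encodings: one-hot float tensor (N, K) of code assignments;
    flat_inputs: (N, h) encoder outputs. cluster_size (K,) and embed_sum (K, h) are running buffers."""
    cluster_size.mul_(decay).add_(encodings.sum(0), alpha=1 - decay)
    embed_sum.mul_(decay).add_(encodings.t() @ flat_inputs, alpha=1 - decay)
    n = cluster_size.sum()
    smoothed = (cluster_size + eps) / (n + codebook.size(0) * eps) * n   # Laplace smoothing
    codebook.copy_(embed_sum / smoothed.unsqueeze(1))
    return codebook
```

The running buffers persist across training iterations; dead codes can additionally be re-initialized to random encoder outputs, as mentioned above.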
In contrast to autoregressivegenerative models, diffusion models provide greater flexibility withreduced error accumulation during inference and are well-suitedfor parallel training since they are not constrained by step-by-stepsampling [10, 17, 31–33].In the continuous diffusion process, the target data array, suchas gesture motions in our case, undergoes an iterative injectionof Gaussian noise through a forward Markov process until purenoise is obtained. In the subsequent reverse process, the modellearns to gradually denoise the sample. The diffusion transformerframework has found application in motion synthesis domains,including tasks like audio-conditioned gesture generation [ 43] thatcan effectively handle long-term dependencies in gesture sequences.Several notable adaptations of diffusion models have been made forhuman motion synthesis as well, such as generating raw motionframes [ 35] and improving jittering problems through time-varyingweight schedules for noise estimation [ 8]. In the realm of gesturesynthesis, Ao et al. [ 3] leverage a latent diffusion model and apply aContrastive-Language-Image-Pretraining strategy [ 28] to learn therelationship between speech transcripts and gestures. Additionally,Zhu et al. [ 46] focus on ensuring temporal coherence by tailoringtheir Diffusion Co-Speech Gesture framework in the context ofgesture synthesis.Diffusion models can also be extended to discrete data, includingcategorical labels or text. For example, D3PM [ 4] utilizes a transitionmatrix in the noising step to handle discrete data. Another variant,the VQ-Diffusion model [ 15], combines a VQ-VAE with a conditionalDDPM variant to model the latent space for text-to-image synthesis.In our system, we adapted the discrete diffusion model to producegesture token sequences based on input conditions.3 DATA PRE-PROCESSINGThe training data for the GENEA Challenge 2023 is based on asubset of the Talking with Hands (TWH) dataset [ 22]. The datasetincludes the entirety of dyadic interactions, with audio and speechtext features from both the main agent and interlocutor.In accordance with [ 42], we undertook analogous data prepro-cessing procedures.For input gesture representation, we first down-sampled the input motions to 30 fps and applied a sliding window of64 frames with a step size of 10 frames to produce gesture samples.Discrete Diffusion for Co-Speech Gesture Synthesis ICMI ’23 Companion, October 9–13, 2023, Paris, FranceEach gesture sample is converted into a tensor of size T×J×D,whereT=64is the sliding window size, Jis the number of joints,andDis the size for joint rotation representation.We also use D=6as the representation for joint rotations basedon previous research [ 45] to prevent singularities and reduce ro-tation approximation errors. The pose dimension we used is 153,which includes 6D rotation vectors for 25 joints and the root transla-tion. For each gesture sample, our target is to predict the main agentposes, and we combine the audio features from both the main agentand interlocutor as the input conditions to our model. Followingthe baseline data processing scripts provided by the organizers, theaudio features include Mel-frequency cepstral coefficients (MFCCs),spectrogram, and speech prosody. 
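To make the windowing above concrete, a rough sketch (not the official challenge preprocessing script) of cutting 64-frame training samples with a hop of 10 frames from a 30 fps motion sequence:

```python
import numpy as np

def make_gesture_windows(motion, win=64, hop=10):
    """motion: float array of shape (num_frames, pose_dim) at 30 fps; pose_dim = 153 in the paper
    (6-D rotations for 25 joints plus the root translation). Returns (num_windows, win, pose_dim)."""
    starts = range(0, motion.shape[0] - win + 1, hop)
    return np.stack([motion[s:s + win] for s in starts])

# e.g. 60 s of motion at 30 fps yields (1800 - 64) // 10 + 1 = 174 training windows
windows = make_gesture_windows(np.zeros((1800, 153)))
print(windows.shape)  # (174, 64, 153)
```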
We concatenate all three featuresfor both agents into the final speech audio features.4 METHODThe method implemented in our system uses a two-stage architec-ture to train the gesture synthesis models; the first stage involveslearning discrete tokens using VQ-VAE, while the second stagemakes use of the discrete diffusion process to learn conditionaltoken distributions. Figure 1 presents a summary of our approachbased on discrete diffusion.4.1 Discrete Gesture Token LearningWe employ a latent space vector quantization model that has beenspecially trained on the realm of three-dimensional human gestures.When given a human gesture represented by a sequence of posesg∈RL×Dg, where Ldenotes the length of the gesture sequence andDgdenotes the dimensions of a single gesture frame, an encoderEconverts these frames into gesture tokens or snippets s∈Rl×h,where ldenotes a number significantly less than Landhdenotesthe latent dimension. Then, using a discrete quantization techniqueDQand a learned codebook Cwith Kembedding entries (c1,...cK)of dimensions Rh, these fragments are converted into quantizedvectors b∈Rl×h.DQperforms a transformation on sby comparing(si)ti=1to all codebook entries and switches the snippet with theclosest codebook index. Hence, the process DQis defined as,ki=argmin cj∈C||si−cj|| (1)In the reverse quantization process to determine the latent embed-ding for each snippet, DQ’transforms the indices kinto the relevantentries bfrom codebook C. In the end, a decoder Dreconstructsbto the 3D space for human gestures. The general formulation ofthis autoencoder technique is:bg=D(DQ′(DQ(E(g)))) (2)This procedure is trained with an embedding loss to update thecodebook entries and stabilize training, and a reconstruction lossbetween gandbggiven by:Lvq=||bg−g||1+||sg[E(g)]−b||22+β||E(g)−sg[b]||22(3)sg[.] stands for the stop gradient operation in this context andβis a weighting factor. Since the quantization process DQis notdifferentiable, back-propagation was made possible by using thestraight-through gradient estimator [37].In our system, the encoder and decoder layers for the VQ-VAEmodel are a series of convolutional layers with skipped connec-tion, which are adapted from the recent work in image synthesis[11]. Since their original applications were 2D image synthesis,we changed the 2D convolutions layers into 1D to better fit thedata dimensions for the gesture motions. We use l=L/4in ourexperiments which gives us a sequence length lof 16.4.2 Diffusion for Discrete Gesture TokensThe discrete diffusion model and its continuous equivalent sharemany similarities. The forward diffusion process gradually corruptsthe sample through a Markov chain q(kt|kt−1), given a sequenceof discrete tokens k0∈Il, where the subscript denotes the diffusionstep. Following the discrete diffusion process [ 15], we employ theforward process to create progressively noisier latent variablesk1,..., kT∈Il, whereTrepresents the total number of diffusionsteps. In this discrete diffusion example, kTconsists of pure noiseor all masked tokens.The reverse diffusion process samples from the reverse distri-butionq(kt−1|kt,k0)in an attempt to reconstruct k0from kT. Toapproximate the reverse distribution, we train a transformer modelas the denoising model. 
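Before continuing with the denoising transformer, the quantization step of Eqs. (1)–(3) can be summarized in a short PyTorch sketch; this reflects our reading of the equations rather than the authors' released code:

```python
import torch

def vector_quantize(s, codebook, beta=0.25):
    """s: (batch, l, h) encoder outputs; codebook: (K, h) embeddings; beta is not given in the paper.
    Returns straight-through quantized vectors, code indices, and the two codebook terms of Eq. (3)."""
    dists = torch.cdist(s, codebook.unsqueeze(0).expand(s.size(0), -1, -1))  # (batch, l, K)
    k = dists.argmin(dim=-1)                        # nearest codebook entry per snippet, Eq. (1)
    b = codebook[k]                                 # reverse quantization: look up the embeddings
    codebook_loss = (s.detach() - b).pow(2).mean()          # ||sg[E(g)] - b||^2
    commit_loss = beta * (s - b.detach()).pow(2).mean()     # beta * ||E(g) - sg[b]||^2
    b_st = s + (b - s).detach()                     # straight-through estimator for backpropagation
    return b_st, k, codebook_loss + commit_loss
```

The straight-through trick copies gradients from the quantized vectors back onto the encoder outputs, which is what makes the non-differentiable argmin of Eq. (1) trainable; the L1 reconstruction term of Eq. (3) is computed on the decoder output and is omitted here.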
The transformer model produces the distri-bution represented by the symbol pθ(kt−1|kt,y), where ydenotesthe condition (e.g., speech/text/interlocutor gestures or their com-bination).The transitional probabilities between codebook indices aredefined by fixed transition matrices Qt∈R(K+1)×(K+1)at eachtimestep. The matrix Qis given by,Qt=αt+βtβtβt... 0βtαt+βtβt... 0βtβtαt+βt... 0...............γtγtγt... 1(4)The [MASK] token is represented by the extra dimension inK+1. According to Qt, an index in kthas a probability of Kβtof being replaced by another index chosen randomly from the Kindices, with a probability γtof turning into a [MASK] index, anda probability of αtof staying the same index at each diffusion step.During training, the forward diffusion process becomes efficientby utilizing the closed-form equation [ 15] of the cumulative transi-tion matrixQt=Qt...Q 1, which expresses the transition probabil-ity from k0toktand the corresponding forward probability distri-butionq(kt|k0). Throughout the reverse process, the model learnsto approximate the posterior q(kt−1|kt,k0)withpθ(kt−1|kt,y), asmentioned earlier.To enhance generation results, recent efforts [ 4,18] utilize areparameterization approach, approximating the distribution ratherthan directly modeling the posterior. The denoising model producesdenoised gesture tokens given by pθ( ̃k0|kt,y). By using the de-noised token distribution pθ( ̃k0|kt,y)and the posterior distributionq(kt−1|kt, ̃k0), we sample the(t−1)-th gesture from pθ(kt−1|kt,y)during inference.The diffusion model is implemented as a transformer architecture[38] with 19 layers and 16 attention heads. We use 100 diffusionICMI ’23 Companion, October 9–13, 2023, Paris, France Chemburkar et al.Figure 1: Architecture for VQ-Diffusion model. The top half represents the VQ-VAE model framework. Bottom left figure brieflyshows the forward and reverse process of the training stage in Diffusion. Bottom right figure explains the inference stage withthe reparametrization trick.steps for our method and set the condition hidden dimension as512.4.3 Classifier-Free GuidanceThe diffusion model attempts to optimize the prior distributionp(k|y)during the training phase of a conditional generation taskusing kas a sample and yas the associated condition, providedthat the posterior distribution p(y|k)is satisfied. It’s probable thatthroughout training, this posterior probability will be disregarded.It is possible that the model merely uses the corrupted sample toreconstruct and ignores the conditional input because it has accessto both the corrupted sample and the condition. The posterior issue[34], or poor alignment between the generated sample and thecondition, results from this.Therefore, both p(k|y)andp(y|k)must be included in our opti-mization objective. One way to do this is to optimize logp(k|y)+slogp(y|k), where sdenotes the guidance scale which is a hyper-parameter. By using Bayes’ Theorem, this optimization functioncan be expressed as:argmax k=[logp(k)+(s+1)(logp(k|y)−logp(k))] (5)where p(k)is the unconditional distribution of k. To handle theunconditional inputs, the model is also trained with a ’null’ con-dition [ 26] for a select percentage of samples. It has been shownthat implementing a learnable conditional vector instead of a ’null’condition is more suitable for training classifier-free guidance [ 34].We adopt the technique with a learnable null vector in our im-plementation. 
Empirically, we found that using the classifier-freeguidance with a proper guidance scale improves the overall gesturesynthesis results.5 RESULTS AND DISCUSSION5.1 Implementations and ExperimentsWe chose to train VQ-VAE over 35k steps (120 epochs) on a batchsize of 256 which takes approximately 90 minutes to show properconvergence. The VQ-VAE model was trained with both the L2reconstruction loss and the codebook loss. In addition, we utilizedFréchet Gesture Distance (FGD) as the perceptual metric to evaluatewhether the reconstructed motions were statistically faithful to theoriginal motion styles. Figure 2 (Top row) shows the loss graphs fortraining the VQ-VAE, which demonstrates the method is capable oflearning the discrete representation and reconstructing the originalgestures. The VQ-VAE model shows good gesture reconstructioncapabilities as proven by the best validation FGD of 0.7. However,empirically we observed one peculiarity that using the VQ-VAEmodel with the best reconstruction FGD may produce worse resultswhen training the discrete diffusion model in the 2nd stage. Wesuspected this may be due to overfitting and thus chose a VQ-VAEcheckpoint with FGD of 1 for training the discrete diffusion model.For training the 2nd stage diffusion model, the KL divergenceloss was used since the diffusion is operated on the discrete la-bels. For selecting the best checkpoint, FGD was also used as theevaluation metric to reflect the motion quality of synthesized ges-tures. During training, the discrete diffusion model converged witha steady decrease in KL loss until the model started to overfit ataround 12K steps again on a batch size of 256. The FGD was alsoconverging smoothly without large fluctuations as shown in Figure2 (Bottom row). As seen in the plots, FGD continued to improvedespite the increase in validation loss. Therefore for stage 2, wepicked the checkpoint with the lowest FGD since it was observedDiscrete Diffusion for Co-Speech Gesture Synthesis ICMI ’23 Companion, October 9–13, 2023, Paris, FranceFigure 2: Metric plots on the Genea2023 dataset training and validation. Top row shows the metrics for training and validatingof the VQ-VAE stage with training loss, validation loss and FGD from left to right. Bottom row shows the metrics for diffusionmodel trained and validated on the above VQ-VAE. Once, again with training loss, validation loss and FGD from left to right.empirically that the overfitted model with lower FGD resulted inbetter-looking gestures.5.2 Subjective EvaluationsThe user study and evaluations were conducted by the GENEA 2023organizers. The videos for the subjective evaluations were renderedfrom the gesture motion submissions from each team. Since thechallenge dataset is based on dyadic conversations between twoagents, three tasks were evaluated to properly assess different qual-ities for the generated gesture motions. The Human-likeness studymeasures the overall quality of the generated motions without fac-toring in the speech content. Appropriateness for agent speechstudy measures whether the synthesized gestures correspond wellto the input speech without considering the interlocutor. Finally, ap-propriateness for the interlocutor includes the dyadic interactionsto evaluate whether the interlocutor’s motions are proper giventhe conversations and the main agent’s motions. In the following,we further discuss the evaluation results for our system (SI).Figures 3, 4a, 4b show the subjective evaluations of various mod-els on the test dataset. 
Our model (SI) shows average performanceand ranks in the middle of all competing models. The average re-sult can be attributed to a few reasons. First, due to the efforts fordeveloping and tuning the VQ-diffusion model, we were not able toperform extensive experiments with all different input conditionswithin the timeline for the Challenge. Therefore the model has beenconditioned only on the audio of the main agent and interlocutorfor simplicity in the experiments. The possible improvement wouldbe including additional conditions such as the text transcript forbetter speech context, interlocutor gestures for more appropriatedyadic gestures and speaker identities for varying the gesture stylesof different speakers. A combination of these input features canbe fused with the audio features in a joint embedding space whichcould serve as a better conditional input for diffusion. AnotherHuman-likeness ratingNA SG SF SJ SL SE SH BD SD BM SI SK SA SB SC020406080100Figure 3: Box plot visualising the ratings distribution in thehuman-likeness study. Red bars are the median ratings (eachwith a 0.05 confidence interval); yellow diamonds are meanratings (also with a 0.05 confidence interval). Box edges are at25 and 75 percentiles, while whiskers cover 95 % of all ratingsfor each condition. Conditions are ordered by descendingsample median rating.reason for the average performance is that we have ignored synthe-sizing the finger joints when training our models, and focused onlyon producing the body and arm motions. Including these additionalfinger motions would likely enhance the details of the gestures andboost the overall motion quality in the subjective evaluations.Moreover, on inspection of our generated gestures visually, weobserved a jittering issue in some results. Specifically, sometimesthe synthesized gesture motions may produce abrupt movementsICMI ’23 Companion, October 9–13, 2023, Paris, France Chemburkar et al.NA SG SJBM SFSK SISEBD SD SBSASH SLSC0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatched(a) Appropriateness for agent speechNA SABD SB SLSESF SISDBM SJSCSKSGSH0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatched(b) Appropriateness for the interlocutorFigure 4: Bar plots visualising the response distribution inthe appropriateness studies. The blue bar (bottom) repre-sents responses where subjects preferred the matched mo-tion, the light grey bar (middle) represents tied (“They areequal”) responses, and the red bar (top) represents responsespreferring mismatched motion, with the height of each barbeing proportional to the fraction of responses in each cat-egory. Lighter colours correspond to slight preference, anddarker colours to clear preference. On top of each bar is alsoa confidence interval for the mean appropriateness score,scaled to fit the current axes. The dotted black line indicateschance-level performance. Conditions are ordered by meanappropriateness score.that look like noises and motion artifacts. Originally we thoughtthis was due to the singularity of the pose representation. However,the jittering still persisted after we switched to the 6-D rotationrepresentation. 
Therefore we speculated that the possible reason forthis effect could be due to the discrete nature of the representation.During the learning process, the discrete diffusion process mighthave predicted to shift between codebook indices representing twovery different gestures. Even though the VQ-VAE decoder shouldalleviate the discontinuous motions, this may still lead to suddenspeed changes in the gesture being performed and reduces theoverall smoothness of the produced motion. Resolving this issuerequires a deeper investigation into the diffusion model training tounderstand the cause. Some heuristics could also be implementedto prevent sampling the subsequent gesture tokens that are too faraway in the motion space.While we believe the proposed architecture of discrete condi-tional diffusion is a promising method, a significant disadvantageto this method is having to train two different models. It requirestraining both the VQ-VAE model for learning the discrete latentcodes and the discrete diffusion model for learning the conditionalinference. Thus the performance of the diffusion model dependsheavily on the quality of VQ-VAE and slight variance in VQ-VAE canlead to significant performance differences in the final performance.In our experiment, we found that the codebook size of the VQ-VAE is also an important factor and it is easy to overfit if a largecodebook size is chosen. For example, using a codebook size of 1024produces worse results than a codebook size of 256, which was usedin our final model. Another hyperparameter requires tuning in theguidance scale in the diffusion process. The final quantitative resultsvary significantly on the guidance scale. We found a guidance scaleof 4 to give the best results.6 CONCLUSIONS AND TAKEAWAYSIn this paper, we describe the gesture synthesis method of our sub-mission entry to GENEA Challenge 2023 [ 21]. Overall, the discretediffusion method is able to leverage the generative strength of thediffusion process while reducing the inference time compared torunning the diffusion on the full motion poses. However, the userstudy results showed that there is still room for improvement inour proposed system. In the future, we plan to address the issues ofjittering artifacts and finger motions to improve the overall motionquality. We also hope to experiment with additional input condi-tions to produce proper motions in dyadic scenarios. We believe themethod requires more refinements and could be a promising direc-tion for generating stylized gestures using various input conditionssuch as audio, text, and speaker identities once these drawbacksare addressed.7 ACKNOWLEDGMENTThis work is supported by University Affiliated Research Center(UARC) award W911NF-14-D-0005. Statements and opinions ex-pressed and content included do not necessarily reflect the positionor the policy of the Government, and no official endorsement shouldbe inferred.REFERENCES[1]Chaitanya Ahuja and Louis-Philippe Morency. 2019. Language2pose: Naturallanguage grounded pose forecasting. In 2019 International Conference on 3D Vision(3DV) . IEEE, 719–728.[2]Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow.2020. Style-Controllable Speech-Driven Gesture Synthesis Using NormalisingFlows. Computer Graphics Forum 39 (5 2020), 487–496. Issue 2. https://doi.org/10.1111/CGF.13946[3]Tenglong Ao, Zeyi Zhang, and Libin Liu. 2023. GestureDiffuCLIP: Gesture Diffu-sion Model with CLIP Latents. 
arXiv preprint arXiv:2303.14613 (2023).[4]Jacob Austin, Daniel D Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van denBerg. 2021. Structured denoising diffusion models in discrete state-spaces. Ad-vances in Neural Information Processing Systems 34 (2021), 17981–17993.[5]Kirsten Bergmann and Stefan Kopp. 2009. Increasing the Expressiveness of VirtualAgents: Autonomous Generation of Speech and Gesture for Spatial DescriptionTasks. In Proceedings of The 8th International Conference on Autonomous Agentsand Multiagent Systems - Volume 1 (Budapest, Hungary) (AAMAS ’09) . Interna-tional Foundation for Autonomous Agents and Multiagent Systems, Richland,SC, 361–368.[6]Uttaran Bhattacharya, Elizabeth Childs, Nicholas Rewkowski, and DineshManocha. 2021. Speech2affectivegestures: Synthesizing co-speech gestures withDiscrete Diffusion for Co-Speech Gesture Synthesis ICMI ’23 Companion, October 9–13, 2023, Paris, Francegenerative adversarial affective expression learning. In Proceedings of the 29thACM International Conference on Multimedia . 2027–2036.[7]Uttaran Bhattacharya, Nicholas Rewkowski, Abhishek Banerjee, Pooja Guhan,Aniket Bera, and Dinesh Manocha. 2021. Text2gestures: A transformer-basednetwork for generating emotive body gestures for virtual agents. In 2021 IEEEVirtual Reality and 3D User Interfaces (VR) . IEEE, 1–10.[8]Rishabh Dabral, Muhammad Hamza Mughal, Vladislav Golyanik, and ChristianTheobalt. 2022. MoFusion: A Framework for Denoising-Diffusion-based MotionSynthesis. arXiv preprint arXiv:2212.04495 (2022).[9]Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford,and Ilya Sutskever. 2020. Jukebox: A generative model for music. arXiv preprintarXiv:2005.00341 (2020).[10] Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion Models Beat GANs onImage Synthesis. In Advances in Neural Information Processing Systems , M. Ran-zato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (Eds.),Vol. 34. Curran Associates, Inc., 8780–8794. https://proceedings.neurips.cc/paper_files/paper/2021/file/49ad23d1ec9fa4bd8d77d02681df5cfa-Paper.pdf[11] P. Esser, R. Rombach, and B. Ommer. 2021. Taming Transformers for High-Resolution Image Synthesis. In 2021 IEEE/CVF Conference on Computer Visionand Pattern Recognition (CVPR) . IEEE Computer Society, Los Alamitos, CA, USA,12868–12878. https://doi.org/10.1109/CVPR46437.2021.01268[12] Ylva Ferstl, Michael Neff, and Rachel McDonnell. 2021. ExpressGesture: Ex-pressive gesture generation from speech through database matching. Com-puter Animation and Virtual Worlds 32 (6 2021), e2016. Issue 3-4. https://doi.org/10.1002/CAV.2016[13] S. Ginosar, A. Bar, G. Kohavi, C. Chan, A. Owens, and J. Malik. 2019. LearningIndividual Styles of Conversational Gesture. In Computer Vision and PatternRecognition (CVPR) . IEEE.[14] Shiry Ginosar, Amir Bar, Gefen Kohavi, Caroline Chan, Andrew Owens, andJitendra Malik. 2019. Learning Individual Styles of Conversational Gesture. 2019IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019-June (6 2019), 3492–3501. https://doi.org/10.1109/CVPR.2019.00361[15] Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, LuYuan, and Baining Guo. 2022. Vector quantized diffusion model for text-to-imagesynthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition . 10696–10706.[16] Ikhsanul Habibie, Mohamed Elgharib, Kripashindu Sarkar, Ahsan Abdullah, Sim-barashe Nyatsanga, Michael Neff, and Christian Theobalt. 2022. 
A MotionMatching-based Framework for Controllable Gesture Synthesis from Speech.InSIGGRAPH ’22 Conference Proceedings . arXiv:Todo[17] Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilisticmodels. Advances in Neural Information Processing Systems 33 (2020), 6840–6851.[18] Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forr’e, and Max Welling.2021. Argmax Flows and Multinomial Diffusion: Towards Non-AutoregressiveLanguage Models. ArXiv abs/2102.05379 (2021).[19] Stefan Kopp, Bernhard Jung, Nadine Lessmann, and Ipke Wachsmuth. 2003.Max-a multimodal assistant in virtual reality construction. KI17, 4 (2003), 11.[20] Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Eje Henter, SimonAlexandersson, Iolanda Leite, and Hedvig Kjellström. 2020. Gesticulator: A frame-work for semantically-aware speech-driven gesture generation. In Proceedings ofthe 2020 International Conference on Multimodal Interaction . 242–250.[21] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. InProceedings of the ACM International Conference on Multimodal Interaction (ICMI’23). ACM.[22] Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha Srinivasa,and Yaser Sheikh. 2019. Talking With Hands 16.2M: A Large-Scale Dataset ofSynchronized Body-Finger Motion and Audio for Conversational Motion Analysisand Synthesis. In 2019 IEEE/CVF International Conference on Computer Vision(ICCV) . 763–772. https://doi.org/10.1109/ICCV.2019.00085[23] Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, and LinchaoBao. 2021. Audio2Gestures: Generating Diverse Gestures from Speech Audio withConditional Variational Autoencoders. 2021 IEEE/CVF International Conferenceon Computer Vision (ICCV) (10 2021), 11273–11282. https://doi.org/10.1109/ICCV48922.2021.01110[24] Shuhong Lu and Andrew Feng. 2022. The DeepMotion entry to the GENEAChallenge 2022. In Proceedings of the 2022 International Conference on MultimodalInteraction . 790–796.[25] Stacy Marsella, Yuyu Xu, Margaux Lhommet, Andrew Feng, Stefan Scherer, andAri Shapiro. 2013. Virtual Character Performance from Speech. In Proceedingsof the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation(Anaheim, California) (SCA ’13) . Association for Computing Machinery, NewYork, NY, USA, 25–35. https://doi.org/10.1145/2485895.2485900[26] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin,Bob McGrew, Ilya Sutskever, and Mark Chen. 2022. GLIDE: Towards Photo-realistic Image Generation and Editing with Text-Guided Diffusion Models.arXiv:2112.10741 [cs.CV][27] Shenhan Qian, Zhi Tu, Yihao Zhi, Wen Liu, and Shenghua Gao. 2021. SpeechDrives Templates: Co-Speech Gesture Synthesis with Learned Templates. 2021IEEE/CVF International Conference on Computer Vision (ICCV) (10 2021), 11057–11066. https://doi.org/10.1109/ICCV48922.2021.01089[28] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark,et al.2021. Learning transferable visual models from natural language supervision.InInternational conference on machine learning . PMLR, 8748–8763.[29] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever,et al.2019. Language models are unsupervised multitask learners. 
OpenAI blog1, 8 (2019), 9.[30] Ali Razavi, Aaron Van den Oord, and Oriol Vinyals. 2019. Generating diversehigh-fidelity images with vq-vae-2. Advances in neural information processingsystems 32 (2019).[31] Jiaming Song, Chenlin Meng, and Stefano Ermon. 2020. Denoising DiffusionImplicit Models. arXiv:2010.02502 (October 2020). https://arxiv.org/abs/2010.02502[32] Yang Song and Stefano Ermon. 2019. Generative Modeling by Estimating Gra-dients of the Data Distribution. In Advances in Neural Information ProcessingSystems . 11895–11907.[33] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, StefanoErmon, and Ben Poole. 2021. Score-Based Generative Modeling through Stochas-tic Differential Equations. In International Conference on Learning Representations .https://openreview.net/forum?id=PxTIG12RRHS[34] Zhicong Tang, Shuyang Gu, Jianmin Bao, Dong Chen, and Fang Wen. 2022.Improved vector quantized diffusion models. arXiv preprint arXiv:2205.16007(2022).[35] Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, andAmit H Bermano. 2022. Human motion diffusion model. arXiv preprintarXiv:2209.14916 (2022).[36] Aaron Van Den Oord, Oriol Vinyals, et al .2017. Neural discrete representationlearning. Advances in neural information processing systems 30 (2017).[37] Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2018. NeuralDiscrete Representation Learning. arXiv:1711.00937 [cs.LG][38] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is allyou need. Advances in neural information processing systems 30 (2017).[39] Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. 2021. Videogpt:Video generation using vq-vae and transformers. arXiv preprint arXiv:2104.10157(2021).[40] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, JaehongKim, and Geehyuk Lee. 2020. Speech Gesture Generation from the TrimodalContext of Text, Audio, and Speaker Identity. ACM Transactions on Graphics 39,6 (2020).[41] Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and GeehyukLee. 2019. Robots Learn Social Skills: End-to-End Learning of Co-Speech GestureGeneration for Humanoid Robots. In Proc. of The International Conference inRobotics and Automation (ICRA) .[42] Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2022. The GENEA Challenge 2022: Alarge evaluation of data-driven co-speech gesture generation. In Proceedings ofthe ACM International Conference on Multimodal Interaction (ICMI ’22) . ACM.[43] Fan Zhang, Naye Ji, Fuxing Gao, and Yongping Li. 2023. DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model. In MultiMedia Mod-eling: 29th International Conference, MMM 2023, Bergen, Norway, January 9–12,2023, Proceedings, Part I . Springer, 231–242.[44] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo,Lei Yang, and Ziwei Liu. 2022. MotionDiffuse: Text-Driven Human MotionGeneration with Diffusion Model. arXiv preprint arXiv:2208.15001 (2022).[45] Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, and Hao Li. 2019. On thecontinuity of rotation representations in neural networks. Proceedings of theIEEE Computer Society Conference on Computer Vision and Pattern Recognition2019-June (6 2019), 5738–5746. https://doi.org/10.1109/CVPR.2019.00589[46] Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, and Lequan Yu. 
2023.Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation. arXivpreprint arXiv:2303.09119 (2023). |
vD3_u_kbkqS | Diffusion-Based Co-Speech Gesture Generation Using Joint Textand Audio RepresentationAnna Deichlerdeichler@kth.seKTH Royal Institute of TechnologyStockholm, SwedenShivam Mehtasmehta@kth.seKTH Royal Institute of TechnologyStockholm, SwedenSimon Alexandersonsimonal@kth.seKTH Royal Institute of TechnologyStockholm, SwedenJonas Beskowbeskow@kth.seKTH Royal Institute of TechnologyStockholm, SwedenABSTRACTThis paper describes a system developed for the GENEA (Genera-tion and Evaluation of Non-verbal Behaviour for Embodied Agents)Challenge 2023. Our solution builds on an existing diffusion-basedmotion synthesis model. We propose a contrastive speech and mo-tion pretraining (CSMP) module, which learns a joint embeddingfor speech and gesture with the aim to learn a semantic couplingbetween these modalities. The output of the CSMP module is usedas a conditioning signal in the diffusion-based gesture synthesismodel in order to achieve semantically-aware co-speech gesturegeneration. Our entry achieved highest human-likeness and high-est speech appropriateness rating among the submitted entries.This indicates that our system is a promising approach to achievehuman-like co-speech gestures in agents that carry semantic mean-ing.KEYWORDSgesture generation, motion synthesis, diffusion models, contrastivepre-training, semantic gesturesACM Reference Format:Anna Deichler, Shivam Mehta, Simon Alexanderson, and Jonas Beskow. 2023.Diffusion-Based Co-Speech Gesture Generation Using Joint Text and AudioRepresentation. In INTERNATIONAL CONFERENCE ON MULTIMODAL IN-TERACTION (ICMI ’23), October 09–13, 2023, Paris, France. ACM, New York,NY, USA, 8 pages. https://doi.org/10.1145/3577190.36161171 INTRODUCTIONHuman communication is inherently multimodal involving the in-tegration of multiple verbal and non-verbal modalities to conveythe information. These modalities work in synergy, collaborating tocreate a joint representation of the message the speaker intends toconvey [ 29]. In addition to complementing verbal communication,these non-verbal gestures frequently serve as substitutes for wordsThis work is licensed under a Creative Commons Attribution International4.0 License.ICMI ’23, October 09–13, 2023, Paris, France©2023 Copyright held by the owner/author(s).ACM ISBN 979-8-4007-0055-2/23/10.https://doi.org/10.1145/3577190.3616117[9,31]. The semantic meaning contribution of gestures is multi-faceted. Beat gestures primarily emphasize the verbally expressedcontent, serving to accentuate the spoken message. On the otherhand, iconic and pointing gestures go beyond emphasizing content;they directly represent or indicate the referent being discussed.Deictic pointing gestures, often accompanying deictic words, play acrucial role in referential communication by providing vital contex-tual information for reference disambiguation, while iconic gesturesserve to visually represent or symbolize the attributes, actions, orcharacteristics associated with the referent.Co-speech gesture generation in robotics and avatars focuses ongenerating gestures that accompany and extend the verbal modal-ity. However, the generation of audio-driven motion has posed asignificant challenge. This difficulty arises from the fact that suchmotion can be accurately predicted by very strong probabilisticmodels, since gestures exhibit high individual variability, are inher-ently non-deterministic [ 2]. Recent advances in learning arbitraryprobability distributions with diffusion models has offered a wayto tackle this problem. 
These audio-driven gesture generation mod-els have proven to be efficient in reproducing the high variabilityand expressivity of human gestures, however integrating seman-tic content into gesture generation by combining audio and textconditioning is another challenge.Self-supervised pre-training methods have proven to be an ef-ficient way to learn useful representations for downstream tasks,especially in case of limited labeled data. Multi-modal pre-trainingmethods learn embedding spaces that encode useful relations ofdifferent data modalities. Contrastive Language-Image Pre-Training(CLIP) [ 32] is a contrastive multi-modal pre-training method thatlearns a joint representation of image and text data by contrastingpositive and negative text-image pair examples in the latent spaceduring training. This training approach encourage the model tocapture the underlying relationship between the two modalities.The problem of co-speech gesture generation involves multiplemodalities, with a tight coupling between motion, text and audio.This work aims at combining the expressivity of diffusion basedmotion synthesis [ 2] with the multi-modal understanding of a CLIP-like latent embedding space that models the relations betweenmotion, text and audio in co-speech gestures.ICMI ’23, October 09–13, 2023, Paris, France Anna Deichler, Shivam Mehta, Simon Alexanderson, and Jonas Beskow2 RELATED WORK2.1 Co-speech gesture generationThe primary goal of co-speech gesture generation is to synthesisenatural and contextually appropriate gestures. In the early stagesof gesture generation research, various rule-based approaches wereemployed [ 5,26,27], where the generation of gestures was triggeredby predefined rules that initiated the playback of pre-recordedgestures. In recent years, this field has been dominated by the useof data-driven deep learning based modelling methodologies [31].Early works on deep learning-based gesture synthesis treatedit as a regression problem and utilised recurrent [ 14,36] and con-volutional [ 21] neural networks to model the generation process.Treating gesture synthesis as a regression problem leads to the prob-lem of under-articulated and over-smoothened gestures becauseof averaging over all the possible outcomes for an input signal. Toaddress the challenge of under-articulated and over-smoothenedsynthesis researchers employed various probabilistic modellingtechniques such as VAEs [ 12], VQ-VAEs [ 43], Normalising Flows[1] or adversarial techniques like GANs [ 41,42]. These methodolo-gies aim to enhance the realism and expressiveness of the generatedgestures by learning a distribution over the entire utterances andsampling different realisations from it or learning powerful transfor-mations from a simple distribution, usually a Gaussian distribution,to the output motion distribution.Diffusion models [ 15,34,35] have emerged as a notable and con-temporary probabilistic generative modelling methodology. Thesemodels have shown promise in capturing complex data distribu-tions and have gained attention in various fields, including gesturegeneration [ 2,3,30,45]. Inspired by these works our system usesDenoising Diffusion Probabilistic Modelling (DDPM) [ 15] formu-lation with self-supervised representations to synthesise gesturesconditioned on the input audio.2.2 Semantic gesture generationIn order to generate contextually appropriate gestures in agents, itis crucial to take into account gesture semantics. 
Semantic gestureshave a symbolic representational quality and contribute to theoverall meaning in communication. The generation of semanticgestures is highly reliant on which input modalities are taken intoaccount in the modeling process [31].Audio driven generation can reproduce the coupling betweengesture kinematics and the intonation, stress and rhythm present inthe audio signal. These systems are good at modeling beat gestures,which can help highlight important points or add emphasis tocertain words or phrases [ 28],[1],[2]. However, in order to generaterepresentational gestures (e.g., iconic, deictic pointing), additionalinput modalities are needed. Text-based conditioning is essentialto model the relation between semantic and kinematic spaces inorder to generate iconic gestures [ 44],[22], while the generation ofdeictic pointing gestures needs referential target information [ 10].In this work we develop a novel approach to jointly model audioand text conditioning in gesture generation through a contrastiveself-supervised learning approach in order to extend the existingaudio conditioned system with semantic capabilities.2.3 Using language based pre-trainingapproaches in motion generationRecent works approaches have leveraged different pre-training ap-proaches to learn the semantic coupling between text and motionspaces. [ 46] uses a GPT-like module to generate code indices basedon text embeddings which are utilized by a VQ-VAE module inmotion generation, while [ 17] proposes MotionGPT, which per-forms language modeling on both motion and text in a unifiedmanner, treating human motion as a specific language. Previouswork has also leveraged CLIP’s multimodal understanding to gener-ate meaningful motion. [ 37] develops an auto-encoder based motiongeneration model, which learns a motion embedding space alignedwith CLIP’s latent space, which allows for the generation of ex-pressive and versatile text-based motions. [ 38] uses CLIP latents asconditioning information in diffusion based human motion genera-tion. Similarly, [ 8] conditions on CLIP latents, but combines latentspace based and diffusion based motion generation. Most similar toour work is [ 3], which learns a gesture-text joint embedding usingcontrastive learning and a CLIP based style encoding module in adiffusion based gesture synthesis model.3 METHOD3.1 Self-supervised representations of text andaudioWe employ pre-trained self-supervised representations for text andaudio for both the main agent and the interlocutor. Data2vec [ 4]which is a framework for self-supervised representation learningon data of different modalities (text, audio and images), for whichpre-trained models are available1. Data2vec leverages transformerarchitecture in a self-distillation setup to achieve contextual textembedding, predicting latent representations of the full input databased on a masked view of the input.For audio, we use the data2vec-audio-base-960h model, whichtakes one-channel 16 Khz audio as input. As output we use the lasthidden layer, which gives us a sequence of 768-dimensional embed-ding vectors at a rate of 50 Hz. The output is then converted to 30Hz using polyphase resampling ( scipy.signal.resample_poly )in order to match the frame rate of the motion data.For text, we use the data2vec-text-base model. Input to themodel is a sequence of byte-pair encoded text tokens. Just as for theaudio, we use the last hidden layer of the data2vec model to obtaina 768-dimensional vector for each input token. 
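As a concrete illustration of the audio branch described above (the text branch is analogous), the embedding extraction could look as follows; the Hugging Face checkpoint identifier and the exact polyphase factors (up 3 / down 5 to go from 50 Hz to 30 Hz) are our assumptions rather than details stated in the paper:

```python
import torch
from scipy.signal import resample_poly
from transformers import AutoFeatureExtractor, Data2VecAudioModel

# The paper names the checkpoint "data2vec-audio-base-960h"; the full hub identifier is our guess.
extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h")
model = Data2VecAudioModel.from_pretrained("facebook/data2vec-audio-base-960h").eval()

def audio_embeddings_30hz(waveform_16khz):
    """waveform_16khz: 1-D mono float array sampled at 16 kHz. Returns a (T_30hz, 768) array."""
    inputs = extractor(waveform_16khz, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0].numpy()  # last hidden layer, ~50 Hz, 768-dim
    return resample_poly(hidden, up=3, down=5, axis=0)         # polyphase resampling to 30 Hz
```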
We use the word-timed transcriptions provided in the dataset (see [23]) to obtain a start and end time for each token; we then replicate the output vector at a rate of 30 Hz for the duration of the token. The result is a text-embedding sequence that is aligned with, and of the same length as, the audio and motion data sequences.
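As an illustration of this alignment step, the sketch below repeats each token's data2vec-text vector over its transcribed word duration at 30 fps; the timing format and helper name are assumptions rather than the dataset's exact schema.

```python
import numpy as np

FPS = 30  # frame rate of the motion data

def align_text_embeddings(token_vecs, token_times, n_frames):
    """token_vecs: (n_tokens, 768) last-hidden-layer data2vec-text vectors.
    token_times: per-token (start_sec, end_sec) from the word-timed transcript.
    Returns a (n_frames, 768) sequence aligned with the audio/motion frames."""
    out = np.zeros((n_frames, token_vecs.shape[1]), dtype=np.float32)
    for vec, (start, end) in zip(token_vecs, token_times):
        a = max(0, int(round(start * FPS)))
        b = min(n_frames, int(round(end * FPS)))
        out[a:b] = vec  # replicate the token vector for the duration of the word
    return out
```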
3.2 Joint representation with Contrastive Speech and Motion Pretraining (CSMP)
Contrastive pre-training can effectively capture the semantic relationships between two modalities, but it usually requires a larger batch size and a larger dataset to learn efficient joint representations [7], which can be challenging in this case because of dataset-specific properties [23] such as the presence of an interlocutor and the skeletal nodes of the characters. In such a case, representations that already capture semantic information can be used as the inputs to the CLIP module. Therefore, we devise a variation of CLIP and call it Contrastive Speech and Motion Pretraining (CSMP).

In CSMP, we propose several modifications to the original CLIP architecture within the context of multimodal understanding, namely:
(1) We replace the vision transformer present in the original CLIP architecture with a regular transformer architecture, which effectively eliminates the patching process typically employed for 2-D image analysis. This modification is motivated by the nature of text and audio.
(2) The input to this modified transformer is derived from concatenated representations of the output of the pretrained data2vec module for text and audio, as described in Section 3.1, instead of the raw tokens used in the original CLIP.
(3) For the text encoder in CLIP, we change the input from discrete text tokens to continuous motion vectors, thus eliminating the need for an embedding layer. This alteration is intended to transfer the semantic information contained in the text and audio representations to the motion representation in the joint space of CSMP's representations.
(4) Since the original CLIP takes discrete, tokenized text as input, it had a context length of 77. For modalities that are continuous in nature, such as the output of data2vec and motion, this can be insufficient to capture longer-term dependencies. In order to overcome this and increase the encoder's field of view, we increased the context length to 500 timesteps.

The final architecture of the CSMP module is shown in Fig. 1.

Figure 1: Architecture of the Contrastive Speech and Motion Pretraining (CSMP) module. A text-and-audio encoder receives time-aligned data2vec text and audio embeddings, a motion encoder receives the input motion, and both are trained with the CLIP loss.

In order to train such an architecture with the CLIP loss, we chunked each input Xi = [x1, ..., xT] in a sliding-window manner with a window length of 500 and a hop length of 250, forming multiple splits for each utterance:
Xi = [[x1, ..., x500], [x250, ..., x750], ..., [xT−500, ..., xT]]
We hypothesise that this helped generalisation despite a fixed context size, because the positional encoding could see the data at a specific timestep xt in different relative positions during training. The source code is available on GitHub in the GestCLIP branch: https://github.com/shivammehta25/CLIP/tree/GestCLIP

3.3 DDPM for motion synthesis
Diffusion models are a recent class of generative models that have become popular due to their expressivity and flexible conditioning. They are based on the idea that complex data distributions can be learned by iteratively transforming a simple known distribution, such as a Gaussian, through a series of diffusion steps. Unlike VAEs, which incorporate latent-variable modelling, diffusion models directly model the data distribution without explicitly introducing latent variables. Diffusion models consist of a forward process and a reverse (denoising) process. The forward process defines a Markov chain of N diffusion steps that gradually adds noise to samples from the data distribution x0 ∼ q(x0). The noise steps are assumed to be fixed, zero-mean Gaussian distributions without learnable parameters, q(xn | xn−1) = N(xn; √(1 − βn) xn−1, βn I), where N denotes the multivariate Gaussian density function evaluated at xn and {βn}, n = 1, ..., N, is the noise schedule. In the reverse process, the model learns to invert the forward process so that it can construct desired data samples from noise. If βn is small enough, the reverse step p(xn−1 | xn) is also Gaussian, and a neural network is used to approximate the parameters of the distribution, pθ(xn−1 | xn) = N(xn−1; μθ(xn, n), Σθ(xn, n)).

The Denoising Diffusion Probabilistic Model (DDPM) [15] simplifies the objective of diffusion models and establishes a connection to score matching, a technique for estimating the gradients of the data distribution. These gradients can then be used to generate samples via Langevin dynamics, a stochastic process that simulates the motion of particles in a fluid. In DDPM the score-matching objective is reformulated as a noise-prediction objective, L = E_{x0, n, ε}[κn ‖ε − εθ(xn, n)‖²], where εθ is a neural network intended to predict the noise ε that was added to x0 and κn are weights.

Conditional generation in diffusion models can be achieved with classifier-guided or classifier-free models. In classifier-guided diffusion models, the gradient ∇x fφ(y | xn) of a separately trained classifier fφ(y | xn) is used to guide the diffusion process [11]. Classifier-free diffusion models instead combine conditional and unconditional diffusion to guide the generation. In the above formulation this means that a conditional network εθ(xn, n, c) with conditioning input c is trained, where the conditioning information is randomly discarded during training, so that in the reverse diffusion process conditional generation can be achieved by combining the conditioned and unconditioned models, ε̄θ(xn, n, c) = εθ(xn, n, c) + γ(εθ(xn, n, c) − εθ(xn, n)) [16]. Denoising-diffusion-based conditional generation has been applied in various domains.
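A minimal sketch of the classifier-free guidance mechanism described above: conditioning is randomly dropped during training, and the conditional and unconditional noise predictions are blended at sampling time. The εθ interface and the use of a zero vector as the "unconditional" input are illustrative assumptions.

```python
import torch

def training_eps(eps_model, x_n, n, cond, p_uncond=0.1):
    """Predict the noise with random conditioning dropout (classifier-free training)."""
    if torch.rand(()) < p_uncond:
        cond = torch.zeros_like(cond)   # discard the conditioning for this sample
    return eps_model(x_n, n, cond)

def guided_eps(eps_model, x_n, n, cond, gamma=1.0):
    """Guided prediction: eps_bar = eps_c + gamma * (eps_c - eps_u)."""
    eps_c = eps_model(x_n, n, cond)                    # conditioned prediction
    eps_u = eps_model(x_n, n, torch.zeros_like(cond))  # unconditioned prediction
    return eps_c + gamma * (eps_c - eps_u)
```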
In [33], the CLIP-embedding-based conditioning input is randomly set to zero in order to achieve high-quality image synthesis. DiffWave [20] is a denoising-diffusion-based model for waveform generation, which uses mel spectrograms and a speaker ID as conditioning information. The Listen-Denoise-Act (LDA) model [2] builds on the DiffWave model and uses mel spectrogram information for human motion synthesis. Audio-based conditional human motion synthesis tasks, such as dancing and co-speech gesture generation, have been a challenge in machine learning due to the ambiguity and high versatility required for good performance. The denoising-diffusion-based LDA model has proven to be a powerful model for generating versatile and expressive motion in the fields of dance and co-speech gesture generation. In our work we use the residual denoising network of LDA with conditioning from the CSMP module for semantically-aware co-speech gesture generation.

The LDA model follows DiffWave in parameterising the denoising network εθ, but replaces the dilated convolutions in the stacked residual blocks with a stack of Transformers [39] or Conformers [13] in order to capture and integrate information over long time scales. In our experiments we use a stack of 3 translation-invariant transformers [40] in each of the 15 residual blocks. The model learns a distribution of the form p(x1:T | a1:T), where a1:T is the acoustic conditioning, x1:T = x1:T,0 is the output of the diffusion process, and xt is a representation of the pose at time step t in the motion sequence. In our case, the mel-spectrogram-based acoustic conditioning of LDA is replaced with the joint audio- and text-based output of the CSMP module, where the outputs for the interlocutor and the main agent are concatenated into a conditioning signal of dimension ct ∈ R^1024. This is the conditioning input in the classifier-free diffusion guidance formulation. The outputs of the model are the same as in LDA: poses of skeletal joint rotations parametrised using an exponential map representation relative to a T-pose, similar to [1].

4 DATA PREPARATION
The challenge dataset is a processed version of the Talking With Hands dataset [25]. The original dataset is one of the largest conversational datasets of motion and voice, incorporating 50 hours of dyadic interactions with audio, text and motion modalities. We only used the data provided by the challenge for gesture synthesis.

4.1 Audio DC-removal and muting of cross-talk
We found that the audio data contained a number of loud transient clicking noises. On inspection, it was found that they were due to a significant DC-offset, in combination with the fact that certain sections of the audio signal had been zeroed out as part of an anonymization process. This was easily rectified by subtracting the mean from all non-zeroed-out portions.

Additionally, the data contained a non-negligible amount of cross-talk between the two speakers in the recording. We used the time stamps from the time-aligned text transcriptions to mute all audio falling outside of the intervals marked as speech in the transcription for each speaker. We used a 200 ms ramp function for the muting to avoid introducing transients.
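The audio cleanup in Section 4.1 could be sketched as follows, assuming a float waveform and per-speaker speech intervals in seconds; the linear shape of the 200 ms ramp and the function names are our own illustration.

```python
import numpy as np

def clean_channel(audio, speech_intervals, sr, ramp_ms=200):
    """Remove the DC offset from non-zeroed samples and mute cross-talk outside
    the transcribed speech intervals, using short ramps to avoid new transients."""
    audio = audio.astype(np.float64).copy()
    nonzero = audio != 0.0
    if nonzero.any():
        audio[nonzero] -= audio[nonzero].mean()      # DC-offset removal

    gain = np.zeros_like(audio)
    for start_s, end_s in speech_intervals:          # seconds, from the transcript
        gain[int(start_s * sr):int(end_s * sr)] = 1.0
    # Smoothing the 0/1 mask with a moving average yields ~200 ms linear ramps.
    ramp_len = max(1, int(sr * ramp_ms / 1000))
    gain = np.convolve(gain, np.ones(ramp_len) / ramp_len, mode="same")
    return audio * gain
```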
4.2 Motion capture data cleaning
We also noticed that some of the motion-capture data contained errors, such as joints suddenly popping to unnatural poses. These errors were predominantly confined to the wrist joints, but also occurred at the hips. As such problems have an impact on model training, and we even found our model reproducing them in synthesis, we performed some data cleanup. We transformed the data to joint positions and detected discontinuities in the wrist speeds using a Hampel filter. This was followed by a manual check of the affected files. In the end, 17 files were removed from the training set.

5 SYSTEM OVERVIEW
A schematic view of the final system can be seen in Figure 2. The system was trained on an NVIDIA GeForce RTX 3090 for 387.4k steps and achieved a loss of 0.013 on the training set and 0.019 on the validation set. No post-processing was applied to the generated output motions.

6 EVALUATION
The evaluation of the generated motions was carried out by the GENEA Challenge organisers; details about the evaluation interface and experiment setups can be found in the evaluation paper [24]. The generated co-speech gestures were evaluated in three separate perceptual studies: human-likeness, appropriateness to the agent's speech, and appropriateness to the interlocutor's motion and speech. The evaluation included two baseline conditions and the natural motion taken from the motion-capture recordings. The monadic baseline ('BM') was generated with [6], which uses information from the main agent for gesture generation, while the dyadic baseline ('BD') is an adapted version of the former which also includes information from the interlocutor in the conversation. The study participants were recruited through a crowd-sourcing platform from English-speaking countries, and each study incorporated attention checks. Our system, labeled 'SG', achieved top performance among the submitted generated motions in the human-likeness and speech-appropriateness studies. However, it ranked among the lowest in terms of interlocutor appropriateness.

6.0.1 Human-likeness evaluation. The aim of this study was to evaluate whether the generated motion of the virtual character looks like the motion of a real human. No audio was used, in order to disentangle the human-likeness evaluation from the speech appropriateness. The evaluation was based on the HEMVIP methodology [19], where multiple different motion samples are presented in parallel and the participant is asked to rate each sample. Participants could give their ratings on a scale from 0 (worst) to 100 (best). Results for the evaluation are shown in Figure 3. Our system, denoted 'SG', achieved the best performance among the entries, with a mean rating of 65.6 ± 1.4. Figure 4 also shows that this result is significantly better than all of the entries except 'SF'. Interestingly, the human-likeness score is very close to the mean rating of the natural condition, which was 68.4 ± 1.4, as seen in Table 1. This indicates that our system can generate co-speech gestures that resemble the motion of real humans.

6.0.2 Appropriateness to speech. The aim of this study was to evaluate whether the motion of the virtual character is appropriate for the given speech, while controlling for the overall human-likeness of the motion. The participants were presented with a pair of matched and mismatched videos from the same condition in order to disentangle this study from the motion quality evaluation.
Five response options were given for indicating preference between the two videos, and the responses were converted to integer values in the range [−2, 2]. Our system achieved a MAS score of 0.39 ± 0.07 at the level of α = 0.05, and the matched motion was preferred over the mismatched one in 61.8% of the evaluations. With these results it ranked highest amongst the generated motions. Figure 5 visualizes the significant differences between conditions and shows that our system, denoted by 'SG', was significantly more appropriate to speech than all of the entries of generated motions. A comparison to the other entries can be found in Table 1.

6.0.3 Appropriateness to interlocutor. The aim of this study was to evaluate whether the motion of the virtual character is appropriate for the given interlocutor behaviour (speech and motion). In order to evaluate the mismatched condition, synthetic interactions were created where the main agent was the same, but the interlocutor behaviour was replaced with one from another interaction. Our system achieved a MAS score of −0.09 ± 0.08 at the level of α = 0.05, and the matched motion was preferred over the mismatched one in 46.7% of the evaluations. With these results it ranked among the lowest. Figure 6 visualizes the significant differences between conditions and shows that our system, denoted by 'SG', was significantly less appropriate to the interlocutor than half of the entries of generated motions, with no significant difference to the other half. A comparison to the other entries can be found in Table 1.

The MP4-format video stimuli used in the user studies can be accessed through the following link: https://zenodo.org/record/8211449.

Figure 2: Architecture of the motion synthesis module. Time-aligned input text and input speech for the main agent and the interlocutor are encoded with data2vec and the CSMP text-and-audio encoder, and the resulting conditioning drives the residual denoising network that outputs the synthesised motion.

Figure 3: Box plot visualising the ratings distribution in the human-likeness study. Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are mean ratings (also with a 0.05 confidence interval). Box edges are at 25 and 75 percentiles, while whiskers cover 95% of all ratings for each condition. Conditions are ordered by descending sample median rating.

Figure 4: Significance of pairwise differences between conditions. White means the condition listed on the y-axis achieved an MAS significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate. Conditions use the same order as in Figure 3.
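For concreteness, the sketch below shows one way a mean appropriateness score (MAS) and the "preferred matched" percentage could be computed from responses coded in [−2, 2]; the official statistical analysis is defined by the challenge organisers [24], so this is only an illustrative reconstruction.

```python
import numpy as np

def mas_and_pref_matched(responses):
    """responses: integers in [-2, 2]; positive = matched stimulus preferred,
    negative = mismatched preferred, 0 = no preference."""
    r = np.asarray(responses, dtype=float)
    mas = r.mean()
    ci95 = 1.96 * r.std(ddof=1) / np.sqrt(len(r))           # normal-approx. CI
    pref = ((r > 0).sum() + 0.5 * (r == 0).sum()) / len(r)  # ties split equally
    return mas, ci95, pref

# A MAS near 0 with "pref. matched" near 50% indicates chance-level appropriateness.
```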
As before, our system is denoted as 'SG'.

Figure 5: Appropriateness for agent speech.

Figure 6: Appropriateness for the interlocutor.

Figure 7: Significant differences between conditions in the two appropriateness studies. White means the condition listed on the y-axis achieved an MAS significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate.

Table 1: Summary of results for the subjective evaluation studies, with confidence intervals for the mean appropriateness score (MAS) at the level α = 0.05. "Pref. matched" identifies how often test-takers preferred matched motion in terms of appropriateness, after splitting ties equally.

Human-likeness | Speech appropriateness | Interlocutor appropriateness
Condition  Median  Mean | Condition  MAS  Pref. M. | Condition  MAS  Pref. M.
NA  71 ∈ [70, 71]  68.4 ± 1.0 | NA  0.81 ± 0.06  73.6% | NA  0.63 ± 0.08  67.9%
SG  69 ∈ [67, 70]  65.6 ± 1.4 | SG  0.39 ± 0.07  61.8% | SA  0.09 ± 0.06  53.5%
SF  65 ∈ [64, 67]  63.6 ± 1.3 | SJ  0.27 ± 0.06  58.4% | BD  0.07 ± 0.06  53.0%
SJ  51 ∈ [50, 53]  51.8 ± 1.3 | BM  0.20 ± 0.05  56.6% | SB  0.07 ± 0.08  51.8%
SL  51 ∈ [50, 51]  50.6 ± 1.3 | SF  0.20 ± 0.06  55.8% | SL  0.07 ± 0.06  53.4%
SE  50 ∈ [49, 51]  50.9 ± 1.3 | SK  0.18 ± 0.06  55.6% | SE  0.05 ± 0.07  51.8%
SH  46 ∈ [44, 49]  45.1 ± 1.5 | SI  0.16 ± 0.06  55.5% | SF  0.04 ± 0.06  50.9%
BD  46 ∈ [43, 47]  45.3 ± 1.4 | SE  0.16 ± 0.05  54.9% | SI  0.04 ± 0.08  50.9%
SD  45 ∈ [43, 47]  44.7 ± 1.3 | BD  0.14 ± 0.06  54.8% | SD  0.02 ± 0.07  52.2%
BM  43 ∈ [42, 45]  42.9 ± 1.3 | SD  0.14 ± 0.06  55.0% | BM  -0.01 ± 0.06  49.9%
SI  40 ∈ [39, 43]  41.4 ± 1.4 | SB  0.13 ± 0.06  55.0% | SJ  -0.03 ± 0.05  49.1%
SK  37 ∈ [35, 40]  40.2 ± 1.5 | SA  0.11 ± 0.06  53.6% | SC  -0.03 ± 0.05  49.1%
SA  30 ∈ [29, 31]  32.0 ± 1.3 | SH  0.09 ± 0.07  52.9% | SK  -0.06 ± 0.05  47.4%
SB  24 ∈ [23, 27]  27.4 ± 1.3 | SL  0.05 ± 0.05  51.7% | SG  -0.09 ± 0.08  46.7%
SC  9 ∈ [9, 9]  11.6 ± 0.9 | SC  -0.02 ± 0.04  49.1% | SH  -0.21 ± 0.05  44.0%

7 DISCUSSION
The subjective evaluation results have shown that our system is capable of generating co-speech gestures that are human-like and speech appropriate. The high performance on speech appropriateness shows that the current system is a promising approach to achieve semantically-aware co-speech gesture generation in virtual agents.

Our system was top-ranked in the human-likeness and appropriateness-for-agent-speech evaluations, while receiving one of the lowest scores in the appropriateness-to-interlocutor evaluation. This might seem counterintuitive, given that we did train the system to listen to the interlocutor. We believe that there are multiple factors at play here and outline them below. First, our system was trained to take in speech information of the interlocutor as input (in the form of CSMP embeddings), but we chose not to include interlocutor motion as one of the inputs, due to time constraints. Feeding interlocutor motion as input might have rendered a system capable of mirroring/mimicry, similar to [18], which could have resulted in a higher rating.
Secondly, we would like to discuss another possible explanation, which stems from the nature of the data and how the evaluation was carried out. In the appropriateness evaluations, each system was compared against itself, and the objective was to see to what degree raters could distinguish motion that matched the context from mismatched motion. As mentioned in Section 4.1, there was a certain amount of cross-talk present in the data, i.e. the interlocutor audio was present in the main agent's channel and vice versa. We took extra measures to eliminate such cross-talk, because not doing so would have resulted in the agent performing co-speech gestures also while listening, based on the cross-talk from the interlocutor. Inspecting the evaluation stimuli based on the output from the different systems in the challenge, it is clear that this seems to happen in certain systems. We can further speculate that such an agent might in fact score favourably in the match/mismatch paradigm, because the gestures would indeed be interlocutor aware. Future work on improving the interlocutor appropriateness could involve conditioning on interlocutor motion, as mentioned above, or training a separate model for listening behavior.

Additional evaluations of the semantic gesture generation capabilities of the model could be of interest for future work. In theory, our model is capable of capturing the semantic relations between speech and gesture spaces through the CSMP model. However, the current subjective evaluation is somewhat limited in measuring the semantic gesture generation capabilities of the model, as these are difficult to disentangle from other aspects, such as speech-gesture synchrony. Objective evaluation metrics for semantic appropriateness could be helpful in quantifying and improving our system in this regard.

8 CONCLUSIONS
In this paper we described our entry to the GENEA Challenge 2023. We presented a system that builds on an existing diffusion-based motion synthesis model and proposed a conditioning signal that utilizes audio, text and motion data. For this we proposed a CLIP-like contrastive pre-training module, Contrastive Speech and Motion Pretraining (CSMP), in order to capture the underlying relations between speech and motion. Our system achieved top performance in human-likeness and speech appropriateness amongst the submitted entries, which shows that our system is a promising approach to generate human-like co-speech gestures in agents. Our system ranked relatively low in interlocutor appropriateness, which is a focus for improvement in future work. Human-like, semantic and interlocutor-appropriate co-speech gesture generation in virtual agents is still an open problem. Our system's high performance in the subjective evaluations is encouraging and indicates that our submitted model is a promising way to achieve these goals.

ACKNOWLEDGMENTS
This work was partially supported by the Advanced Adaptive Intelligent Agents project (Digital Futures), the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, and by grant no. 20023495 (Development of behavior-oriented HRI AI technology for long-term interaction between service robots and users) funded by the Korean Ministry of Trade, Industry and Energy (MOTIE).

REFERENCES
[1] Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow. 2020.
Style-Controllable Speech-Driven Gesture Synthesis Using NormalisingFlows. Computer Graphics Forum 39, 2 (2020), 487–496. https://doi.org/10.1111/cgf.13946 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.13946[2]Simon Alexanderson, Rajmund Nagy, Jonas Beskow, and Gustav Eje Henter. 2023.Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models.ACM Trans. Graph. 42, 4 (2023), 1–20. https://doi.org/10.1145/3592458[3]Tenglong Ao, Zeyi Zhang, and Libin Liu. [n. d.]. GestureDiffuCLIP: GestureDiffusion Model with CLIP Latents. ACM Trans. Graph. ([n. d.]), 18 pages. https://doi.org/10.1145/3592097[4]Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and MichaelAuli. 2022. Data2vec: A general framework for self-supervised learning in speech,vision and language. In International Conference on Machine Learning . PMLR,1298–1312.[5]Justine Cassell, Hannes Högni Vilhjálmsson, and Timothy Bickmore. 2001. BEAT:The Behavior Expression Animation Toolkit. In Proceedings of the 28th AnnualConference on Computer Graphics and Interactive Techniques (SIGGRAPH ’01) .Association for Computing Machinery, New York, NY, USA, 477–486. https://doi.org/10.1145/383259.383315[6]Che-Jui Chang, Sen Zhang, and Mubbasir Kapadia. 2022. The IVI Lab Entry tothe GENEA Challenge 2022 – A Tacotron2 Based Method for Co-Speech GestureGeneration With Locality-Constraint Attention Mechanism. In Proceedings ofthe 2022 International Conference on Multimodal Interaction (Bengaluru, India)(ICMI ’22) . Association for Computing Machinery, New York, NY, USA, 784–789.https://doi.org/10.1145/3536221.3558060[7]Changyou Chen, Jianyi Zhang, Yi Xu, Liqun Chen, Jiali Duan, Yiran Chen,Son Dinh Tran, Belinda Zeng, and Trishul Chilimbi. 2022. Why do We NeedLarge Batchsizes in Contrastive Learning? A Gradient-Bias Perspective. In Ad-vances in Neural Information Processing Systems , Alice H. Oh, Alekh Agarwal,Danielle Belgrave, and Kyunghyun Cho (Eds.). https://openreview.net/forum?id=T1dhAPdS--[8]Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, and GangYu. 2023. Executing your Commands via Motion Diffusion in Latent Space. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition .18000–18010.[9]Robert O. Davis. 2018. The impact of pedagogical agent gesturing in multimedialearning environments: A meta-analysis. Ed. Res. Rev.-Neth. 24 (2018), 193–209.[10] Anna Deichler, Siyang Wang, Simon Alexanderson, and Jonas Beskow. 2023.Learning to generate pointing gestures in situated embodied conversationalagents. Frontiers in Robotics and AI 10 (2023), 1110534.[11] Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans onimage synthesis. Advances in neural information processing systems 34 (2021),8780–8794.[12] Saeed Ghorbani, Ylva Ferstl, Daniel Holden, Nikolaus F. Troje, and Marc-AndréCarbonneau. 2023. ZeroEGGS: Zero-shot Example-based Gesture Generationfrom Speech. Computer Graphics Forum 42, 1 (2023), 206–216. https://doi.org/10.1111/cgf.14734 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14734[13] Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, JiahuiYu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang.2020. Conformer: Convolution-augmented Transformer for Speech Recognition.arXiv:2005.08100 [eess.AS][14] Dai Hasegawa, Naoshi Kaneko, Shinichi Shirakawa, Hiroshi Sakuta, and KazuhikoSumi. 2018. Evaluation of Speech-to-Gesture Generation Using Bi-DirectionalLSTM Network. 
In Proceedings of the 18th International Conference on IntelligentVirtual Agents (Sydney, NSW, Australia) (IVA ’18) . Association for ComputingMachinery, New York, NY, USA, 79–86. https://doi.org/10.1145/3267851.3267878[15] Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilisticmodels. Proc. NeurIPS (2020), 6840–6851.[16] Jonathan Ho and Tim Salimans. 2022. Classifier-free diffusion guidance. arXivpreprint arXiv:2207.12598 (2022).[17] Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. 2023. Mo-tionGPT: Human Motion as a Foreign Language. arXiv preprint arXiv:2306.14795(2023).[18] Patrik Jonell, Taras Kucherenko, Gustav Eje Henter, and Jonas Beskow. 2020. Let’sface it: Probabilistic multi-modal interlocutor-aware generation of facial gesturesin dyadic settings. In Proceedings of the 20th ACM International Conference onIntelligent Virtual Agents . 1–8.[19] Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, and Gustav EjeHenter. 2021. HEMVIP: Human evaluation of multiple videos in parallel. InProceedings of the 2021 International Conference on Multimodal Interaction . 707–711.[20] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. 2020.Diffwave: A versatile diffusion model for audio synthesis. arXiv preprintarXiv:2009.09761 (2020).[21] Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, and HedvigKjellström. 2019. Analyzing Input and Output Representations for Speech-DrivenICMI ’23, October 09–13, 2023, Paris, France Anna Deichler, Shivam Mehta, Simon Alexanderson, and Jonas BeskowGesture Generation. In Proceedings of the 19th ACM International Conference onIntelligent Virtual Agents (Paris, France) (IVA ’19) . Association for ComputingMachinery, New York, NY, USA, 97–104. https://doi.org/10.1145/3308532.3329472[22] Taras Kucherenko, Patrik Jonell, Sanne Van Waveren, Gustav Eje Henter, SimonAlexandersson, Iolanda Leite, and Hedvig Kjellström. 2020. Gesticulator: A frame-work for semantically-aware speech-driven gesture generation. In Proceedings ofthe 2020 International Conference on Multimodal Interaction . 242–250.[23] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. InProceedings of the ACM International Conference on Multimodal Interaction (ICMI’23). ACM.[24] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. InProceedings of the ACM International Conference on Multimodal Interaction (ICMI’23). ACM.[25] Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha Srinivasa,and Yaser Sheikh. 2019. Talking With Hands 16.2M: A Large-Scale Dataset ofSynchronized Body-Finger Motion and Audio for Conversational Motion Analysisand Synthesis. In 2019 IEEE/CVF International Conference on Computer Vision(ICCV) . 763–772. https://doi.org/10.1109/ICCV.2019.00085[26] Jina Lee and Stacy Marsella. 2006. Nonverbal Behavior Generator for EmbodiedConversational Agents. In Intelligent Virtual Agents , Jonathan Gratch, MichaelYoung, Ruth Aylett, Daniel Ballin, and Patrick Olivier (Eds.). Springer BerlinHeidelberg, Berlin, Heidelberg, 243–255.[27] Margot Lhommet, Yuyu Xu, and Stacy Marsella. 2015. 
Cerebella: AutomaticGeneration of Nonverbal Behavior for Virtual Humans. Proceedings of the AAAIConference on Artificial Intelligence 29, 1 (Mar. 2015). https://doi.org/10.1609/aaai.v29i1.9778[28] Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, and LinchaoBao. 2021. Audio2gestures: Generating diverse gestures from speech audio withconditional variational autoencoders. In Proceedings of the IEEE/CVF InternationalConference on Computer Vision . 11293–11302.[29] David McNeill. 2008. Gesture and Thought . University of Chicago Press.[30] Shivam Mehta, Siyang Wang, Simon Alexanderson, Jonas Beskow, Éva Székely,and Gustav Eje Henter. 2023. Diff-TTSG: Denoising probabilistic integratedspeech and gesture synthesis. arXiv:2306.09417 [eess.AS][31] Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter,and Michael Neff. 2023. A Comprehensive Review of Data-Driven Co-SpeechGesture Generation. Comput. Graph. Forum (2023).[32] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark,et al.2021. Learning transferable visual models from natural language supervision.InInternational conference on machine learning . PMLR, 8748–8763.[33] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.2022. Hierarchical text-conditional image generation with clip latents. arXivpreprint arXiv:2204.06125 (2022).[34] Yang Song and Stefano Ermon. 2019. Generative modeling by estimating gradientsof the data distribution. Proc. NeurIPS (2019).[35] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Ste-fano Ermon, and Ben Poole. 2021. Score-Based Generative Modeling throughStochastic Differential Equations. In Proc. ICLR .[36] Kenta Takeuchi, Dai Hasegawa, Shinichi Shirakawa, Naoshi Kaneko, HiroshiSakuta, and Kazuhiko Sumi. 2017. Speech-to-Gesture Generation: A Challengein Deep Learning Approach with Bi-Directional LSTM. In Proceedings of the5th International Conference on Human Agent Interaction (Bielefeld, Germany)(HAI ’17) . Association for Computing Machinery, New York, NY, USA, 365–369.https://doi.org/10.1145/3125739.3132594[37] Guy Tevet, Brian Gordon, Amir Hertz, Amit H Bermano, and Daniel Cohen-Or.2022. Motionclip: Exposing human motion generation to clip space. In ComputerVision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022,Proceedings, Part XXII . Springer, 358–374.[38] Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, andAmit H Bermano. 2022. Human motion diffusion model. arXiv preprintarXiv:2209.14916 (2022).[39] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is allyou need. Advances in neural information processing systems 30 (2017).[40] Ulme Wennberg and Gustav Eje Henter. 2021. The case for translation-invariant self-attention in transformer-based language models. arXiv preprintarXiv:2106.01950 (2021).[41] Bowen Wu, Chaoran Liu, Carlos Toshinori Ishi, and Hiroshi Ishiguro. 2021.Modeling the Conditional Distribution of Co-Speech Upper Body Gesture JointlyUsing Conditional-GAN and Unrolled-GAN. Electronics 10, 3 (2021). https://doi.org/10.3390/electronics10030228[42] Bowen Wu, Chaoran Liu, Carlos T. Ishi, and Hiroshi Ishiguro. 2021. ProbabilisticHuman-like Gesture Synthesis from Speech Using GRU-Based WGAN. 
In Com-panion Publication of the 2021 International Conference on Multimodal Interaction(Montreal, QC, Canada) (ICMI ’21 Companion) . Association for Computing Ma-chinery, New York, NY, USA, 194–201. https://doi.org/10.1145/3461615.3485407[43] Payam Jome Yazdian, Mo Chen, and Angelica Lim. 2022. Gesture2Vec: ClusteringGestures using Representation Learning Methods for Co-speech Gesture Genera-tion. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) . 3100–3107. https://doi.org/10.1109/IROS47612.2022.9981117[44] Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and GeehyukLee. 2019. Robots learn social skills: End-to-end learning of co-speech gesturegeneration for humanoid robots. In 2019 International Conference on Robotics andAutomation (ICRA) . IEEE, 4303–4309.[45] Fan Zhang, Naye Ji, Fuxing Gao, and Yongping Li. 2023. DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model. In MultiMedia Mod-eling , Duc-Tien Dang-Nguyen, Cathal Gurrin, Martha Larson, Alan F. Smeaton,Stevan Rudinac, Minh-Son Dao, Christoph Trattner, and Phoebe Chen (Eds.).Springer International Publishing, Cham, 231–242.[46] Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Shaoli Huang, Yong Zhang,Hongwei Zhao, Hongtao Lu, and Xi Shen. 2023. T2M-GPT: Generating HumanMotion from Textual Descriptions with Discrete Representations. arXiv preprintarXiv:2301.06052 (2023). |
S9Efb3MoiZ | Gesture Generation with Diffusion Models Aided by SpeechActivity InformationRodolfo L. Tonoli∗†r105652@dac.unicamp.brDepartment of Computer Engineeringand Automation, School of Electricaland Computer Engineering,University of Campinas (UNICAMP)Campinas, SP, BrazilLeonardo B. de M. M. Marques∗Lucas H. Uedalmenezes@cpqd.com.brlhueda@cpqd.com.brCQPDCampinas, SP, BrazilPaula D. P. Costa†paulad@unicamp.comDepartment of Computer Engineeringand Automation, School of Electricaland Computer Engineering,University of Campinas (UNICAMP)Campinas, SP, BrazilABSTRACTThis paper describes a gesture generation model based on state-of-the-art diffusion models. Novel adaptations were introduced toimprove motion appropriateness relative to speech and human-likeness. Specifically, the main focus was to enhance gesture re-sponsiveness to speech audio. We explored using a pre-trainedVoice Activity Detector (VAD) to obtain more meaningful audiorepresentations. The proposed model was submitted to the GE-NEA Challenge 2023. Perceptual experiments compared our model,labeled SH, with other submissions to the challenge. The resultsindicated that our model achieved competitive levels of human-likeness. While appropriateness to the agent’s speech score waslower than most entries, there were no statistically significant dif-ferences from most models at the confidence level.CCS CONCEPTS•Computing methodologies →Animation ;Intelligent agents ;Machine learning.KEYWORDSGesture generation, co-speech gestures, diffusion modelsACM Reference Format:Rodolfo L. Tonoli, Leonardo B. de M. M. Marques, Lucas H. Ueda, and Paula D.P. Costa. 2023. Gesture Generation with Diffusion Models Aided by SpeechActivity Information. In INTERNATIONAL CONFERENCE ON MULTIMODALINTERACTION (ICMI ’23 Companion), October 9–13, 2023, Paris, France. ACM,New York, NY, USA, 7 pages. https://doi.org/10.1145/3610661.36165541 INTRODUCTIONHuman communication is composed of verbal and nonverbal be-haviours. Co-speech gestures are one of these behaviours. They are∗Both authors contributed equally to this research.†Also with Artificial Intelligence Lab., Recod.ai, Institute of Computing, University ofCampinas, SP, Brazil..Permission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than theauthor(s) must be honored. Abstracting with credit is permitted. To copy otherwise, orrepublish, to post on servers or to redistribute to lists, requires prior specific permissionand/or a fee. Request permissions from permissions@acm.org.ICMI ’23 Companion, October 9–13, 2023, Paris, France©2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.ACM ISBN 979-8-4007-0321-8/23/10. . . $15.00https://doi.org/10.1145/3610661.3616554visible actions of any body part produced while speaking and mayserve different purposes, such as to provide emphasis or to depictsome physical property [ 30]. Being such a key part of human com-munication, gestures are employed in embodied agents to simulatereal interactions and create believable characters [ 29]. Otherwise,these agents may be perceived as lifeless or dull.Recent research focused on automatic gesture generation (orsynthesis) through deep learning. 
Such systems are able to animate embodied agents much faster and with far less manual effort than traditional techniques such as hand-crafted animation or motion capture. Additionally, these traditional techniques may not be suited to applications whose speech content is unknown beforehand, such as an avatar being controlled by a human or an embodied agent powered by a language model.

Most research on gesture generation takes a cross-modal mapping approach to this problem, similar to a translation between different behaviour modalities [4]. Also, gestures are correlated with prosody and may be associated with semantics [21]. Thus, most systems use speech audio, speech text, or both to guide gesture generation [23]. However, synthetic data still struggles to appear human-like and appropriate to speech when compared to real human data [33]. More challenging scenarios could widen the gap between synthetic and real data. For example, in dyadic interactions, people are expected to take turns being the active speaker for brief or long moments. Most research has not addressed such situations. We propose a monadic gesture generation model that considers voice activity for better alignment and responsiveness of gestures given speech audio. The model is based on a composition of DiffuseStyleGesture [32], a speech-driven diffusion model, and the Motion Diffusion Model (MDM) [26], which is text-driven. The main contributions of this paper to the aforementioned models are:
• the integration of voice activity information to improve turn-taking and speech audio synchrony while using only monadic inputs;
• the employment of aligned speech text as input through a pre-trained CLIP model, thus supporting the generation of gestures semantically related to speech;
• the use of speech audio representations suited for content-related tasks from a pre-trained WavLM model.
Our code can be accessed via https://github.com/AI-Unicamp/ggvad-genea2023.

This article is structured as follows: Section 2 presents related work on gesture generation and diffusion; the data processing is detailed in Section 3; Section 4 describes the proposed model, and qualitative evaluations of our model are presented in Section 5; the results of the proposed model compared to other entries to the GENEA Challenge 2023 are detailed in Section 6; and Section 7 presents the conclusion and final remarks.

2 BACKGROUND AND PRIOR WORK
Generative models enable capturing the one-to-many nature of gestures. Studies using VAEs [9], GANs [8], and Normalizing Flows [10] show that such models surpass deterministic ones. However, these approaches still suffer from generalized problems such as mean-pose convergence and training instability. Recently, diffusion models arose as a promising new class of generative models, achieving state-of-the-art results validated by perceptual evaluations across a wide range of multimodal tasks, without the same pitfalls as the generative models mentioned before. Additionally, these models were shown to be capable of handling data with special structures, allowing efficient sampling and providing improved likelihood estimation [31].

Denoising Diffusion Probabilistic Models (DDPMs) [12] are a type of generative model that synthesizes new samples from an underlying data distribution by learning how to reconstruct information.
During the training process, the model takes one noisy datapoint ( xt), obtained by applying tGaussian noise addition steps tothe original data ( x), with 0<t≤T, asTis the size of the completediffusion noise-adding chain, and is set to equivalently predict eithera one-step denoised sample ( xt−1), a fully reconstructed data point(x0), or the noise contained ( ε). On inference, the process is startedfrom a pure Gaussian noise distribution and the reconstruction isperformed iteratively Ttimes, generating a new sample [12].Diffusion models exhibited state-of-the-art performance in sev-eral different tasks. On image synthesis, diffusion models achievedsuperior performance to the at the time GAN-based state-of-the-art synthesis [ 7], and were also proven to be able to generate andedit hyper-realistic images [ 22,25]. In the audio domain, diffusionmodels have been successfully exploited for audio generation [ 15]and text-to-audio [ 19] tasks, obtaining higher performance whencompared to other current staple models. Recently, diffusion modelshave also been explored on the task of video generation, whichwere demonstrated to synthesize high-fidelity videos with a highdegree of controllability and world knowledge [11].In the context of human motion generation, text-based modelsaim to control the movements via natural language semantically.The MotionDiffuse model [ 35] is the first model to exploit DDPMsfor this task, combining these models with a cross-modal Trans-former based architecture. In another approach, denominated Mo-tion Diffusion Model (MDM) [ 26], textual representations extractedfrom a pre-trained CLIP [ 24] are combined with a Transformermodel in a classifier-free guidance diffusion training process [ 13].Other works tackle the dance generation task, which intends togenerate dances given music as audio input. The EDGE [ 27] methodpairs a diffusion model with Jukebox, a generative model for music,whereas the Listen, Denoise and Action! [ 1] model adapts Dif-fWave [ 15] to generate poses and synthesize dances in variousstyles.More recently, diffusion models have also been applied to thegesture generation task. DiffMotion [ 34] is the first approach thatapplies DDPMs to generate gestures. It leverages an autoregressivetemporal encoder based on an LSTM that processes context repre-sented by spectral audio features and previous poses to condition adiffusion process, generating each pose individually.The DiffGesture [ 37] model uses a convolutional audio encoderto extract representations directly from the raw audio. A Trans-former model then uses these representations that undergoes animplicit classifier-free guidance diffusion training.The GestureDiffuCLIP [ 2] model introduces a multimodal (text,motion or video) prompt-conditioned style-controlled gesture gen-eration via mode-specific pre-trained CLIP encoders. Also, they usea contrastive learning strategy to learn semantic correspondencesbetween textual transcripts of the input speech and gestures, al-lowing for the generation of semantically-aware gestures. Thesecontributions, along with a denoiser network based on Transform-ers, attention, and AdaIN layers [ 14] to incorporate style guidance,compose a latent diffusion training process [25].Finally, the DiffuseStyleGesture [ 32] model combines layers ofcross-local and global attention to better capture the localized as-pects of gestures. 
With representations extracted from the self-supervised WavLM model [6], the authors perform a diffusion training process and are able to generate and control gestures based on a style label.

Despite the increasing interest in the field, the motions synthesized by most models are still far from indistinguishable from real human motion [33]. Moreover, research often concentrates on monadic scenarios in which only one participant actively communicates. Consequently, crucial behaviours of real-life interactions, such as listening, reciprocal expression, and interruptions, are disregarded during development and evaluation.

3 DATA AND DATA PROCESSING
The dataset used by the 2023 GENEA Challenge is an adaptation of the Talking With Hands 16.2M (TWH) data [18]. Pre-processing, data augmentation, and selection are described in the challenge's main paper [17]. The available dataset presents a dyadic scenario, i.e., it is composed of data from two people having a conversation, referred to as the main agent and the interlocutor. Entries to the challenge should only generate movements for the main agent, and using the interlocutor's data was optional. Available data includes motion, speech audio, speech text (audio transcripts with timestamps), and speaker labels. We only used data from the main agent; thus, our model depends on monadic information alone despite the dyadic scenario. Speaker labels were also ignored.

The dataset motions are BVH files with movements composed of 30 poses per second represented by Euler angles. We extracted each pose and composed a feature vector g = [ρp, ρ̇p, ρr, ρ̇r], where ρp ∈ R^{3j} and ρ̇p ∈ R^{3j} are the global 3D joint positions and positional velocities, ρr ∈ R^{6j} and ρ̇r ∈ R^{3j} are the local 6D joint rotations [36] and the local 3D joint rotational velocities, and j is the number of joints. The 30 frames-per-second rate of the original data and all 83 joints of the skeleton were preserved; thus g ∈ R^{1245} for each pose. Each dimension of the motion data is normalized to zero mean and unit standard deviation over the challenge training set. Audio files were resampled from 44.1 kHz to 16 kHz.
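A sketch of the per-frame pose feature construction described above, assuming arrays of global joint positions, local 6D rotations, and the BVH Euler angles are already extracted; the finite-difference velocities are a plausible stand-in, since the paper does not state how the derivatives were computed.

```python
import numpy as np

FPS = 30
N_JOINTS = 83

def pose_features(positions, rot6d, euler_deg):
    """positions: (T, 83, 3) global joint positions.
    rot6d: (T, 83, 6) local 6D joint rotations.
    euler_deg: (T, 83, 3) local Euler angles from the BVH file.
    Returns g: (T, 1245) = [pos (3j), pos. vel. (3j), rot 6D (6j), rot. vel. (3j)]."""
    T = positions.shape[0]
    pos_vel = np.gradient(positions, 1.0 / FPS, axis=0)
    rot_vel = np.gradient(np.unwrap(np.deg2rad(euler_deg), axis=0), 1.0 / FPS, axis=0)
    g = np.concatenate([positions.reshape(T, -1), pos_vel.reshape(T, -1),
                        rot6d.reshape(T, -1), rot_vel.reshape(T, -1)], axis=1)
    assert g.shape[1] == 15 * N_JOINTS   # 3j + 3j + 6j + 3j = 1245
    return g

def normalize(g, mean, std):
    """Per-dimension zero mean / unit variance, statistics from the training set."""
    return (g - mean) / (std + 1e-8)
```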
4 METHOD
Our approach consists of a combination of the MDM [26] and DiffuseStyleGesture [32] models, with modifications aimed at improving the responsiveness of gestures to the speech audio. The architecture is shown in Figure 1. Our model generates sequences of 120 poses simultaneously, corresponding to 4 seconds. We consider the inputs to be divided into global and fine-grained information. The first corresponds to information relevant to the 4-second sequence as a whole, which includes the words spoken (text), seed poses, and the timestep embedding. On the other hand, fine-grained information is considered to be relevant at the frame level; thus, it includes audio and speech activity.

4.1 Global Information
Since gestures can be semantically related to speech, providing text information could improve gesture appropriateness. As textual features, we use the spoken words within a motion sequence. Word timestamps from the audio transcript are used for extracting the corresponding words. As in the MDM [26] model, the speech text contained in the sequence of poses is passed through a pre-trained CLIP [24] model (version 'ViT-B/32', obtained from https://github.com/openai/CLIP) and then projected from the CLIP output dimension of 512 to a dimension of 64 by a fully connected layer.

For the motion between consecutive generated sequences to have cohesion, 10 previous seed poses are used as conditional input. These poses are flattened, projected to a dimension of 192, and then concatenated with the textual information, forming a vector with the defined latent dimension of 256. Additionally, the timestep embedding of the diffusion process, which indicates which denoising step is being performed, is a sinusoidal positional embedding that is passed through two fully connected layers with a Sigmoid Linear Unit (SiLU) activation in between and projected to the latent dimension. With this, the embedding that represents the global conditioning information (the one that is invariant to the pose sequence) is obtained by summing the timestep embedding with the concatenation of the textual and seed-pose embeddings.

4.2 Fine-grained Information
We work with chunks of 120 poses corresponding to 4 seconds of motion. The noisy poses for the diffusion process are obtained by adding t steps of Gaussian noise to a sequence. These poses are then projected via a linear layer from the pose dimension of 1245 to the latent space dimension. For the audio information, we use the resampled audio data and pass it through the WavLM [6] model (version 'Base+', obtained from https://github.com/microsoft/unilm/tree/master/wavlm). Differently from DiffuseStyleGesture [32], we use the representations extracted from the 11th layer instead of the 12th. The 11th layer is reported to perform better at content-related tasks, such as phoneme recognition and automatic speech recognition. These representations are first interpolated to match the length of the corresponding pose sequence and then projected to a dimension of 64 by a linear layer.

Figure 1: Model architecture.

4.2.1 Speech Activity Information. Due to the dyadic nature of the dataset, some sections of the data are composed of moments in which the main agent is not the active speaker, such as listening and turn-taking moments. Gestures performed in active or non-active moments may play different roles in human interaction and, thus, differ from those performed in other moments. For example, beat gestures occur during articulations of speech and may serve to emphasize what is being said [21]; differently, mimicry, often performed automatically, may enhance helpfulness and strengthen social bonds [28]. Although our model only uses monadic data, we introduce the use of speech activity information. This information, otherwise embedded in audio representations such as spectrograms and MFCCs, may be lost in the abstract WavLM representations. Furthermore, the interpolation of representations to match the pose sequence can blend moments with and without speech activity. Thus, the contribution of such inclusion is believed to be two-fold. First, it provides more straightforward access to fine-grained speech energy. Second, it helps to stress, during training, the difference between gestures in the aforementioned moments, not in terms of functionality, but dynamics.

Speech activity can be inferred through analytical approaches such as energy and F0.
However, the dataset audio contains noise that could affect the computation of these parameters: various speakers, different speech volumes, and background noise such as speech from the interlocutor and breathing. Thus, we consider two scenarios for acquiring speech activity information. The first is based on a pre-trained Voice Activity Detector (VAD), obtained from https://huggingface.co/speechbrain/vad-crdnn-libriparty, that consists of a small CRDNN (a combination of convolutional, recurrent and deep neural networks) trained on the LibriParty dataset (https://github.com/speechbrain/speechbrain/tree/develop/recipes/LibriParty/generate_dataset), a synthetic cocktail-party scenario derived from the Librispeech dataset. When speech is detected, the model outputs a 1, and otherwise a 0. The second approach is taken from the annotated speech text timestamps provided in the dataset. When there is any text, we consider the respective timestamps as 1 and otherwise as 0.

The major difference between these approaches is that the pre-trained model can detect intra-text pauses, whereas the audio transcripts provide word-level timestamp granularity. A comparison of both is shown in Figure 2. From the figure, it is noticeable that the VAD provides closer alignment with the speech energy. Besides, the pre-trained VAD removes the need for audio-aligned annotated speech text, which is sensitive to human perception or error.

Figure 2: Scaled speech activities from timed audio transcripts (red) and from the VAD (black), overlapped with a spectrogram of an eight-second audio sample in the background.

The speech activity sequence extracted from the VAD is used to select, for each pose, one of two embeddings of latent dimension representing the presence or absence of speech. This sequence of embeddings is then concatenated with the noisy poses and the audio embeddings, forming the fine-grained information.

4.3 Training
The fine-grained information is concatenated with the global information along the latent dimension. Then, all the input information is projected back to the latent dimension by an input linear layer and fed to the cross-local attention layer to capture local relations between the features. We then concatenate the global information embedding one more time with the output along the sequence dimension before passing the sequence to the transformer encoder to capture the global context. Finally, we ignore the first token of the output sequence and project the outputs to the pose dimension, which represents the denoised pose (x0) itself. We use positional embeddings to add sequence information in both the cross-local attention and the transformer encoder.

On inference, one sequence is generated at a time. The model outputs a vector G = [g1, g2, ..., g120]. The last 10 poses from the previously generated sequence are used to condition the generation of the next sequence; mean poses are used for conditioning the first sequence.

For post-processing, we use linear interpolation to impose continuity between successive sequences. To smooth motion artifacts in the output, we also apply a Savitzky-Golay [20] filter with a window length of 9 and polynomial order of 3.

The model was trained for 290k steps, with a batch size of 64, on a single NVIDIA Titan Xp GPU, which took about 2.5 days.
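A sketch of the inference-time post-processing, assuming the generated 120-frame chunks share a small overlap that can be blended linearly (the exact interpolation used in the paper is not specified); the Savitzky-Golay settings follow the text.

```python
import numpy as np
from scipy.signal import savgol_filter

def join_chunks(chunks, overlap=10):
    """Concatenate generated (120, 1245) chunks, linearly cross-fading `overlap`
    frames at each seam. This is one simple realization of the 'linear
    interpolation' continuity step; the paper does not detail the exact scheme."""
    out = chunks[0].copy()
    w = np.linspace(0.0, 1.0, overlap)[:, None]
    for nxt in chunks[1:]:
        out[-overlap:] = (1.0 - w) * out[-overlap:] + w * nxt[:overlap]
        out = np.concatenate([out, nxt[overlap:]], axis=0)
    return out

def smooth(motion):
    """Savitzky-Golay smoothing over time: window length 9, polynomial order 3."""
    return savgol_filter(motion, window_length=9, polyorder=3, axis=0)
```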
5 EVALUATION
There is still no objective metric that reliably measures gesture perception. Moreover, previous research has found that objective metrics differ from subjective ones [16]. Therefore, the research team empirically evaluated the proposed model, its variations, and the reference models through visual inspection of their outputs.

We trained the MDM [26] and DiffuseStyleGesture [32] models and used them as references for comparison, i.e., a starting point for development. Although they provide reasonably human-like motion, we found the results unsatisfactory in terms of appropriateness to speech. The outputs seemed unaware of moments such as brief pauses, turn-taking, and listening moments. That is, the agent would frequently make gestures in those moments that appeared inadequate and similar to behaviours performed when it was the active speaker. So, our main focus in developing the model for the GENEA Challenge 2023 was to overcome those issues of disregard for non-speaking moments. Thus, a VAD was employed to leverage speech activity information.

Figure 3: Histograms of the rotational velocities of the main agent's left and right forearm joints for the training set of the dataset (top) and for the output of the proposed model with VAD (bottom). Red and black indicate velocities extracted when the main agent was the active speaker and when it was not.

In order to examine the effectiveness of the VAD, we present in Figure 3 histograms of the rotational velocities of the forearms, joints that are very active when gesturing with the arms, for the real training set (top) and the output of the proposed model (bottom). The figure splits each set into two distributions: when the VAD indicates that there is speech (VAD output equals one), and when the VAD indicates that there is no speech (its output is zero).

For the training set, the histograms reveal distinct patterns associated with speech activity during gesticulation. Speakers in the dataset exhibit increased forearm movements while talking versus silent periods. These insights support our underlying assumption that people tend to perform more gestures, or at least more abrupt gestures, when they are speaking.

The proposed model could reproduce, to some extent, the overall behaviour of the training set. However, it was unable to synthesize motion that reproduced the differences seen in the training set given speech activity, that is, a larger concentration of higher velocities when the agent is speaking. We did an ablation study with the proposed model without the VAD module. Its histogram was similar to the one with VAD. However, visual inspections of the outputs by the research team favored the outputs of the proposed model with VAD in terms of speech and gesture alignment.

We also compared outputs from models with and without text input. However, we did not find a significant amount of semantically related gestures in their output. Further investigation should be carried out to indicate whether there is a sufficient amount of such gestures in the dataset for models to learn from. Still, we kept text as input, as motion quality was not impaired.
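The Figure 3 comparison could be reproduced along the lines below, assuming per-frame forearm rotational speeds and an equally long 0/1 VAD sequence; array names and plotting details are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def speed_histograms(ax, forearm_speed, vad, title):
    """forearm_speed: (T,) rotational speed of a forearm joint.
    vad: (T,) 0/1 speech-activity labels aligned with the motion frames."""
    speaking = vad.astype(bool)
    bins = np.linspace(0.0, np.percentile(forearm_speed, 99), 60)
    ax.hist(forearm_speed[speaking], bins=bins, density=True, alpha=0.6,
            color="red", label="speaking (VAD = 1)")
    ax.hist(forearm_speed[~speaking], bins=bins, density=True, alpha=0.6,
            color="black", label="not speaking (VAD = 0)")
    ax.set_title(title)
    ax.legend()

# Example usage with precomputed arrays:
# fig, (ax_top, ax_bot) = plt.subplots(2, 1, sharex=True)
# speed_histograms(ax_top, train_speed, train_vad, "training set")
# speed_histograms(ax_bot, model_speed, model_vad, "proposed model (with VAD)")
```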
Motions occasionally converge to an unusual orodd-looking pose, absurd rotations still take place, and jittering issometimes noticeable.6 RESULTS AND DISCUSSIONThe results of the shared evaluations of the GENEA Challenge 2023indicated that our model (condition SH) is competitive with mostconditions in terms of human-likeness but obtained relatively poorresults for appropriateness to speech [17].Figure 4 presents human-likeness ratings. Subjects participantsgave their ratings based on how human-like the motions appeared,from 0 (worst) to 100 (best). Real motion data (NA) achieved amedian rating of 71, the baselines 46 (BD) and 43 (BM), while ourcondition scored 46. We believe that the module that contributed themost to the human-likeness of generated gestures is the attentionmechanism. As Yang et al. [ 32] showed in their ablation studies,the cross-local attention module played a significant role in termsof human-likeness ratings.Two evaluations were performed to assess gesture appropri-ateness to speech: appropriateness for agent speech and for theinterlocutor speech. The first contains mainly moments where themain agent is the active speaker, while the roles are reversed in thelatter.Human-likeness ratingNA SG SF SJ SL SE SH BD SD BM SI SK SA SB SC020406080100Figure 4: Shared human-likeness rating study. Red bars aremedian ratings; yellow diamonds are mean ratings. Entriesto the challenge are labeled SA-SL (ours is SH), BD and BMare the baselines [ 5], and NA is real data. Extracted fromKucherenko et al. [17].NA SG SJBM SFSK SESDBD SISLSBSASHSC0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatchedFigure 5: Shared appropriateness for agent speech responses.Entries to the challenge are labeled SA-SL (ours is SH), BDand BM are the baselines [ 5], and NA is real data. Extractedfrom Kucherenko et al. [17].For the appropriateness of agent speech evaluation, subjectswere presented with speech audio and two motions generated bythe model. One motion is the output generated with the speechaudio presented as input, and the other is the output from anothersegment of speech audio. For our condition, subjects preferred thematching motion 52.9% of the time, slightly above chance. Althoughone of the lowest mean appropriateness scores, there is no stati-cally significant differences in the scores of ours and another tenconditions (conditions BM to SA, in Figure 5).Our condition had the lowest score in the appropriateness forthe interlocutor evaluation. This means that subjects found themismatched stimuli more appropriate. However, our model does notuse any interlocutor information as input. Thus, from the model’sICMI ’23 Companion, October 9–13, 2023, Paris, France Tonoli, et al.NA SABD SB SLSESF SISDBM SJSCSKSGSH0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatchedFigure 6: Shared appropriateness for the interlocutor re-sponses. Entries to the challenge are labeled SA-SL (ours isSH), BD and BM are the baselines, and NA is real data. Ex-tracted from Kucherenko et al. [17].perspective, the output of both matched and mismatched stimulifor this evaluation was generated using the same inputs. Evidently,it is not expected that the outputs be exactly the same due to theprobabilistic nature of the model. 
But both outputs are expectedto be equivalent in terms of human-likeness and appropriateness,thus scoring similarly to chance (50%).We also noticed from all three evaluations that our model had awide range of scores. For instance, whiskers from the box plot visu-alization of Figure 4 span almost the entire y-axis; our condition,along with condition SK, had the highest confidence intervals ofmedian and mean ratings. In the appropriateness for agent speech,our condition had the third highest number of clear preferences formatched stimuli, the highest for mismatched, and the second lowestfor no preferences when compared to other entries to the challenge.Thus, we argue that the proposed model is indeed capable of gen-erating gestures that are competitive in terms of human-likenessand appropriateness for the main agent. However, the artifactsmentioned in the previous section hinder gesture perception andshould be addressed before any conclusion regarding the proposedarchitecture and individual modules.7 CONCLUSIONThis paper describes the proposed diffusion-based model for ges-ture generation that uses pre-trained VAD. Incorporating speechactivity information in such models could improve responsivenessduring rapid back-and-forth interactions. Also, a VAD can explicitlyprovide this information without needing human-annotated tran-scripts, thus potentially suited for real-time dialogue. Our model hasbeen compared with others in the GENEA Challenge 2023, a crowd-sourced evaluation that directly compares different methods whilecontrolling factors such as data and evaluation methodology. Theevaluation showed that our model is compatible with other entriesto the challenge in terms of human-likeness, but appropriatenessto speech is still unimpressive despite our efforts.Our experiments revealed mixed results regarding the effective-ness of the proposed implementation improvements to the gesturegeneration system. While convergences to undesired poses, extremejoint rotations, and jittering were not frequent, they nonetheless oc-curred. Besides, output motion was unstable, i.e., when generatingmotions given the same inputs, the resulting motion quality variedgreatly. These issues may have contributed to subpar performancein evaluations and compromised the responsiveness of generatedgestures to speaking moments. Although our adaptations hold po-tential value for gesture generation tasks, further improvements areneeded to leverage their benefits fully. Especially the explicit useof speech activity information that could be leveraged to addressturn-taking momentsWe intend to focus primarily on improving speech and gesturealignment for future work. An interesting approach is adapting anexternal framework for alignment as the one proposed by Badlaniet al. [ 3]. Another obvious path is to incorporate data from theinterlocutor to capture the aspects of dyadic scenarios.ACKNOWLEDGMENTSThis study was partially funded by the Coordenação de Aperfeiçoa-mento de Pessoal de Nivel Superior – Brasil (CAPES) – FinanceCode 001. The first author is grateful to the Eldorado ResearchInstitute.REFERENCES[1]Simon Alexanderson, Rajmund Nagy, Jonas Beskow, and Gustav Eje Henter. 2022.Listen, denoise, action! audio-driven motion synthesis with diffusion models.arXiv preprint arXiv:2211.09707 (2022).[2]Tenglong Ao, Zeyi Zhang, and Libin Liu. 2023. GestureDiffuCLIP: Gesture diffu-sion model with CLIP latents. 
arXiv preprint arXiv:2303.14613 (2023).[3]Rohan Badlani, Adrian Łańcucki, Kevin J Shih, Rafael Valle, Wei Ping, and BryanCatanzaro. 2022. One TTS alignment to rule them all. In ICASSP 2022-2022 IEEEInternational Conference on Acoustics, Speech and Signal Processing (ICASSP) . IEEE,6092–6096.[4]Tadas Baltrušaitis, Chaitanya Ahuja, and Louis-Philippe Morency. 2019. Mul-timodal Machine Learning: A Survey and Taxonomy. IEEE Transactions onPattern Analysis and Machine Intelligence 41, 2 (Feb. 2019), 423–443. https://doi.org/10.1109/TPAMI.2018.2798607 Conference Name: IEEE Transactions onPattern Analysis and Machine Intelligence.[5]Che-Jui Chang, Sen Zhang, and Mubbasir Kapadia. 2022. The IVI Lab entry tothe GENEA Challenge 2022–A Tacotron2 Based Method for Co-Speech GestureGeneration With Locality-Constraint Attention Mechanism. In Proceedings of the2022 International Conference on Multimodal Interaction (ICMI ’22) . Associationfor Computing Machinery, New York, NY, USA, 784–789. https://doi.org/10.1145/3536221.3558060[6]Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen,Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, et al .2022. Wavlm:Large-scale self-supervised pre-training for full stack speech processing. IEEEJournal of Selected Topics in Signal Processing 16, 6 (2022), 1505–1518.[7]Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans onimage synthesis. Advances in neural information processing systems 34 (2021),8780–8794.[8]Ylva Ferstl, Michael Neff, and Rachel McDonnell. 2019. Multi-Objective Adversar-ial Gesture Generation. In Proceedings of the 12th ACM SIGGRAPH Conference onMotion, Interaction and Games (Newcastle upon Tyne, United Kingdom) (MIG ’19) .Association for Computing Machinery, New York, NY, USA, Article 3, 10 pages.https://doi.org/10.1145/3359566.3360053[9]Saeed Ghorbani, Ylva Ferstl, Daniel Holden, Nikolaus F Troje, and Marc-AndréCarbonneau. 2023. ZeroEGGS: Zero-shot Example-based Gesture Generationfrom Speech. In Computer Graphics Forum , Vol. 42. Wiley Online Library, 206–216.[10] Gustav Eje Henter, Simon Alexanderson, and Jonas Beskow. 2020. Moglow:Probabilistic and controllable motion synthesis using normalising flows. ACMTransactions on Graphics (TOG) 39, 6 (2020), 1–14.[11] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, AlexeyGritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet,et al.2022. Imagen video: High definition video generation with diffusion models.arXiv preprint arXiv:2210.02303 (2022).[12] Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilisticmodels. Advances in neural information processing systems 33 (2020), 6840–6851.Gesture Generation with Diffusion Models Aided by Speech Activity Information ICMI ’23 Companion, October 9–13, 2023, Paris, France[13] Jonathan Ho and Tim Salimans. 2022. Classifier-free diffusion guidance. arXivpreprint arXiv:2207.12598 (2022).[14] Xun Huang and Serge Belongie. 2017. Arbitrary style transfer in real-timewith adaptive instance normalization. In Proceedings of the IEEE internationalconference on computer vision . 1501–1510.[15] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. 2020.Diffwave: A versatile diffusion model for audio synthesis. arXiv preprintarXiv:2009.09761 (2020).[16] Taras Kucherenko, Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, and Gustav EjeHenter. 2021. A Large, Crowdsourced Evaluation of Gesture Generation Systemson Common Data: The GENEA Challenge 2020. 
In 26th International Conferenceon Intelligent User Interfaces (College Station, TX, USA) (IUI ’21) . Associationfor Computing Machinery, New York, NY, USA, 11–21. https://doi.org/10.1145/3397481.3450692[17] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. InProceedings of the ACM International Conference on Multimodal Interaction (ICMI’23). ACM.[18] Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha S Srinivasa,and Yaser Sheikh. 2019. Talking with hands 16.2 m: A large-scale dataset of syn-chronized body-finger motion and audio for conversational motion analysis andsynthesis. In Proceedings of the IEEE/CVF International Conference on ComputerVision (ICCV ’19) . 763–772.[19] Haohe Liu, Zehua Chen, Yi Yuan, Xinhao Mei, Xubo Liu, Danilo Mandic, WenwuWang, and Mark D Plumbley. 2023. Audioldm: Text-to-audio generation withlatent diffusion models. arXiv preprint arXiv:2301.12503 (2023).[20] Jianwen Luo, Kui Ying, and Jing Bai. 2005. Savitzky–Golay smoothing anddifferentiation filter for even number data. Signal processing 85, 7 (2005), 1429–1434.[21] David McNeill. 1992. Hand and Mind: What Gestures Reveal About Thought.University of Chigado Press. https://doi.org/10.1177/002383099403700208[22] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin,Bob McGrew, Ilya Sutskever, and Mark Chen. 2021. Glide: Towards photorealisticimage generation and editing with text-guided diffusion models. arXiv preprintarXiv:2112.10741 (2021).[23] Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter,and Michael Neff. 2023. A Comprehensive Review of Data-Driven Co-SpeechGesture Generation. In Computer Graphics Forum , Vol. 42. Wiley Online Library,569–596.[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark,et al.2021. Learning transferable visual models from natural language supervision.InInternational conference on machine learning . PMLR, 8748–8763.[25] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and BjörnOmmer. 2022. High-resolution image synthesis with latent diffusion models. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition .10684–10695.[26] Guy Tevet, Sigal Raab, Brian Gordon, Yoni Shafir, Daniel Cohen-or, andAmit Haim Bermano. 2023. Human Motion Diffusion Model. In The EleventhInternational Conference on Learning Representations . https://openreview.net/forum?id=SJ1kSyO2jwu[27] Jonathan Tseng, Rodrigo Castellon, and Karen Liu. 2023. Edge: Editable dancegeneration from music. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition . 448–458.[28] Rick B Van Baaren, Rob W Holland, Kerry Kawakami, and Ad Van Knippenberg.2004. Mimicry and prosocial behavior. Psychological science 15, 1 (2004), 71–74.[29] V. Vinayagamoorthy, M. Gillies, A. Steed, E. Tanguy, X. Pan, C. Loscos, andM. Slater. 2006. Building Expression into Virtual Characters. In Eurographics2006 - State of the Art Reports , Brian Wyvill and Alexander Wilkie (Eds.). TheEurographics Association. https://doi.org/10.2312/egst.20061052[30] Petra Wagner, Zofia Malisz, and Stefan Kopp. 2014. Gesture and speech ininteraction: An overview. 
, 209–232 pages.[31] Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao,Yingxia Shao, Wentao Zhang, Bin Cui, and Ming-Hsuan Yang. 2022. Diffusionmodels: A comprehensive survey of methods and applications. arXiv preprintarXiv:2209.00796 (2022).[32] Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, WeihongBao, Ming Cheng, and Long Xiao. 2023. DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models. arXiv preprintarXiv:2305.04919 (2023).[33] Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2022. The GENEA Challenge 2022: A LargeEvaluation of Data-Driven Co-Speech Gesture Generation (ICMI ’22) . Associationfor Computing Machinery, New York, NY, USA, 736–747. https://doi.org/10.1145/3536221.3558058[34] Fan Zhang, Naye Ji, Fuxing Gao, and Yongping Li. 2023. DiffMotion: Speech-Driven Gesture Synthesis Using Denoising Diffusion Model. In MultiMedia Mod-eling , Duc-Tien Dang-Nguyen, Cathal Gurrin, Martha Larson, Alan F. Smeaton,Stevan Rudinac, Minh-Son Dao, Christoph Trattner, and Phoebe Chen (Eds.).Springer International Publishing, Cham, 231–242.[35] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, LeiYang, and Ziwei Liu. 2022. Motiondiffuse: Text-driven human motion generationwith diffusion model. arXiv preprint arXiv:2208.15001 (2022).[36] Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, and Hao Li. 2019. On theContinuity of Rotation Representations in Neural Networks. In Proceedings ofthe IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) .[37] Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, and Lequan Yu. 2023.Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR) . 10544–10553. |
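The code sketch referenced above (speech-activity extraction and Savitzky-Golay smoothing) follows. Speech segments, whether produced by the pre-trained SpeechBrain VAD or taken from word-level transcript timestamps, are converted into a per-frame 0/1 sequence aligned with the 30 fps poses, and the generated motion is smoothed with a window length of 9 and polynomial order of 3. This is a minimal sketch rather than the authors' code: the helper names, the example segment list, and the pose feature dimension are assumptions, while scipy.signal.savgol_filter is the standard SciPy routine.

```python
import numpy as np
from scipy.signal import savgol_filter

FPS = 30  # pose frame rate used in the paper


def segments_to_frame_activity(segments, n_frames, fps=FPS):
    """Convert (start_sec, end_sec) speech segments, e.g. from a pretrained
    VAD or from word-level transcript timestamps, into a per-frame 0/1
    speech-activity sequence aligned with the pose frames."""
    activity = np.zeros(n_frames, dtype=np.int64)
    for start, end in segments:
        lo = max(0, int(np.floor(start * fps)))
        hi = min(n_frames, int(np.ceil(end * fps)))
        activity[lo:hi] = 1
    return activity


def smooth_motion(motion, window_length=9, polyorder=3):
    """Savitzky-Golay smoothing over time (axis 0), matching the
    post-processing step described above.
    motion: array of shape (n_frames, n_pose_features)."""
    return savgol_filter(motion, window_length=window_length,
                         polyorder=polyorder, axis=0)


# Example: an eight-second clip with two (assumed) speech segments.
segments = [(0.4, 2.1), (3.0, 6.5)]              # seconds
activity = segments_to_frame_activity(segments, n_frames=8 * FPS)
generated_motion = np.random.randn(8 * FPS, 57)  # placeholder pose features
smoothed = smooth_motion(generated_motion)
```

In the paper's pipeline, the resulting per-frame activity sequence would then index one of the two learned speech/no-speech embeddings for each pose before being concatenated with the noisy poses and audio embeddings.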
bBrebR1YpXe | The UEA Digital Humans entry to the GENEA Challenge 2023Jonathan WindleUniversity of East AngliaUnited KingdomIain MatthewsUniversity of East AngliaUnited KingdomBen MilnerUniversity of East AngliaUnited KingdomSarah TaylorIndependent ResearcherUnited KingdomABSTRACTThis paper describes our entry to the GENEA (Generation and Eval-uation of Non-verbal Behaviour for Embodied Agents) Challenge2023. This year’s challenge focuses on generating gestures in adyadic setting – predicting a main-agent’s motion from the speechof both the main-agent and an interlocutor. We adapt a Transformer-XL architecture for this task by adding a cross-attention modulethat integrates the interlocutor’s speech with that of the main-agent. Our model is conditioned on speech audio (encoded usingPASE+), text (encoded using FastText) and a speaker identity label,and is able to generate smooth and speech appropriate gesturesfor a given identity. We consider the GENEA Challenge user studyresults and present a discussion of our model strengths and whereimprovements can be made.CCS CONCEPTS•Computing methodologies →Artificial intelligence ;Ani-mation .KEYWORDSSpeech-to-gesture, 3D pose prediction, gesture generation, Transformer-XL, Self-Attention, Cross-AttentionACM Reference Format:Jonathan Windle, Iain Matthews, Ben Milner, and Sarah Taylor. 2023. TheUEA Digital Humans entry to the GENEA Challenge 2023. In INTERNA-TIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI ’23), Oc-tober 9–13, 2023, Paris, France. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3577190.36161161 INTRODUCTIONCo-speech gesturing contributes to language production and per-ception during conversation. Gestures can aid conversation turn-taking and listener feedback while also providing semantic contextand may be indicative of emotion and emphasis [ 4,9,16,22]. Speech-driven gesture generation has predominantly focused on estimatingmotion for monadic speech input of a main-agent, with no knowl-edge of interlocutor speech and no concept of interaction. Instead,Permission to make digital or hard copies of part or all of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for third-party components of this work must be honored.For all other uses, contact the owner/author(s).ICMI ’23, October 9–13, 2023, Paris, France©2023 Copyright held by the owner/author(s).ACM ISBN 979-8-4007-0055-2/23/10.https://doi.org/10.1145/3577190.3616116this year’s GENEA challenge focuses on generating gestures in adyadic setting – predicting a main-agent’s motion from the speechof both the main-agent itself and also the speech of the interlocutor.We introduce a system to the GENEA Challenge 2023 that usesPASE+ [ 21] speech embeddings in conjunction with FastText [ 2]word embeddings and a speaker identity label as input to an adaptedTransformer-XL [ 3] architecture to generate smooth, contextuallyand temporally coherent motion that can adapt to varying lengthsof historic context. 
Specifically, we extend the Transformer-XLmodel to provide cross-attention with the interlocutor’s speech toimpart knowledge of both speakers into the prediction.Video examples and code can be found in the supplement atgithub.com/JonathanPWindle/uea-dh-genea23.2 BACKGROUND & PRIOR WORKMany speech-to-motion deep learning techniques are built uponrecurrent models, such as bi-directional Long Short-Term Memorymodels (LSTMs) [ 5,7,23]. Transformer architectures are growingtraction in favour of LSTM models in sequence-based AI, withsequence-based motion prediction models already making use ofthem [ 1,10,15,24]. Transformer models do not have a concept oftemporal position but can effectively model temporal informationoften using a sinusoidal position embedding which is added to theinput.Transformers rely on attention mechanisms which inform thenetwork which parts of data to focus on [ 25]. In self-attention, themechanism is applied to the input sequence to find which elementswithin the same sequence may relate to each other and which arekey to focus on. Conversely, cross-attention is computed for oneinput source in relation to a separate input source, calculating whichelements from one sequence may relate and be important to focuson in another sequence.To perform sequence-to-sequence generation using a vanillatransformer as defined in Vaswani et al. [ 25] a sequence is processedover a sliding window with a one-frame stride. For each windowof input, one frame of output is generated. This is computationallyexpensive and window size is limited by the longest input sequenceseen during training. As the sequence length increases, the size ofthe self-attention mechanism also grows exponentially, leading tomemory and computational limitations.The Transformer-XL architecture [ 3] differs from the traditionaltransformer architecture in two key ways: 1) Attention is calcu-lated conditioned on the previous context, and 2) the positionalencoding uses a learned relative embedding. The Transformer-XLarchitecture allows for extended attention beyond a fixed lengthICMI ’23, October 9–13, 2023, Paris, France Windle, et al.by using segment-level recurrence with state reuse allowing thealteration of context length. The Transformer-XL can therefore betrained efficiently on small segment lengths while retaining histori-cal influence through the state reuse. As the historic context lengthcan vary, the Transformer-XL introduces a learned, relative posi-tional encoding scheme. Due to its improved ability for modellingsequences, we adapt the Transformer-XL architecture for dyadicgesture generation.3 DATA & PREPROCESSINGOur model makes use of the GENEA challenge data [ 11] derivedfrom the Talking With Hands dataset [ 12]. This data includes dyadicconversations between a main-agent and interlocutor and consistsof high-quality 30fps mocap data in Biovision Hierarchical (BVH)format, with corresponding speech audio and text transcripts. Ourtask is to generate the main-agent motion conditioned on bothmain-agent and interlocutor speech. We process both main-agentand interlocutor speech data the same, using all available modalities;motion, speech, transcription and speaker identity.3.1 MotionEuler angles are required for test submission and are a convenientrepresentation supported by many available 3D animation pipelines.Despite this, Euler angles are discontinuous and difficult for neuralnetworks to learn [ 28]. We convert rotations to the 6D rotationrepresentation presented by Zhou et al. 
[ 28] for their suitability todeep learning tasks. Global skeleton position is also encoded usingthreex,y,z values. All values are standardised by subtracting themean and dividing by the variance computed from the trainingdata.Each identity in the dataset has a skeleton with different bonelengths. Additionally, per-frame joint offsets are also present in thedata, possibly to account for bone-stretching in the data capture.Our analysis of these joint offset values revealed very low variance,and setting them to a pre-defined fixed value for all frames did notimpact visual performance. We therefore compute one set of bonelengths and offsets per speaker to simplify the training pipeline. Werandomly select a sample corresponding to each identity and fix thebone lengths and offsets accordingly using the first data frame. Jointpositions can then be computed using the joint angles (measuredor predicted) and pre-defined speaker-specific bone measurements.3.2 Speech3.2.1 Audio. We extract audio features using the problem-agnosticspeech encoder (PASE+) [ 21]. PASE+ is a feature embedding learnedusing a multi-task learning approach to solve 12 regression tasksaimed at encoding important speech characteristics. These 12 tasksinclude estimating MFCCs, FBANKs and other speech-related in-formation including prosody and speech content.PASE+ requires audio to be sampled at 16KHz, so we used band-sinc filtering to reduce the audio sample rate from 42KHz to 16KHz.We use the released, pre-trained PASE+ model to extract audiofeature embeddings of size 768 that represents a 33ms window ofaudio to align with the 30 fps motion. The weights for this modelare not updated during training.3.2.2 Text. We extract features from the text transcriptions usingthe FastText word embedding described by Bojanowski et al. [ 2]using the pre-trained model released by Mikolov et al. [ 17]. Foreach spoken word, we extract the word embedding and align theembedding values to each 33ms window of motion. If no word isspoken at a given frame then a vector of zero values is passed. Whena word is spoken across multiple frames, the vector is repeated forthe appropriate number of frames.4 METHODWe adapt the Transformer-XL [ 3] architecture for speech-drivengesture generation. Specifically, we modify this architecture to useboth self and cross-attention. The advantage of the Transformer-XLarchitecture is that it allows us to model the longer term relationshipbetween speech and gesture for input of any duration.Our feature extraction process, shown in Figure 1, is used togenerate a feature vector Xof lengthwfor both the main-agentand interlocutor. These features are then passed to our model asshown in our overview Figure 2 where they are processed using anumber os Self-Attention Blocks andCross-Attention Blocks .FastT ext"Hello"PASE+SpeakerEmbeddingSpeaker LabelLinearFigure 1: Outline of our data processing pipeline. Our processtakes as input, wframes starting at frame tof speech audio,text transcript and a speaker identity label to generate afeature vector X. We use pre-trained models for the audioand text inputs. Red box defines frozen weights.4.1 Feature ExtractionWe segment the input into non-overlapping segments of length wframes. For each segment, an input feature vector Xis generatedand used to predict Y, a sequence of poses of length w. Our modelis called for each w-frame feature vector X. 
In a speech sequenceof lengthT, it is therefore called ⌈Tw⌉times.For each segment, we extract audio (PASE+) features at:t+w, andtext (FastText) features ft:t+was described in Section 3.2, where trepresents the start frame of a window w. For each utterance, thereis also a speaker label provided. This is a unique ID which we passto a learned embedding layer. The embedding layer acts as a lookupThe UEA Digital Humans entry to the GENEA Challenge 2023 ICMI ’23, October 9–13, 2023, Paris, FranceSelf-Attention BlockRepeat timesTransformer-XL Attention BlockFeed Forward BlockQ K VLinear QKV NetSkipQLinear Q NetV KLinear KV NetTransformer-XL Attention BlockFeed Forward Block Relative Encoding NetRelative Encoding NetLinearCross-Attention BlockRepeat timesKeyMain-SpeakerInterlocutorSelf-AttentionCross-AttentionSinusoidal PositionEmbeddingInput on first layerInput on subsequentlayers Figure 2: Outline of our prediction model which takes as input, wmotion frames worth of encoded conditioning informationstarting at time tand predicts wframes of body motion. We show a self-attention block and cross-attention block, where weextractQ,K,V vectors using main-agent or interlocutor speech according to the attention type conditioned on previous mnumber of hidden states M. These vectors are passed to the Transformer-XL attention block to calculate attention before beingfed into a feed-forward block. A final linear layer predicts wposes ˆyt:t+w.ICMI ’23, October 9–13, 2023, Paris, France Windle, et al.table for learned feature embeddings that are representative of eachspeaker style. The trainable weights ensure that two speakers withsimilar gesture styles are close in the latent embedding space, andconversely, those with different gesturing styles are far apart.Each modality is extracted and concatenated into a single featurevector Xas shown in Figure 1. Feature vectors for both the main-agent and the interlocutor are extracted in the same way using thesame learned weights. This is because a speaker may appear as themain-agent in some sequences and the interlocutor in others.4.2 Self-AttentionAs shown in Figure 2, we process the features from the main-agent using a self-attention block. The attention score is defined inVaswani et al. [25] as:Attention(Q,K,V)=softmax(QKT√︁dk)VWhere Query Q, KeyK, and Value Vare all vectors and queriesand keys are of dimension dk, and values of dimension dv. Thesevectors are often linear projections of an input vector into theirrespective dimensions d.When calculating attention scores in the Transformer-XL model,historic context is included using segment-level recurrence withstate reuse. This is achieved by caching previous hidden state se-quences which can be used when processing future segments. Whenno historic context is present at the start of the speech sequence,our Transformer-XL extracts Q,K andVvectors from the main-agent inputs alone. The historic context from processed segmentsMof lengthmis cached as each segment is processed. Q,K andVvectors are then extracted from the subsequent inputs, conditionedon previous context. This process is completed using a Linear QKVNet shown in Figure 2 which is a single linear layer.Transformer models do not have inherent knowledge of posi-tional order. To ensure temporal coherency, a positional encodingis often added to the input vectors to inject some position contextto the model. 
As the Transformer-XL architecture can have varyinglengths of historic context and is not constrained to a maximumlength, a learned relative position encoding ris instead utilised.The learned relative encoding is from a single linear layer and takesa sinusoidal position embedding for the full length of context, thatis the sum of both memory length available and the query length.Rather than injecting the temporal information to the input beforecalculating Q,KandV, which is the approach used in Vaswaniet al. [ 25], the Transformer-XL inputs this information after thesevectors have been extracted at the time of calculating the attentionscore.UsingQ,KandVin conjunction with the relative position en-codingr, we use the Transformer-XL attention block to calculateattention vectors. As Figure 2 shows, these attention vectors arethen passed to a Feed Forward Block which comprises of two Lin-ear layers, with a ReLU activation on the first output and dropoutapplied to both.Each self-attention block has multiple attention heads, each aim-ing to extract different attention features and a self-attention blockis repeatedNselftimes, with each layer feeding its output to the next.Memory values Mare persisted on a per-layer basis and thereforehidden states are specific to each self-attention block. The lengthof this memory mcan be altered during training and evaluation.4.3 Cross-AttentionWhile it is reasonable to assume the main-agent speech is drivingthe majority of the gestures, the interlocutor can also influencethe motion of the agent indicating turn taking and backchannelcommunication. For example, the main-agent might nod to showagreement or understanding when the interlocutor is speaking.Therefore we aim to derive the main source of information drivingthe motion from the main-agent’s speech, but also include the inter-locutor’s speech. We adapt the Transformer-XL to not only computeself-attention over the main-agent inputs, but to also utilise cross-attention from the interlocutor while maintaining segment-levelrecurrence and relative position encoding. This cross-attentionblock is shown in Figure 2.Cross-attention is an attention mechanism where the Query Qis extracted from the input source and the Key Kand ValueVareextracted from an external input element. Our cross-attention blockuses a similar approach as the self-attention block defined in Section4.2, but instead has two separate networks to process the inputs; oneto extractQfrom the main-agent self-attention encoding and one toextractKandVderived from the interlocutor speech. For each layerof cross-attention blocks, the input to the Qnet is a skip connectionfrom the output of the self-attention encoder and therefore remainsthe same input for all cross-attention blocks . The input to the KVnetin the first iteration is the interlocutor feature vectors (described inSection 4.1), and the output from a cross-attention block thereafter.The output from the cross-attention block is then passed to asingle linear layer which predicts Y, the standardised 6D rotationsof each joint and the global position of the skeleton.4.4 Training ProcedureFor each segment of speech of length w, we predict the pose rep-resented by a vector of joint rotations ˆYof lengthw. In motionsynthesis it is common to include both geometric and temporalconstraints in the loss function to ensure that the model gener-ates output that is both geometrically and dynamically plausible[6,24,26]. 
Our loss function Lccomprises multiple terms includingaL1loss on the rotations ( Lr), positions ( Lp), velocity (Lv), acceler-ation (La) and kinetic energy ( Lv2) of each joint. If we take yrandˆyrto be natural mocap and predicted 6D rotations respectively; ypandˆypto to be positions in world space computed using forwardkinematics given the predicted joint angles and the pre-definedspeaker-specific bone lengths, we use the following loss function:Lr=L1(yr,ˆyr)Lp=L1(yp,ˆyp)Lv=L1(f′(yp),f′(ˆyp))Lv2=L1(f′(yp)2,f′(ˆyp)2)La=L1(f′′(yp),f′′(ˆyp))Lc=λpLp+λvLv+λaLa+λrLr+λv2Lv2(1)The UEA Digital Humans entry to the GENEA Challenge 2023 ICMI ’23, October 9–13, 2023, Paris, FranceWheref′andf′′are the first and second derivatives respectively.Each term has a λweighting to control the importance of each termin the loss.Table 1 summarises the parameters used, optimised using a ran-dom grid search parameter sweep. These settings were chosen usinga combination of low validation loss values and quality of the pre-dicted validation sequences as observed by our team. We train ourmodel for 1770 epochs using the AdamW [ 14] optimiser and foundthat a segment length wof 90 frames and memory length mof 180frames was optimal. The Feed Forward Blocks used in both self andcross-attention layers are comprised using the same topology andsize.Hyperparameter ValueTransformerXL Head Dimension 32Number Heads 32Self-Attention Layers ( Nself) 4Cross-Attention Layers ( Ncross)2Feed Forward Block Dropout 0.2Hidden Size 4096Embeddings Feature Embedding 1024Speaker Embedding 8Training Batch Size 32Learning Rate 0.00001λr 1λp 0.01λv,λa 0.5λv2 0.2Context Segment Length ( w) 90 framesMemory Length ( m) 180 framesTable 1: Training hyperparameters.5 RESULTSOur approach is evaluated in conjunction with the GENEA Chal-lenge 2023 [ 11]. Each challenge participant submitted 70 BVH filesfor main-agent motion generated using the speech of the main-agent and interlocutor for each interaction. Using these submittedBVH files, motion is rendered on the same character for comparison.There are three studies of interest in this challenge; human likeness,appropriateness to speech and appropriate to interlocutor. Eachchallenge participant is assigned a unique ID to provide anonymityduring the evaluation process, our ID which will be used in Figuresand Tables throughout is SJ.NAdenotes natural motion of themocap sequences, BDandBMare baseline systems in a dyadicand monadic setting respectively. We give a brief overview of eachevaluation method, however, we strongly recommend also readingthe main challenge paper [11] for full details.Condi- Human-likenesstion Median MeanNA 71∈[70,71]68.4±1.0SG 69∈[67,70]65.6±1.4SF 65∈[64,67]63.6±1.3SJ 51∈[50,53]51.8±1.3SL 51∈[50,51]50.6±1.3SE 50∈[49,51]50.9±1.3SH 46∈[44,49]45.1±1.5BD 46∈[43,47]45.3±1.4SD 45∈[43,47]44.7±1.3BM 43∈[42,45]42.9±1.3SI 40∈[39,43]41.4±1.4SK 37∈[35,40]40.2±1.5SA 30∈[29,31]32.0±1.3SB 24∈[23,27]27.4±1.3SC 9∈[9,9]11.6±0.9Table 2: Summary statistics of user-study ratings from thehuman-likeness study, with confidence intervals at the levelα= 0.05. Conditions are ordered by decreasing sample medianrating. Our model results are highlighted in pink . Table andcaption from [11]....over condition x, in terms of human-likenessSignificant preference for condition y...NA SG SF SJ SL SE SH BD SD BM SI SK SA SB SCNASGSFSJSLSESHBDSDBMSISKSASBSCFigure 3: Significance of pairwise differences between condi-tions in human-likeness study. 
White means that the condi-tion listed on the y-axis rated significantly above the condi-tion on the x-axis, black means the opposite ( yrated below x),and grey means no statistically significant difference at thelevelα= 0.05 after Holm-Bonferroni correction. Conditionsare listed in the same order as in Table 2. Figure and captionfrom [11].ICMI ’23, October 9–13, 2023, Paris, France Windle, et al.5.1 Human LikenessThis user-study aims to evaluate how human-like the motion gen-erated is, independent of the speech. Although each comparisonsystem motion corresponds to the same input speech and condi-tioning, these sequences were muted to ensure ratings can onlydepend on the motion seen in the videos. 8 systems were comparedat any one time and participants were asked “Please indicate on asliding scale how human-like the gesture motion appears”. Studyparticipants gave their ratings in response to this question on ascale from 0 (worst) to 100 (best).Summary statistics (median, mean) are shown in Table 2 andsignificance comparisons are provided in Figure 3. Our system(SJ) was evaluated to be the third highest ranking of submittedsystems with regards to mean and median human likeness score.Figure 3 shows only NA,SGandSFare significantly better thanour system. Our system scores significantly higher than 9 othersystems, including both baseline systems.Condi-MASPref. Raw response counttion matched 2 1 0−1−2 SumNA 0.81±0.06 73.6% 755 452 185 217 157 1766SG 0.39±0.07 61.8% 531 486 201 330 259 1807SJ 0.27±0.06 58.4% 338 521 391 401 155 1806BM 0.20±0.05 56.6% 269 559 390 451 139 1808SF 0.20±0.06 55.8% 397 483 261 421 249 1811SK 0.18±0.06 55.6% 370 491 283 406 252 1802SI 0.16±0.06 55.5% 283 547 342 428 202 1802SE 0.16±0.05 54.9% 221 525 489 453 117 1805BD 0.14±0.06 54.8% 310 505 357 422 220 1814SD 0.14±0.06 55.0% 252 561 350 459 175 1797SB 0.13±0.06 55.0% 320 508 339 386 262 1815SA 0.11±0.06 53.6% 238 495 438 444 162 1777SH 0.09±0.07 52.9% 384 438 258 393 325 1798SL 0.05±0.05 51.7% 200 522 432 491 170 1815SC−0.02±0.04 49.1% 72 284 1057 314 76 1803Table 3: Summary statistics of user-study responses from theappropriateness to speech study, with confidence intervalsfor the mean appropriateness score (MAS) at the level α= 0.05.“Pref. matched” identifies how often test-takers preferredmatched motion in terms of appropriateness, ignoring ties.Our model results are highlighted in pink . Table and cap-tion from [11].5.2 Speech AppropriatenessTo measure appropriateness of gestures to speech, participantswere asked to view two videos and answer “Which character’smotion matches the speech better, both in terms of rhythm andintonation and in terms of meaning?”. Both video stimuli are fromthe same condition and thus ensure the same motion quality, butone matches the speech and the other is mismatched, generatedfrom an unrelated speech sequence. Five response options wereavailable, namely “Left is clearly better”, “Left is slightly better”,“They are equal”, “Right is slightly better”, and “Right is clearlybetter”. Each answer is assigned a value of -2, -1, 0, 1, 2 where anegative value is given for a preference to mismatched motion anda positive value for a preference to matched motion.Table 3 provides summary statistics and win rates, Figure 4visualises the response distribution and Figure 5 shows significancecomparisons. Our approach ( SJ) ranked second in the submittedsystems. Figure 5 shows that there are few significant differencesbetween pairwise systems. 
Only SGand the natural mocap ( NA)rank significantly better than our system. Again, our system rankssignificantly better than 9 other conditions including the dyadicbaseline system.NA SG SJBM SFSK SISEBD SD SBSASH SLSC0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatchedFigure 4: Bar plots visualising the response distribution inthe appropriateness to speech study. The blue bar (bottom)represents responses where subjects preferred the matchedmotion, the light grey bar (middle) represents tied (“They areequal”) responses, and the red bar (top) represents responsespreferring mismatched motion, with the height of each barbeing proportional to the fraction of responses in each cat-egory. Lighter colours correspond to slight preference, anddarker colours to clear preference. On top of each bar is alsoa confidence interval for the mean appropriateness score,scaled to fit the current axes. The dotted black line indicateschance-level performance. Conditions are ordered by meanappropriateness score. Figure and caption from [11].5.3 Interlocutor AppropriatenessAs this year’s challenge includes awareness of the interlocutorspeech and motion, the appropriateness of the generated main-agent motion to the interlocutor’s speech is also evaluated. Thewas done using a similar technique used for measuring speech ap-propriateness but differed in several important aspects. The test datacontained pairs of interactions, one with matched main-agent andinterlocutor interactions and another with the same main-agentspeech, but mismatched interlocutor speech. Preference can bequantified for generated motion with matched over mismatched in-terlocutor behaviour and we can assess how interlocutor behaviouraffects the motion.Our system ranked 8th in this study but only natural mocap, SA,BDandSLare rated significantly higher than it. There is no othersignificant difference to any other system, except SHwhere we weresignificantly better. We observe from the statistics in Figure 7 thatour system had the lowest number of negative scores (preferencefor the mismatched dyadic interaction), and a large number of nopreference scores.The UEA Digital Humans entry to the GENEA Challenge 2023 ICMI ’23, October 9–13, 2023, Paris, FranceNA SG SJBM SFSK SISEBD SDSBSASH SLSC...over condition x, in terms of appropriateness to speechNASGSJBMSFSKSISEBDSDSBSASHSLSCSignificant preference for condition y...Figure 5: Significance of pairwise differences between con-ditions in the appropriateness to speech evaluation. Whitemeans that the condition listed on the y-axis rated signifi-cantly above the condition on the x-axis, black means theopposite (yrated below x), and grey means no statistically sig-nificant difference at the level α= 0.05 after Holm-Bonferronicorrection. Conditions are listed in the same order as in Table3. Figure and caption from [11].Cond-MASPref. 
Raw response countition matched 2 1 0−1−2 SumNA 0.63±0.08 67.9% 367 272 98 189 88 1014SA 0.09±0.06 53.5% 77 243 444 194 55 1013BD 0.07±0.06 53.0% 74 274 374 229 59 1010SB 0.07±0.08 51.8% 156 262 206 263 119 1006SL 0.07±0.06 53.4% 52 267 439 204 47 1009SE 0.05±0.07 51.8% 89 305 263 284 73 1014SF 0.04±0.06 50.9% 94 208 419 208 76 1005SI 0.04±0.08 50.9% 147 269 193 269 129 1007SD 0.02±0.07 52.2% 85 307 278 241 106 1017BM−0.01±0.06 49.9% 55 212 470 206 63 1006SJ−0.03±0.05 49.1% 31 157 617 168 39 1012SC−0.03±0.05 49.1% 34 183 541 190 45 993SK−0.06±0.09 47.4% 200 227 111 276 205 1019SG−0.09±0.08 46.7% 140 252 163 293 167 1015SH−0.21±0.07 44.0% 55 237 308 270 144 1014Table 4: Summary statistics of user-study responses from theappropriateness to interlocutor study, with confidence inter-vals for the mean appropriateness score (MAS) at the level α= 0.05. “Pref. matched” identifies how often test-takers pre-ferred matched motion in terms of appropriateness, ignoringties. Our model results are highlighted in pink . Table andcaption from [11].NA SABD SBSLSESF SISDBM SJSCSKSGSH...over condition x, in terms of appropriateness to interlocutorNASABDSBSLSESFSISDBMSJSCSKSGSHSignificant preference for condition y...Figure 6: Significance of pairwise differences between con-ditions in the appropriateness to interlocutor study. Whitemeans that the condition listed on the y-axis rated signifi-cantly above the condition on the x-axis, black means theopposite (yrated below x), and grey means no statistically sig-nificant difference at the level α= 0.05 after Holm-Bonferronicorrection. Conditions are listed in the same order as in Fig-ure 4. Figure and caption from [11].NA SABD SB SLSESF SISDBM SJSCSKSGSH0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatchedFigure 7: Bar plots visualising the response distribution in theappropriateness to interlocutor study. The blue bar (bottom)represents responses where subjects preferred the matchedmotion, the light grey bar (middle) represents tied (“They areequal”) responses, and the red bar (top) represents responsespreferring mismatched motion, with the height of each barbeing proportional to the fraction of responses in each cat-egory. Lighter colours correspond to slight preference, anddarker colours to clear preference. On top of each bar is alsoa confidence interval for the mean appropriateness score,scaled to fit the current axes. The dotted black line indicateschance-level performance. Conditions are ordered by meanappropriateness score. Figure and caption from [11].ICMI ’23, October 9–13, 2023, Paris, France Windle, et al.5.4 ObservationsWe observe that the animation generated from our model is smoothand temporally coherent without jitter or sudden shifts in motionwhile maintaining gesture beats in time with speech. Our modelappears to reliably and realistically animate beat gestures. Beatgestures are simple and fast movements of the hands and havea close relationship to prosodic activity such as acoustic energyand pitch [ 20,27]. 
The PASE+ model used for encoding audio inour system was trained to estimate prosodic features as one of itsdownstream tasks, making the derived audio features particularlysuitable for animating beat gestures.We do not expect gestures to occur during every audio beat,but when they happen they should synchronise with the speech.Using the method of motion and audio beat extraction used in thebeat align score calculation presented in Liu et al. [ 13], we canvisualise the onset of audio beats and motion gesture over time.Figure 8 shows two well timed gestures for a 3 second audio clip.The utterance of “programs” shows a beat gesture where duringthe syllable utterance “pro”, the speaker moves their right handfrom right to left and as the stressed syllable “grams” is spoken,the hand begins to change velocity and move from left to right. Wealso see an example of muted speech where our model continues toperform well. As there is no speech, there is little to inform gesture,we find the right arm drops to the side, and left arm lowers slightly.However, as the speech begins again, both arms raise in time withthe speech.A difference between natural mocap motion and our generatedanimation is that the latter does not exhibit sporadic, non-speechrelated motion such as self-adaptor traits. Self-adaptors are move-ments that typically include self-touch, such as scratching of theneck, clasping at an elbow, adjusting hair or interlocking fingers[18]. Despite the indirect relationship between these behavioursand speech, these traits are linked to perceived emotional stabilityof an agent [18] and may influence perceived human-likeness.6 DISCUSSIONOur approach performed well with regards to human-likeness andappropriateness to speech. Our model performed comparably to10 of the other systems with regards to appropriateness to the in-terlocutor’s speech, but clearly it can be improved in this area. Weobserve in Figure 7 and Table 4 that, for our system, participantspreferred the mismatched stimuli least compared to all other sys-tems (including natural mocap). The majority of responses weretied, meaning that they considered the mismatched stimuli to be ofequal appropriateness as the matched animation. It is unclear wherethis uncertainty stems from and more work is required to evaluatethis cause. There may be a lack of influence from the interlocutorspeech in this model architecture. There are many ways to incorpo-rate the interlocutor speech in this model, for example including asan extra input to the self-attention rather than as cross-attentionor altering skip connections. These ideas or simply increasing thenumber of cross-attention layers may improve the performance ofthe appropriateness to the interlocutor.More experiments are also required to determine the impactof including the interlocutor information on human-likeness andappropriateness to speech as well as appropriateness to interlocutor.P r og r a m s<mute> medicalFigure 8: Generated gestures for given audio beats. Using a3s audio clip from the test dataset we show the audio spec-trogram, as well as aligned audio beat onsets and their cor-responding onset strengths as well as motion gesture onsetdetection of the right wrist using the method of beat detec-tion defined in Liu et al. [ 13]. We can see during the syllableutterance “pro”, the speaker moves their right hand handfrom right to left and as the stressed syllable “grams” is spo-ken, the hand begins to move left to right. 
When there issilence, the arms begin to rest and again gesture in the nextutterance.This may have a positive effect on these two evaluations or maylimit performance in these areas.Although our proposed method is deterministic, i.e. the sameinputs will always produce the same outputs, it could be possible toincorporate this design into a probabilistic model. For example, thisapproach could be adjusted to incorporate probabilistic diffusion[8, 19] methods.7 CONCLUSIONWe have presented our submission to the GENEA Challenge 2023,a modified Transformer-XL based approach that utilises both self-attention and cross-attention. Our solution generates smooth, tem-porally coherent animation from the conversational speech of amain-agent and interlocutor. Subjective evaluation results supportthat our system performs well in regards to human-likeness andappropriateness, ranking third and second respectively when com-pared to the 14 other systems and baselines and performing signifi-cantly better than 9 in both evaluations. Our approach continues tobe competitive when evaluating the generated main-agent motion’sappropriateness to the interlocutor, where only the natural mocapand 3 systems performed significantly better.The UEA Digital Humans entry to the GENEA Challenge 2023 ICMI ’23, October 9–13, 2023, Paris, FranceREFERENCES[1]Uttaran Bhattacharya, Nicholas Rewkowski, Abhishek Banerjee, Pooja Guhan,Aniket Bera, and Dinesh Manocha. 2021. Text2gestures: A transformer-basednetwork for generating emotive body gestures for virtual agents. In 2021 IEEEvirtual reality and 3D user interfaces (VR) . IEEE, 1–10.[2]Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017.Enriching word vectors with subword information. Transactions of the associationfor computational linguistics 5 (2017), 135–146.[3]Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and RuslanSalakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860 (2019).[4]Jan P De Ruiter, Adrian Bangerter, and Paula Dings. 2012. The interplay betweengesture and speech in the production of referring expressions: Investigating thetradeoff hypothesis. Topics in Cognitive Science 4, 2 (2012), 232–248.[5]Ylva Ferstl and Rachel McDonnell. 2018. Investigating the use of recurrent motionmodelling for speech gesture generation. In Proceedings of the 18th InternationalConference on Intelligent Virtual Agents . 93–98.[6]Saeed Ghorbani, Ylva Ferstl, Daniel Holden, Nikolaus F Troje, and Marc-AndréCarbonneau. 2022. ZeroEGGS: Zero-shot Example-based Gesture Generationfrom Speech. arXiv preprint arXiv:2209.07556 (2022).[7]Dai Hasegawa, Naoshi Kaneko, Shinichi Shirakawa, Hiroshi Sakuta, and KazuhikoSumi. 2018. Evaluation of speech-to-gesture generation using bi-directional LSTMnetwork. In Proceedings of the 18th International Conference on Intelligent VirtualAgents . 79–86.[8]Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilisticmodels. Advances in Neural Information Processing Systems 33 (2020), 6840–6851.[9]Adam Kendon. 1994. Do gestures communicate? A review. Research on languageand social interaction 27, 3 (1994), 175–200.[10] Jihoon Kim, Jiseob Kim, and Sungjoon Choi. 2022. Flame: Free-form language-based motion synthesis & editing. arXiv preprint arXiv:2209.00349 (2022).[11] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2023. 
The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. InProceedings of the ACM International Conference on Multimodal Interaction (ICMI’23). ACM.[12] Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha S Srinivasa,and Yaser Sheikh. 2019. Talking with hands 16.2 m: A large-scale dataset of syn-chronized body-finger motion and audio for conversational motion analysis andsynthesis. In Proceedings of the IEEE/CVF International Conference on ComputerVision . 763–772.[13] Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou,Elif Bozkurt, and Bo Zheng. 2022. BEAT: A Large-Scale Semantic and Emo-tional Multi-Modal Dataset for Conversational Gestures Synthesis. In ComputerVision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022,Proceedings, Part VII . Springer, 612–630.[14] Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization.arXiv preprint arXiv:1711.05101 (2017).[15] Shuhong Lu and Andrew Feng. 2022. The DeepMotion entry to the GENEAChallenge 2022. In Proceedings of the 2022 International Conference on MultimodalInteraction . 790–796.[16] David McNeill. 1985. So you think gestures are nonverbal? Psychological review92, 3 (1985), 350.[17] Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Ar-mand Joulin. 2018. Advances in Pre-Training Distributed Word Representations.InProceedings of the International Conference on Language Resources and Evalua-tion (LREC 2018) .[18] Michael Neff, Nicholas Toothman, Robeson Bowmani, Jean E Fox Tree, andMarilyn A Walker. 2011. Don’t scratch! Self-adaptors reflect emotional stability.InInternational Workshop on Intelligent Virtual Agents . Springer, 398–411.[19] Alexander Quinn Nichol and Prafulla Dhariwal. 2021. Improved denoising diffu-sion probabilistic models. In International Conference on Machine Learning . PMLR,8162–8171.[20] Wim Pouw, Steven J Harrison, Núria Esteve-Gibert, and James A Dixon. 2020.Energy flows in gesture-speech physics: The respiratory-vocal system and itscoupling with hand gestures. The Journal of the Acoustical Society of America 148,3 (2020), 1231–1247.[21] Mirco Ravanelli, Jianyuan Zhong, Santiago Pascual, Pawel Swietojanski, JoaoMonteiro, Jan Trmal, and Yoshua Bengio. 2020. Multi-task self-supervised learn-ing for robust speech recognition. In ICASSP 2020-2020 IEEE International Confer-ence on Acoustics, Speech and Signal Processing (ICASSP) . IEEE, 6989–6993.[22] Michael Studdert-Kennedy. 1994. Hand and Mind: What Gestures Reveal AboutThought. Language and Speech 37, 2 (1994), 203–209.[23] Kenta Takeuchi, Dai Hasegawa, Shinichi Shirakawa, Naoshi Kaneko, HiroshiSakuta, and Kazuhiko Sumi. 2017. Speech-to-gesture generation: A challengein deep learning approach with bi-directional LSTM. In Proceedings of the 5thInternational Conference on Human Agent Interaction . 365–369.[24] Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, andAmit H Bermano. 2022. Human motion diffusion model. arXiv preprintarXiv:2209.14916 (2022).[25] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is allyou need. Advances in neural information processing systems 30 (2017).[26] Jonathan Windle, David Greenwood, and Sarah Taylor. 2022. UEA Digital Humansentry to the GENEA Challenge 2022. In Proceedings of the 2022 InternationalConference on Multimodal Interaction . 
771–777. [27] Jonathan Windle, Sarah Taylor, David Greenwood, and Iain Matthews. 2022. Arm motion symmetry in conversation. Speech Communication 144 (2022), 75–88. [28] Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, and Hao Li. 2019. On the Continuity of Rotation Representations in Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). |
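Returning to the composite loss Lc of Equation (1) above, the following PyTorch sketch reconstructs it with the weights reported in Table 1 (λr = 1, λp = 0.01, λv = λa = 0.5, λv2 = 0.2). First-order finite differences stand in for f′ and f′′, and the position tensors are assumed to have already been obtained by forward kinematics from the predicted 6D rotations and the fixed per-speaker bone lengths; this is an illustrative reconstruction, not the authors' implementation, and the tensor sizes in the example are arbitrary.

```python
import torch
import torch.nn.functional as F

# Loss weights as reported in Table 1 of the paper.
LAMBDA = {"r": 1.0, "p": 0.01, "v": 0.5, "a": 0.5, "v2": 0.2}


def time_diff(x):
    """First-order finite difference along the time axis.
    x: (batch, frames, features)."""
    return x[:, 1:] - x[:, :-1]


def composite_loss(rot_pred, rot_true, pos_pred, pos_true, lam=LAMBDA):
    """L_c = lambda_p*L_p + lambda_v*L_v + lambda_a*L_a
           + lambda_r*L_r + lambda_v2*L_v2, with every term an L1 loss.
    Rotations are the 6D joint rotations; positions are assumed to come
    from forward kinematics applied to the predicted rotations."""
    l_r = F.l1_loss(rot_pred, rot_true)                         # rotations
    l_p = F.l1_loss(pos_pred, pos_true)                         # positions

    vel_pred, vel_true = time_diff(pos_pred), time_diff(pos_true)
    l_v = F.l1_loss(vel_pred, vel_true)                         # velocity
    l_v2 = F.l1_loss(vel_pred ** 2, vel_true ** 2)              # kinetic-energy proxy
    l_a = F.l1_loss(time_diff(vel_pred), time_diff(vel_true))   # acceleration

    return (lam["p"] * l_p + lam["v"] * l_v + lam["a"] * l_a
            + lam["r"] * l_r + lam["v2"] * l_v2)


# Example with random tensors: batch of 2, 90-frame segments,
# 25 joints x 6D rotations and 25 joints x 3D positions (sizes illustrative).
rot_pred, rot_true = torch.randn(2, 90, 150), torch.randn(2, 90, 150)
pos_pred, pos_true = torch.randn(2, 90, 75), torch.randn(2, 90, 75)
loss = composite_loss(rot_pred, rot_true, pos_pred, pos_true)
```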
mK2qMNf0_Nd | Co-Speech Gesture Generation via Audio and Text FeatureEngineeringGeunmo KimKorea Electronics TechnologyInstituteRepublic of Korearootmo96@keti.re.krJaewoong YooKorea Electronics TechnologyInstituteRepublic of Koreajaewoong.yoo@keti.re.krHyedong JungKorea Electronics TechnologyInstituteRepublic of Koreahudson@keti.re.krABSTRACTIn recent years, the field of human-computer interaction (HCI) re-search has seen increasing efforts to model social intelligence andbehavior based on artificial intelligence. For human-agent commu-nication to evolve in a human-way, non-verbal features can beused as important factors. We conducted our research as part ofthe GENEA Challenge 2023[ 13], where the task is to generate hu-man gestures using these non-verbal elements. We applied twomain approaches to generating natural gestures. First, we modi-fied the provided baseline model to apply RoBERTa-based speechtranscription embedding, and second, we designed a gesture gen-eration model by adding a zero-crossing rate and rhythmical fea-tures to the input features. The gestures generated by this methodwere evaluated as unnatural in terms of human-like and confor-mity. However, through this, we will study the SOTA model struc-ture of gesture generation in the future and apply various prepro-cessing methods to the input data to generate natural gestures.CCS CONCEPTS•Human-centered computing →Human computer interac-tion (HCI) .KEYWORDSHuman-Computer Interaction (HCI), Gesture Generation, Deep Learn-ing, Multimodal LearningACM Reference Format:Geunmo Kim, Jaewoong Yoo, and Hyedong Jung. 2023. Co-Speech GestureGeneration via Audio and Text Feature Engineering. In INTERNATIONALCONFERENCE ON MULTIMODAL INTERACTION (ICMI ’23 Companion), Oc-tober 9–13, 2023, Paris, France. ACM, New York, NY, USA, 6pages. https://doi.org/10.1145/3610661.36165531 INTRODUCTIONIn recent years, the field of Human-Computer Interaction (HCI) re-search has seen an increase in efforts to model social intelligenceand behavior based on artificial intelligence[ 2,3]. According to Al-bert Mehrabian’s Three elements of communication[ 20], humansPermission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full cita-tion on the first page. Copyrights for components of this work owned by others thanthe author(s) must be honored. Abstracting with credit is permitted. To copy other-wise, or republish, to post on servers or to redistribute to lists, requires prior specificpermission and/or a fee. Request permissions from permissions@acm.org.ICMI ’23 Companion, October 9–13, 2023, Paris, France© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.ACM ISBN 979-8-4007-0321-8/23/10...$15.00https://doi.org/10.1145/3610661.3616553rely more on para-verbal and non-verbal elements of communica-tion than on verbal elements. In order for human-agent commu-nication to evolve towards the human-way, para-verbal and non-verbal behavioral cues can be used as important elements. Peopleusually express social signals and behaviors through non-verbalbehavioral cues such as facial expressions, body postures and ges-tures, or para-verbal behavioral cues such as tone and pitch fromvocal sounds[ 26]. According to Vinciarelli et al. (2009)[ 26], 90% ofnonverbal behavioral cues are associated with speech. 
Therefore,assuming that a matching gesture exists based on audio and speechdata, we will participate in the GENEA Challenge 2023 and pro-ceed with the co-speech gesture generation task. The generated co-speech gestures can be utilized for multi-modal fusion by consid-ering matching and combining verbal, para-verbal, and non-verbalfeatures in future research on human-agent communication.In traditional gesture generation research, motion system frame-works have been proposed as concatenative approaches such asmotion graphs[ 10]. In recent years, learning-based approaches havebeen used to generate high-quality and interactive gestures by uti-lizing neural networks such as FFNNs, RNNs, GANs, and VAEs[ 6,8,11,22,24]. There are also studies on gesture generation tasksusing text, speaker identity and style, and personality parametersas input features for generation models[ 1,12,23,27]. In GENEAChallenge 2023, our team applied two main approaches to achievea more natural and appropriate matching with speech. First, wemodified the provided baseline model with RoBERTa-based embed-ding for speech transcription, and second, we designed a gesturegeneration model by adding a zero-crossing rate and rhythmicalfeature as additional audio features to the input features.As a result, it was evaluated as unnatural for human-likenessand appropriateness. After checking with a 3D animation tool, wefound that there were some natural gestures, but most of themwere inappropriate for speech. Through this experiment, we real-ized that using more features does not always lead to better gener-ation performance.2 BACKGROUND AND PRIOR WORK2.1 Data-driven gesture generation researchData-driven gesture generation models are models that learn froma large amount of data, such as audio, text, and pose data, and gen-erate gestures that correspond to the data. There are a variety ofstudies [ 7][18][19][29] that use data-driven generative models togenerate gestures.Habibie, Ikhsanul, et al [ 7] combined the benefits of databasematching and adversarial learning to generate 3D gestures. The pa-per used the k-Nearest Neighbors (k-NN) algorithm to consider theICMI ’23 Companion, October 9–13, 2023, Paris, France Kim and Yoo et al.similarity between the correct audio-pose data stored in the data-base and the input data. Based on this, the correct audio-pose datastored in the database is sequentially searched to find the data withthe highest similarity to the input data. Then, a Conditional Gener-ative Adversarial Network (cGAN) model[ 21] was used to generategestures corresponding to the input data. Unlike the GAN model,the cGAN model can use additional information such as the labelof the input data to generate the desired data while the generatorand discriminator are training. Therefore, the paper used the re-sults of the k-NN algorithm as additional information to generategestures corresponding to the input data.Lu, Shuhong, et al [ 18] used the encoder structure of Liu, Xian,et al [ 17] to extract features from text and audio, and the Vector-Quantized Variational AutoEncoder (VQ-VAE) model[ 25] to extractgesture features. The VQ-VAE model is a model that applies vectorquantization (VQ) to the VAE model. Vector quantization is a tech-nique that uses an algorithm similar to K-means clustering to re-place continuous probability values with discrete values. By doingso, we converted the latent values of the gesture data into low-dimensional vectors. 
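To make the vector-quantization step described above concrete, the following is a minimal sketch of a VQ codebook lookup with a commitment loss. It is not the implementation used in the cited works; the codebook size, latent dimensionality, and commitment weight are illustrative placeholders.

```python
# Minimal sketch of the vector-quantization step used in VQ-VAE-style models
# (not the implementation from the cited papers); codebook size and latent
# dimensionality are illustrative placeholders.
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    def __init__(self, num_codes: int = 512, code_dim: int = 64, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # weight of the commitment term

    def forward(self, z_e: torch.Tensor):
        # z_e: continuous encoder output, shape (batch, time, code_dim)
        flat = z_e.reshape(-1, z_e.size(-1))                      # (B*T, D)
        # Squared Euclidean distance to every codebook vector
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))             # (B*T, K)
        indices = dist.argmin(dim=1)                              # discrete gesture tokens
        z_q = self.codebook(indices).view_as(z_e)                 # quantized latents
        # Codebook + commitment losses; straight-through estimator below lets
        # gradients flow to the encoder despite the discrete lookup
        loss = ((z_q - z_e.detach()).pow(2).mean()
                + self.beta * (z_q.detach() - z_e).pow(2).mean())
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices.view(z_e.shape[:-1]), loss


if __name__ == "__main__":
    vq = VectorQuantizer()
    z_e = torch.randn(2, 30, 64)          # e.g. 30 frames of 64-dim gesture latents
    z_q, tokens, vq_loss = vq(z_e)
    print(z_q.shape, tokens.shape, vq_loss.item())
```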
As a result, we generate gestures similar tothe input data by learning low-dimensional latent variables thatbetter represent the features of the gesture data.Lu, Shuhong, et al [ 19] considered the problem that when gener-ating gestures based on speech data, multiple gestures may be gen-erated for the same speech data. To solve this problem, they used in-dividual gesture tokens and a Residual-Quantized Variational Au-doencoder (RQ-VAE) model[ 14]. By using discrete gesture tokens,we solved the mapping problem of gesture generation by assign-ing different probabilities to different gestures generated based onthe same speech data. We also used the RQ-VAE model to train thediscrete gesture tokens. The RQ-VAE model recursively discretizesthe latent variables in the input data to reduce the loss of infor-mation as the encoding progresses. This resulted in higher-qualitygestures.Zhang, Fan, et al [ 29] proposed the DiffMotion model based onthe diffusion model for gesture generation. The DiffMotion modelconsists of an Autoregressive Temporal Encoder (AT-Encoder) anda Denoising Diffusion Probabilistic Module (DDPM). The AT-Encoderuses a multi-layer LSTM structure to encode the temporal contextof the speech data. Then, through the diffusion and generation pro-cess of the DDPM model, it learned a one-to-many mapping of in-put data and gestures and generated new gestures.2.2 Multimodal gesture generation researchMultimodal-based research utilizes various types of data throughmultiple modalities to overcome the limitations of using only a sin-gle type of data for learning. Feature vectors are extracted usinga deep learning structure suitable for each modality, and multipletasks are performed based on them. Multimodal-based gesture gen-eration research uses audio, text, and pose data as input data foreach modality to extract feature vectors and utilize them to gener-ate gestures that correspond to the input data. Various studies usethis multimodal structure to generate gestures.Kim, Gwantae, et al [ 9] proposed a new framework, MultimodalPretrained Encoder for Feature generation (MPE4G), to generatenatural gestures using (speech, text, motion) as input data for mul-timodal structures. This framework solves the problem of inaccu-rate gesture generation when there is noise in the input data usedfor training. To achieve this, the proposed framework consists ofthree main steps. First, a frame-by-frame embedder and generatorare trained with joint embedding loss and reconstruction loss. Sec-ond, a multimodal encoder is trained with a self-supervised learn-ing approach. Third, the embedder, encoder, decoder, and genera-tor are jointly trained using supervised learning. Based on thesecomponents, we not only achieved good performance in gesturegeneration but also solved problems such as noise in the input dataand generated natural gestures that respond to the input data.3 METHODOur model structure for gesture generation is based on [ 4]. Ourmodel structure consists of an encoder, an attachment, and a de-coder, as shown in the following figure 1.The encoder consists of character embedding, three 1d convolu-tion layers, and a bi-directional LSTM. When a one-hot vector isinput, it is converted into an embedding vector through characterembedding. It is then converted to an encoded feature through aconvolutional layer and a bi-directional LSTM. 
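As a rough illustration of the encoder just described (character embedding, three 1-D convolution layers, and a bi-directional LSTM), the following PyTorch sketch shows the data flow. The vocabulary size, embedding width, kernel size, and hidden size are assumptions rather than the exact settings used in our model.

```python
# A minimal sketch of a Tacotron2-style encoder: character embedding ->
# three 1-D convolutions -> bi-directional LSTM. Layer sizes are assumptions.
import torch
import torch.nn as nn


class GestureEncoder(nn.Module):
    def __init__(self, vocab_size: int = 256, emb_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        convs = []
        for _ in range(3):
            convs += [nn.Conv1d(emb_dim, emb_dim, kernel_size=5, padding=2),
                      nn.BatchNorm1d(emb_dim),
                      nn.ReLU(),
                      nn.Dropout(0.5)]
        self.convs = nn.Sequential(*convs)
        # Bi-directional LSTM returns 2 * hidden features per time step
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, time) integer ids (e.g. argmax of a one-hot input)
        x = self.embedding(tokens)            # (B, T, emb_dim)
        x = self.convs(x.transpose(1, 2))     # Conv1d expects (B, C, T)
        encoded, _ = self.lstm(x.transpose(1, 2))
        return encoded                        # (B, T, 2 * hidden) encoded features


if __name__ == "__main__":
    enc = GestureEncoder()
    out = enc(torch.randint(0, 256, (2, 100)))
    print(out.shape)  # torch.Size([2, 100, 512])
```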
Attention is the pro-cess of aligning what information to get from the encoder by usingthe encoded features from the encoder and the features generatedat the previous point in the decoder’s LSTM. In our model, we usea locality constraint attention like [ 4]. The decoder consists of two(Fully connected layer + ReLU), a uni-directional LSTM, a Fullyconnected layer, and five convolutional layers. The alignment fea-ture information obtained through attention and the gesture fea-ture generated at the previous time is used to generate the gesturefeature at the next time. Through this process, gestures correspond-ing to the input data are generated.For gesture generation, we built on the aforementioned modelstructure and focused on input features. First, to vary the text fea-tures, we used RoBERTa-based (784 dimensions) pretrained withword embeddings. Next, we used mfcc, mel-spectrogram, pitch,and energy, which are commonly used audio features, as well aszero-crossing rate and rhythmical features.We used two NVIDIA A100-SXM4-80GB GPUs to train the afore-mentioned models. For both Monadic and Dyadic, we trained fora total of 25,000 iterations and set the learning rate to 1e-4. Wealso used a weight decay value of 1e-6 and a batch size of 64 tomatch the GPU memory. For the optimizer and loss function usedfor training, we used the most popular Adam optimizer and MSEloss function.3.1 Data and data processingWe trained our model using a dataset [ 15] provided by GENEAChallenge 2023. The dataset is based on the Talking With Hands16.2M gesture dataset, which are audio and motion capture dataof several pairs of people talking freely about various topics. Thedataset consists of 372 training datasets and 41 validation datasets.The training and validation datasets contain motion capture data(BVH format), audio (WAV format), and transcript (CSV format)data corresponding to the motion, and speaker id (CSV format)data, respectively. Since GENEA Challenge 2023[ 13] considers notCo-Speech Gesture Generation via Audio and Text Feature Engineering ICMI ’23 Companion, October 9–13, 2023, Paris, FranceFigure 1: Our Proposed Architectureonly monadic but also dyadic situations, unlike GENEA Challenge2022[ 28], the training and validation datasets include the main-agent and additionally the interlocutor.3.1.1 Motion. We extracted features from the motion using PyMolibrary for gesture generation. The motion FPS is 30. The team usedan exponential map[ 5] to represent 3D motion. Unlike GENEAChallenge 2022, GENEA Challenge 2023 evaluates only the fullbody[ 13]. Therefore, we utilised the motion features correspond-ing to the full body using the root position and 19 keypoints in theupper body and 6 keypoints in the lower body. Therefore, the fullbody has 78 dimensions.3.1.2 Audio. We extracted several features from the audio for ges-ture generation. The sample rate of the audio is 44100 Hz. First,we used mfcc, mel-spectrogram, and prosody (energy, pitch) fea-tures, which are widely used in gesture generation research[ 16].We also used zero-crossing rate and rhythmical feature in additionto the aforementioned features because we believe that gesturesare highly related to audio. In the case of zero-crossing rate, the di-rection and shape of the gesture can be determined, so we thoughtthat audio with a high zero-crossing rate could be used to generategently waving gestures, etc. 
In the case of rhythmical feature, wethought that if the rhythm of the audio is uniform, the correspond-ing gesture will also have a smooth shape.The characteristics of the six features mentioned above are asfollows. For mfcc, mel-spectrogram, zero-crossing rate, and rhyth-mical features, the Librosa library was used. The prosody featurewas extracted using the Parselmouth library. mfcc, mel-spectrogram,zero-crossing rate, and rhythmical features were all extracted us-ing a hop length of 1470 on the audio. The mel-spectrogram wasextracted by specifying the number of filter banks as 64, and themfcc was extracted using 40 dimensions. Thus, the features ex-tracted from the audio for model training are mfcc (40 dimensions),mel-spectrogram (64 dimensions), prosody (4 dimensions), zero-crossing rate (1 dimension), and rhythmical feature (384 dimen-sions).3.1.3 Text. We used pretrained word embedding to extract fea-tures from the text for gesture generation. For word embedding, weused the RoBERTa-based model (784 dimensions). The RoBERTa-based model is a Transformer-based language model that performsbetter than BERT by applying several improvements. Unlike BERT,it does not use masking during the training process, which short-ens the training time and improves performance. It also showsbetter generalization performance by using layer regularization,which is one of the techniques to prevent model overfitting dur-ing the training process. We used the RoBERTa-based model asour word embedding model.The text features used to train the model were extracted usingthe transcripts contained in the provided dataset. Each text datawas preprocessed with a word embedding model, and all OOVwords were zeroed. In addition, we used metadata information suchas the speaker’s ID and the presence or absence of finger joints.4 EVALUATIONGENEA Challenge 2023 was slightly different from GENEA Chal-lenge 2022 in that it was evaluated on three different aspects:•Human-likeness : How human-like the gestures are, regard-less of the speech•Appropriateness for agent speech : Evaluation of natu-ral gestures for speech of the interlocutor, while consideringhuman-likeness.•Appropriateness for the interlocutor : Evaluate whetherthe interlocutor shows appropriate gestures to match thespeech of the interlocutor, while considering human-likeness.4.1 Result and DiscussionThe test dataset used to compare and analyze the performance ofour gesture generation model was provided by GENEA Challenge2023. Unlike GENEA Challenge 2022, we also considered dyadicsituations, so the dataset used to generate gestures for the main-agent includes motion, audio, and text data for the interlocutor.We submitted the motion data generated using the test dataset toGENEA Challenge 2023 for evaluation and received the followingevaluation results.4.1.1 Human-likeness. Table 1 shows the results of the human-likeness evaluation. Our submission falls into the SC submission,ICMI ’23 Companion, October 9–13, 2023, Paris, France Kim and Yoo et al.and as can be seen in Table 1, it was evaluated as an unnaturalgesture in terms of human-likeness. To analyze these results, wevisualized some of the gestures generated by our model using a3D animation tool called Blender. When we checked the visual-ized gestures, we found that our model produced several unnatu-ral gestures, such as the gesture with the right arm fixed (left inFigure 2) and the gesture with the right arm bent behind the head(right in Figure 2), as shown in Figure 2. 
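The audio feature extraction described in Section 3.1.2 can be sketched as follows with Librosa. The tempogram standing in for the 384-dimensional rhythmical feature is our assumption, the prosody features (extracted with Parselmouth) are omitted for brevity, and "speech.wav" is a placeholder path.

```python
# A sketch of the audio feature extraction described above, using Librosa with
# a hop length of 1470 samples at 44.1 kHz (i.e. one feature frame per 30-fps
# motion frame). The tempogram is an assumed stand-in for the rhythmical
# feature; prosody (pitch/energy) is omitted here.
import librosa
import numpy as np

SR, HOP = 44100, 1470  # 44100 / 1470 = 30 feature frames per second

def extract_audio_features(path: str) -> np.ndarray:
    y, _ = librosa.load(path, sr=SR)
    mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=40, hop_length=HOP)           # (40, T)
    mel = librosa.feature.melspectrogram(y=y, sr=SR, n_mels=64, hop_length=HOP)  # (64, T)
    zcr = librosa.feature.zero_crossing_rate(y, hop_length=HOP)                  # (1, T)
    rhythm = librosa.feature.tempogram(y=y, sr=SR, hop_length=HOP,
                                       win_length=384)                           # (384, T)
    feats = [mfcc, librosa.power_to_db(mel), zcr, rhythm]
    n_frames = min(f.shape[1] for f in feats)   # frame counts can differ by one
    # Stack to (T, 489): 40 + 64 + 1 + 384 feature dimensions per frame
    return np.concatenate([f[:, :n_frames] for f in feats], axis=0).T

if __name__ == "__main__":
    features = extract_audio_features("speech.wav")
    print(features.shape)
```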
This confirmed that ourmodel produced a large number of unnatural gestures, as shownin Table 1. We also confirmed that simply increasing the numberof input features, which was the focus of our research, can havea detrimental effect on the model’s ability to generate gestures bylearning unnecessary information.Condi- Human-likenesstion Median MeanNA 71∈ [70,71]68 .4±1.0SG 69∈ [67,70]65 .6±1.4SF 65∈ [64,67]63 .6±1.3SJ 51∈ [50,53]51 .8±1.3SL 51∈ [50,51]50 .6±1.3SE 50∈ [49,51]50 .9±1.3SH 46∈ [44,49]45 .1±1.5BD 46∈ [43,47]45 .3±1.4SD 45∈ [43,47]44 .7±1.3BM 43∈ [42,45]42 .9±1.3SI 40∈ [39,43]41 .4±1.4SK 37∈ [35,40]40 .2±1.5SA 30∈ [29,31]32 .0±1.3SB 24∈ [23,27]27 .4±1.3SC 9∈ [ 9,9]11 .6±0.9Table 1: The table of statistics for the human-likeness evalu-ation, with confidence intervals at the level α=0.05. Condi-tions are ordered by decreasing sample median rating.Figure 2: Visualisation of the unnatural generated gestures4.1.2 Appropriateness. Table 2 shows the evaluation results in termsof appropriateness for speech. For our submission, SC, the eval-uation result is an unnatural gesture that is not appropriate forspeech in terms of appropriateness to speech. As with human-likeness,we visualized the generated gestures to analyze the evaluation re-sults. When we checked the visualized gestures, we found that inmany cases we were unable to generate gestures that correspondedto the speech. The evaluation results and visualizations confirmedthat the zero-crossing rate and rhythmical features, which we usedas additional input features, require different preprocessing.Condi- 2*MAS Pref. Raw response counttion matched 2 1 0 −1−2 SumNA 0.81±0.06 73.6% 755 452 185 217 157 1766SG 0.39±0.07 61.8% 531 486 201 330 259 1807SJ 0.27±0.06 58.4% 338 521 391 401 155 1806BM 0.20±0.05 56.6% 269 559 390 451 139 1808SF 0.20±0.06 55.8% 397 483 261 421 249 1811SK 0.18±0.06 55.6% 370 491 283 406 252 1802SI 0.16±0.06 55.5% 283 547 342 428 202 1802SE 0.16±0.05 54.9% 221 525 489 453 117 1805BD 0.14±0.06 54.8% 310 505 357 422 220 1814SD 0.14±0.06 55.0% 252 561 350 459 175 1797SB 0.13±0.06 55.0% 320 508 339 386 262 1815SA 0.11±0.06 53.6% 238 495 438 444 162 1777SH 0.09±0.07 52.9% 384 438 258 393 325 1798SL 0.05±0.05 51.7% 200 522 432 491 170 1815SC −0.02±0.04 49.1% 72 284 1057 314 76 1803Table 2: The table of statistics for the speech appropriatenessevaluation, with confidence intervals for the mean appropri-ateness score (MAS) at the level α=0.05. “Pref. matched”identifies how often test-takers preferred matched motionin terms of appropriateness, ignoring ties.Condi- 2*MAS Pref. Raw response counttion matched 2 1 0 −1−2 SumNA 0.63±0.08 67.9% 367 272 98 189 88 1014SA 0.09±0.06 53.5% 77 243 444 194 55 1013BD 0.07±0.06 53.0% 74 274 374 229 59 1010SB 0.07±0.08 51.8% 156 262 206 263 119 1006SL 0.07±0.06 53.4% 52 267 439 204 47 1009SE 0.05±0.07 51.8% 89 305 263 284 73 1014SF 0.04±0.06 50.9% 94 208 419 208 76 1005SI 0.04±0.08 50.9% 147 269 193 269 129 1007SD 0.02±0.07 52.2% 85 307 278 241 106 1017BM −0.01±0.06 49.9% 55 212 470 206 63 1006SJ −0.03±0.05 49.1% 31 157 617 168 39 1012SC −0.03±0.05 49.1% 34 183 541 190 45 993SK −0.06±0.09 47.4% 200 227 111 276 205 1019SG −0.09±0.08 46.7% 140 252 163 293 167 1015SH −0.21±0.07 44.0% 55 237 308 270 144 1014Table 3: The table of statistics for the evaluation of appropri-ateness for the interlocutor, with confidence intervals forthe mean appropriateness score (MAS) at the level α=0.05.“Pref. 
matched” identifies how often test-takers preferredmatched motion in terms of appropriateness, ignoring ties.Co-Speech Gesture Generation via Audio and Text Feature Engineering ICMI ’23 Companion, October 9–13, 2023, Paris, FranceTable 3 shows the results of our evaluation in terms of appropri-ateness, i.e., the ability to generate gestures that match the speechas information about the interlocutor is added. To analyze the eval-uation results, we visualized the gestures generated by our model.We found that our model did not generate appropriate gesturesfor the interlocutor, but unnatural gestures that were not relatedto the interlocutor’s information, such as monadic situations. Wethought that this could be improved by resolving the aforemen-tioned issues of human-likeness and appropriateness.After analyzing the results of the previous evaluation, we foundthat gesture generation based on input features, which is the fo-cus of our research, requires appropriate preprocessing for eachfeature rather than simply adding features. Although most of theevaluation results show unnatural gestures, we believe that our re-search has the potential for further development.5 CONCLUSION AND FUTURE WORKWe conducted a study to generate gestures according to input data(motion, audio, text) based on the model structure of [ 4]. As men-tioned earlier, we conducted experiments by changing the wordembedding and adding audio features based on the existing modelstructure. We did not focus on improving the performance of thegesture generation model, but rather on checking how gesturesare generated according to the input features. After training ourmodel in this way, we found that it produced low-quality gestureswhen evaluated. Through these results, we confirmed that the pre-processing method for each feature is important, not just increas-ing the number of input features, and we have the following plansto improve the performance of gesture generation by conductingexperiments with various research methods.We will conduct experiments by changing SOTA models suchas diffusion, RQ-VAE, and detailed hyper-parameters instead ofsimply using the model structure used in the past. We will alsoconduct experiments in a different way to compare and analyzethe performance of gesture generation according to the input fea-tures we focused on. In the past, we simply added features to learn,but in the future, we will conduct experiments by segmenting thefeatures of motion, audio, and text. For example, we will conductexperiments using only motion features, only audio features, anda combination of motion and audio features to see which featureshave the most impact on gesture generation.ACKNOWLEDGMENTSThis work was supported by Institute of Information communica-tions Technology Planning Evaluation (IITP) grant funded by theKorea government(MSIT) (2022-0-00043,Adaptive Personality forIntelligent Agents)REFERENCES[1]Uttaran Bhattacharya, Elizabeth Childs, Nicholas Rewkowski, and DineshManocha. 2021. Speech2affectivegestures: Synthesizing co-speech gestures withgenerative adversarial affective expression learning. In Proceedings of the 29thACM International Conference on Multimedia . 2027–2036.[2]Jeffrey M Bradshaw, Paul Feltovich, and Matthew Johnson. 2017. Human-agentinteraction. Handbook of human-machine interaction (2017), 283–302.[3]Cristiano Castelfranchi. 1998. Modelling social action for AI agents. Artificialintelligence 103, 1-2 (1998), 157–182.[4]Che-Jui Chang, Sen Zhang, and Mubbasir Kapadia. 2022. 
The IVI Lab entry tothe GENEA Challenge 2022–A Tacotron2 based method for co-speech gesturegeneration with locality-constraint attention mechanism. In Proceedings of the2022 International Conference on Multimodal Interaction . 784–789.[5]F Sebastian Grassia. 1998. Practical parameterization of rotations using the ex-ponential map. Journal of graphics tools 3, 3 (1998), 29–48.[6]David Greenwood, Stephen Laycock, and Iain Matthews. 2017. Predicting headpose from speech with a conditional variational autoencoder. ISCA.[7]Ikhsanul Habibie, Mohamed Elgharib, Kripasindhu Sarkar, Ahsan Abdullah, Sim-barashe Nyatsanga, Michael Neff, and Christian Theobalt. 2022. A motionmatching-based framework for controllable gesture synthesis from speech. InACM SIGGRAPH 2022 Conference Proceedings . 1–9.[8]Daniel Holden, Taku Komura, and Jun Saito. 2017. Phase-functioned neural net-works for character control. ACM Transactions on Graphics (TOG) 36, 4 (2017),1–13.[9]Gwantae Kim, Seonghyeok Noh, Insung Ham, and Hanseok Ko. 2023. MPE4G:Multimodal Pretrained Encoder for Co-Speech Gesture Generation. In ICASSP2023-2023 IEEE International Conference on Acoustics, Speech and Signal Process-ing (ICASSP) . IEEE, 1–5.[10] Lucas Kovar, Michael Gleicher, and Frédéric Pighin. 2008. Motion graphs. InACM SIGGRAPH 2008 classes . 1–10.[11] Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, and Hed-vig Kjellström. 2019. Analyzing input and output representations for speech-driven gesture generation. In Proceedings of the 19th ACM International Confer-ence on Intelligent Virtual Agents . 97–104.[12] Taras Kucherenko, Patrik Jonell, Sanne Van Waveren, Gustav Eje Henter, Si-mon Alexandersson, Iolanda Leite, and Hedvig Kjellström. 2020. Gesticulator:A framework for semantically-aware speech-driven gesture generation. In Pro-ceedings of the 2020 international conference on multimodal interaction . 242–250.[13] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, TeodorNikolov, Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge2023: A large-scale evaluation of gesture generation models in monadic anddyadic settings. In Proceedings of the ACM International Conference on Multi-modal Interaction (ICMI ’23) . ACM.[14] Doyup Lee, Chiheon Kim, Saehoon Kim, Minsu Cho, and Wook-Shin Han. 2022.Autoregressive image generation using residual quantization. In Proceedings ofthe IEEE/CVF Conference on Computer Vision and Pattern Recognition . 11523–11532.[15] Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha S Srinivasa,and Yaser Sheikh. 2019. Talking with hands 16.2 m: A large-scale dataset of syn-chronized body-finger motion and audio for conversational motion analysis andsynthesis. In Proceedings of the IEEE/CVF International Conference on ComputerVision . 763–772.[16] Carson Liu. 2023. Speech-Driven Gesture Generation of Social Robot and EmbodiedAgents . Ph. D. Dissertation. UNSW Sydney.[17] Xian Liu, Qianyi Wu, Hang Zhou, Yinghao Xu, Rui Qian, Xinyi Lin, XiaoweiZhou, Wayne Wu, Bo Dai, and Bolei Zhou. 2022. Learning hierarchicalcross-modal association for co-speech gesture generation. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition . 10462–10472.[18] Shuhong Lu and Andrew Feng. 2022. The DeepMotion entry to the GENEAChallenge 2022. In Proceedings of the 2022 International Conference on MultimodalInteraction . 790–796.[19] Shuhong Lu, Youngwoo Yoon, and Andrew Feng. 2023. Co-Speech Gesture Syn-thesis using Discrete Gesture Token Learning. 
arXiv preprint arXiv:2303.12822(2023).[20] Albert Mehrabian. 2017. Nonverbal communication . Routledge.[21] Mehdi Mirza and Simon Osindero. 2014. Conditional generative adversarial nets.arXiv preprint arXiv:1411.1784 (2014).[22] Najmeh Sadoughi and Carlos Busso. 2018. Novel realizations of speech-drivenhead movements with generative adversarial networks. In 2018 IEEE Interna-tional Conference on Acoustics, Speech and Signal Processing (ICASSP) . IEEE, 6169–6173.[23] Sinan Sonlu, Uğur Güdükbay, and Funda Durupinar. 2021. A conversationalagent framework with multi-modal personality expression. ACM Transactionson Graphics (TOG) 40, 1 (2021), 1–16.[24] Sebastian Starke, Ian Mason, and Taku Komura. 2022. Deepphase: Periodic au-toencoders for learning motion phase manifolds. ACM Transactions on Graphics(TOG) 41, 4 (2022), 1–13.[25] Aaron Van Den Oord, Oriol Vinyals, et al. 2017. Neural discrete representationlearning. Advances in neural information processing systems 30 (2017).[26] Alessandro Vinciarelli, Maja Pantic, and Hervé Bourlard. 2009. Social signalprocessing: Survey of an emerging domain. Image and vision computing 27, 12(2009), 1743–1759.[27] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, JaehongKim, and Geehyuk Lee. 2020. Speech gesture generation from the trimodal con-text of text, audio, and speaker identity. ACM Transactions on Graphics (TOG)39, 6 (2020), 1–16.[28] Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, TeodorNikolov, Mihail Tsakov, and Gustav Eje Henter. 2022. The GENEA ChallengeICMI ’23 Companion, October 9–13, 2023, Paris, France Kim and Yoo et al.2022: A large evaluation of data-driven co-speech gesture generation. In Proceed-ings of the 2022 International Conference on Multimodal Interaction . 736–747.[29] Fan Zhang, Naye Ji, Fuxing Gao, and Yongping Li. 2023. DiffMotion: Speech-driven gesture synthesis using denoising diffusion model. In International Con-ference on Multimedia Modeling . Springer, 231–242. |
FovoQL3nygw | FEIN-Z: Autoregressive Behavior Cloning for Speech-DrivenGesture GenerationLeon Harz∗lharz@techfak.uni-bielefeld.deBielefeld UniversityGermanyHendric Voß∗hvoss@techfak.uni-bielefeld.deSocial Cognitive Systems GroupBielefeld UniversityGermanyStefan Koppskopp@techfak.uni-bielefeld.deSocial Cognitive Systems GroupBielefeld UniversityGermanyABSTRACTHuman communication relies on multiple modalities such as verbalexpressions, facial cues, and bodily gestures. Developing compu-tational approaches to process and generate these multimodal sig-nals is critical for seamless human-agent interaction. A particularchallenge is the generation of co-speech gestures due to the largevariability and number of gestures that can accompany a verbalutterance, leading to a one-to-many mapping problem. This paperpresents an approach based on a Feature Extraction Infusion Net-work (FEIN-Z) that adopts insights from robot imitation learningand applies them to co-speech gesture generation. Building on theBC-Z architecture, our framework combines transformer architec-tures and Wasserstein generative adversarial networks. We describethe FEIN-Z methodology and evaluation results obtained within theGENEA Challenge 2023, demonstrating good results and significantimprovements in human-likeness over the GENEA baseline. Wediscuss potential areas for improvement, such as refining inputsegmentation, employing more fine-grained control networks, andexploring alternative inference methods.CCS CONCEPTS•Human-centered computing →Interactive systems andtools ;Empirical studies in interaction design ;HCI theory, conceptsand models ;•Computing methodologies →Neural networks ;Learning latent representations ;Unsupervised learning .KEYWORDSmachine learning; deep learning; co-speech gesture generation;gesture synthesis; multimodal data; transformer; behavior cloning;reinforcement learningACM Reference Format:Leon Harz∗, Hendric Voß∗, and Stefan Kopp. 2023. FEIN-Z: Autoregres-sive Behavior Cloning for Speech-Driven Gesture Generation. In INTER-NATIONAL CONFERENCE ON MULTIMODAL INTERACTION (ICMI ’23),October 9–13, 2023, Paris, France. ACM, New York, NY, USA, 10 pages.https://doi.org/10.1145/3577190.3616115∗Both authors contributed equally to the paperPermission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than theauthor(s) must be honored. Abstracting with credit is permitted. To copy otherwise, orrepublish, to post on servers or to redistribute to lists, requires prior specific permissionand/or a fee. Request permissions from permissions@acm.org.ICMI ’23, October 9–13, 2023, Paris, France©2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.ACM ISBN 979-8-4007-0055-2/23/10. . . $15.00https://doi.org/10.1145/3577190.36161151 INTRODUCTIONHuman communication is a multifaceted process that relies onvarious modalities, including verbal expressions, facial cues, andbodily gestures. Combining these modalities allows us to conveycomplex messages and facilitate meaningful interactions [ 9,50].Consequently, the development of machines that can process andgenerate these multi-modal signals is crucial to enable seamlessinteraction between humans and agents. 
A key aspect that makesgesture generation particularly challenging is the existence of multi-ple valid gestures for a given interaction. Unlike verbal expressions,which often have a single intended meaning, gestures can conveydifferent nuances and interpretations, leading to a one-to-manymapping problem [ 41]. Capturing this inherent variability and gen-erating contextually appropriate gestures is a complex task thatrequires careful consideration. The importance of gesture genera-tion extends beyond research to practical applications in real-worldscenarios and virtual environments. In human-robot interaction,gestures play a crucial role in enhancing communication and fa-cilitating natural interactions between humans and robotic agents[56]. Similarly, in virtual reality, realistic and expressive gesturescontribute to immersion and engagement, enabling more intuitiveand compelling experiences [ 35]. Therefore, the development ofrobust and effective gesture-generation methods has great potentialfor improving various areas of human-machine interaction.In this work, we propose the FEIN-Z framework, a combinationof the proposed Feature Extraction Infusion Network (FEIN) and thezero-shot learning aspect of the BC-Z architecture (Z). Inspired byrecent achievements in robotic imitation learning, we extend theBC-Z approach [ 27] intended to generalize robotic manipulationtasks to unseen problems, to the co-speech gesture generation do-main. As transformer architectures have shown promising resultsin a wide variety of domains [ 17,48], including co-speech gesturegeneration [ 38], we replace and extend multiple components of theoriginal BC-Z approach with a transformer architecture. Gener-ative adversarial networks (GAN) are widely used in the roboticand co-speech gesture generation domain [ 20,52]. Building uponthe insight gained from recent approaches [ 52], we propose to usea Wasserstein generative adversarial networks (WGAN) with aWasserstein divergence objective to guide our framework to gener-ate natural and expressive gestures. The released evaluation resultsof the GENEA Challenge 2023 show that our framework outper-forms the challenge baseline with regard to human-likeness bya significant margin and ranks in the top half of all evaluated ap-proaches [ 31]. In the next sections, we will first give a brief overviewof the existing work and current achievements of co-speech gestureICMI ’23, October 9–13, 2023, Paris, France Harz et al.generation (Section 2), before detailing the proposed FEIN-Z archi-tecture, the individual components, the data processing, and ourtraining procedure (Section 3). Finally, we will discuss the results ofthe performed evaluation (Section 4) and conclude with an outlookfor possible improvements of our work (Section 6).2 RELATED WORKGesture generation is an area of research that is rapidly progress-ing. Previous studies have explored various approaches, initiallyfocusing on rule-based methods [ 10,29,34,40] and simple com-putational models [ 8,19], and later transitioning to early machinelearning techniques [ 12,23]. Currently, data-driven approachesthat integrate multiple modalities are being employed [ 4,41,59],advancing the field even further.Initially, gesture generation relied on manually crafted rules,either directly applied to specific avatars or used in conjunctionwith computational models that estimated appropriate gesturesbased on accompanying speech [ 10,19,29,34]. 
Although these ap-proaches generally struggled to produce natural and fluent gestures,they did enable the creation of complex representative gesturesthat are challenging to achieve with current data-driven methods[5, 6, 29, 34].During the beginning of data-driven gesture generation, thefocus was primarily on single modalities, where gestures weregenerated based on previous gesture frames [ 47], textual inputs[12,56], or audio-driven inputs [ 18,21,23]. Recent research haswitnessed a notable shift towards the generation of multi-modalco-speech gestures. This approach integrates gestures with audio,text, and other input modalities to produce varied and natural ges-tures. To accomplish this, advanced techniques such as generaladversarial networks (GANs) [ 3,41,52,54,55], cyclic functions[26], glow networks with invertible convolutions [ 24], variationalautoencoders [ 38,46], and deep reinforcement learning have beenused [ 46]. Recurrent neural networks, specifically Bi-DirectionalLong Short-Term Memory (Bi-Directional LSTM) and gated recur-rent unit (GRU) [ 13,25], have demonstrated the ability to generatenatural co-speech gestures [ 23,57], with various adaptations ofrecurrent architectures still being utilized in recent approaches[28,30,44,51]. Notably, the incorporation of style embeddings hasfacilitated the generation of distinct gesture styles for individualspeakers, thereby enabling diverse variations in gestures that aretailored to specific styles or speakers [21, 55].Recent advancements in the field of co-speech gesture generationcan be broadly categorized into two main approaches: retrieval-based methods and learning-based methods. Retrieval-based meth-ods involve the creation or learning of predefined sets of gestureunits and employ techniques such as keyword matching, semanticanalysis, and prosody analysis to retrieve corresponding gesturesfrom a comprehensive database [ 59]. Conversely, learning-basedmethods focus on training models to directly predict co-speechgestures using paired co-speech gesture data [ 55]. In recent stud-ies, some researchers have automated the creation of gesture unitdatabases by leveraging training data. These gesture units are thenemployed to train deep learning models, enabling the generationof new and varied co-speech gestures [ 38]. Both retrieval-basedand learning-based methods have proven to be effective in address-ing the inherent challenge of one-to-many mapping in co-speechgestures [ 11,32,44,55]. Notably, recent work on retrieval-basedmethods have even demonstrated superior performance comparedto ground truth gestures [58, 59].Simultaneously, significant progress has been made in the realmof reinforcement learning for robot control, particularly in theutilization of text and visual data as input. Within this context,text data is commonly employed either as action descriptions orgoal descriptions. Recently, successful approaches have emergedleveraging large language models (LLMs), which generate suitableplans for given goals [ 1] [42] [36]. These approaches harness LLMsto break down goal descriptions into a sequence of feasible low-level actions expressed in natural language. Subsequently, the actiondescriptions undergo embedding and serve as additional input toa reinforcement learning model. 
As an example, PaLM-SayCanincorporates the BC-Z network [ 27] to acquire low-level robotskills by providing visual data of the current state alongside textdescriptions of planned actions.Both the co-speech gesture generation and reinforcement imita-tion learning domains share a common goal: to generate elaborateand complex outputs by acquiring knowledge from a relatively lim-ited data set. As the imitation learning domain has made significantprogress in minimizing the data requirements for generating com-plex outputs, we believe that these achievements can be leveragedin the gesture generation domain. Therefore, we propose our novelframework, which is built on the foundation of imitation learn-ing, with the expectation of extending these advances to gesturegeneration.3 MODEL AND METHODOur framework builds upon the BC-Z architecture by Jang et al .[27], which is a flexible imitation learning system that can learnfrom both demonstrations and interventions for a given Zero-Shottask. Similar to our approach, the BC-Z architecture generates itsoutput in an autoregressive manner. However, given the uniquedomain and data characteristics of co-speech gestures, we havemade several modifications to the backbone of the BC-Z architec-ture to adapt it to our domain. In particular, we replaced the visionnetwork component of BC-Z with an attention-based network thattakes inputs from each modality ( Transformer Network ). In addition,we refined the Feature-wise Linear Modulation (FiLM) network[43], while retaining the fundamental concept of linear modulationapplied to the previous embedding. We refer to this modified FiLMarchitecture as the Feature Extraction Infusion Network (FEIN) . Ourframework takes audio, text, and speaker identity information fromboth the main agent and the interlocutor as input, alongside ges-tures from the interlocutor. To incorporate the temporal dimensionof the provided data, we employ positional encoding techniquesproposed by Vaswani et al . [49] . The transformer network receivesaudio features, text features, and speaker identity information fromboth the main agent and the interlocutor. The FEIN module alsoutilizes this data, with the addition of previous t-gestures fromboth the main agent and the interlocutor. The output of the trans-former network is then combined with features extracted from theFEIN module. The resulting embedding is further processed by aFEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation ICMI ’23, October 9–13, 2023, Paris, Francejoint-specific Fully Connected Network (FCN). In addition to thearchitectural refinements, we utilize a Wasserstein GAN networkwith gradient divergence (WGAN-div) to improve the generationperformance of our framework [ 53]. To enhance the generationperformance of our framework we employ a discriminator withan FCN consisting of four linear layers, using the leaky ReLU acti-vation function [ 39]. Figure 1 gives an overview of our approach.In the following sections, we will provide a detailed description ofthe sub-modules of this framework, including the attention-basednetwork, FEIN, and the control network.3.1 Transformer BlocksThe presented framework incorporates a total of four transformerblocks, each possessing a consistent underlying architecture withdistinct parameters. These blocks comprise a multi-attention headfollowed by a feedforward network. 
To augment the capabilitiesof the feedforward network, we have introduced the Swish-GatedLinear Unit (SwiGLU) activation function [ 45] into the transformerblocks. As a result, the output yof the transformer blocks can becomputed as follows:MultiHead(Q,K,V)=Concat(head 1,..., headn)W0=x(1)f(x)=Swish(x·W1)⊗(x·W2) (2)y=f(x)·W3 (3)In the above equations, MultiHead denotes the multi-headed atten-tion layer, Swish represents the swish activation function and Wcorresponds to the weights of the linear functions.3.2 Transformer NetworkThe BC-Z framework initially relied on visual data, specificallyimages, to predict robot actions based on the current context. How-ever, our specific scenario lacks visual data, therefore requiringmodifications to the original architecture. To address this challenge,we adopt a transformer network, known for its capacity to modellong-term dependencies within structured input data. Central toour approach is the integration of audio and text input from boththe main agent and the interlocutor. Particularly, audio and textdata are processed independently. For each input modality, theframework computes an attention-based embedding, which learnsthe information and relationships present within the data. Theindividual attention-based embeddings obtained in the precedingstep are then aggregated and passed through an additional multi-attention mechanism, known as the ’Combined Transformer’. Thiscombination stage aims to identify and encapsulate important cuesrelated to the interplay between audio and text data. The resultantcomposite embedding effectively captures salient information anddata relationships, forming the fundamental basis for subsequentprocesses.3.3 Feature Extraction Infusion Network (FEIN)The FiLM network initially used in the BC-Z approach [ 27] requiresa task description and a human demonstration video as inputs.However, this approach isn’t directly applicable to our specificcase. Therefore, we designed a novel network architecture thatestablishes connections between the current audio-text inputs andthe gestures observed in the previous time window. Our dual goalswere to ensure coherent gesture generation by conditioning onprevious gestures and to inject additional contextual informationinto the current context.To achieve these goals, we use three separate stacks of 1D con-volutional layers to process the concatenated audio-text data andgesture information. This approach results in an embedding withan enriched spatial feature space, effectively capturing importantspatial relationships. For meaningful interplay within these embed-dings, a multi-head attention mechanism is incorporated. In thismechanism, the gesture embedding served as both query and value,while the audio-text embedding acts as the key. The goal of thisattention-based embedding is to learn complex dependencies be-tween gestures and audio-text data. The resulting attention-basedembedding then traverses two different feed-forward networks.Each network consisted of two linear layers with SiLU activationfunctions to promote non-linearity and information propagation. Anormalization layer completes each network, ensuring consistentand stable feature representations. This architectural configurationaims to facilitate the extraction of two essential feature networks:theγ-network and the β-network. These networks contain criticalinformation for the following control model. 
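A minimal sketch of a transformer block with the SwiGLU feed-forward network of Eqs. (1)-(3) is given below. The model width, head count, and the residual connections and layer normalisations are standard assumptions not spelled out in the equations, and the layer sizes are not the exact parameters of the four blocks used in the framework.

```python
# A minimal sketch of a transformer block with the SwiGLU feed-forward network
# from Eqs. (1)-(3). Residual connections and layer norms are standard
# additions assumed here; sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SwiGLUBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4, d_ff: int = 1024):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.w1 = nn.Linear(d_model, d_ff, bias=False)   # gate branch
        self.w2 = nn.Linear(d_model, d_ff, bias=False)   # value branch
        self.w3 = nn.Linear(d_ff, d_model, bias=False)   # output projection
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, q, k, v):
        # Eq. (1): multi-head attention over the modality embeddings
        x, _ = self.attn(q, k, v)
        x = self.norm1(x + q)
        # Eqs. (2)-(3): Swish(x W1) elementwise-multiplied by (x W2), then W3
        y = self.w3(F.silu(self.w1(x)) * self.w2(x))
        return self.norm2(y + x)


if __name__ == "__main__":
    block = SwiGLUBlock()
    feats = torch.randn(8, 100, 256)   # (batch, frames, features)
    print(block(feats, feats, feats).shape)
```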
Within the controlnetwork architecture, the role of the γ-network is to provide timinginformation about previous gestures to the embedding. This helpsto maintain gesture consistency across time windows and counter-act fragmented gestures. On the other hand, the β-network, due toits additive nature, provides nuanced details to the embedding. Thisfeature allows the framework to capture subtle gestures that mightbe suppressed by the relatively coarse influence of the γ-network.3.4 Control NetworkThe embedding network, derived from the transformer network,along with the γandβnetworks from the FEIN model, serve as in-puts for the control network. This network architecture is foundedTable 1: The employed joints and their corresponding cate-gorizations within the control networkBody part number of joints jointsroot 3 b_rootupper body 21 b_spine0, b_spine1,b_spine2, b_spine3,b_neck0, b_headleft leg 6 b_l_upleg, b_l_legright leg 6 b_r_upleg, b_r_legleft arm 18 b_l_shoulder, b_l_arm,b_l_arm_twist, b_l_forearm,b_l_wrist_twist, b_l_wristleft hand 48 b_l_pinky1 ...3, b_l_ring1...3,b_l_middle1...3, b_l_index1 ...3,b_l_thumb0...3right arm 18 b_r_shoulder, b_r_arm,b_r_arm_twist, b_r_forearm,b_r_wrist_twist, b_r_wristright hand 48 b_r_thumb0 ...3, b_r_pinky1 ...3,b_r_middle1 ...3, b_r_ring1...3,b_r_index1...3ICMI ’23, October 9–13, 2023, Paris, France Harz et al.Figure 1: Top: The proposed FEIN model with the convolutional embedder, transformer block, and γ- andβ-FCN. Bottom:Transformer model with transformer blocks. Right: Control network with convolutional layers and γandβinfusion. All inputs(Gesture, Text, Audio, Speaker ID) consist of concatenated speaker and interlocutor information. The subscripts (0:99) and(100:199) denote distinct time windows represented by the input data.on the framework proposed by Jang et al . [27] . Initially, the em-bedding undergoes convolutional layer processing, resulting in adistilled embedding. Subsequently, this distilled embedding is en-riched through element-wise multiplication with the γ-networkoutput, which effectively integrates contextual information fromthe FEIN module. A subsequent convolutional layer processes themodulated output, combining information and yielding a trans-formed embedding. To further infuse the embedding with contex-tual cues, the transformed embedding is subject to element-wiseaddition with the β-network output. This step augments the embed-ding with supplementary contextual information. Following a finalconvolutional layer, the output is normalized, yielding a vector thatmerges current relevant features with essential contextual informa-tion. This integration is pivotal for generating coherent gestures,especially when considering the influence of preceding gestures.This processed vector then progresses through a sequence of fullyconnected networks (FCNs), with each FCN generating joint con-figurations for specific body parts, see Figure 1. This design impartsfine-grained control over individual body parts, thus facilitatingprecise manipulation of the model’s movements. The employmentof independent body-part-specific FCNs allows the framework toextract distinct features from the shared embedding, enabling abody-part-specific feature space.3.5 LossThe loss functions used in our framework are defined as follows.For the discriminator, the loss function is given by:LDwdiv(x,D(z))=Dis(x)−Dis(D(z))+δ|∇ˆxDis(ˆx)|p(4)Here,Disrepresents the discriminator function, xrepresents theoriginal dataset, and zrepresents the reconstructed data. 
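Before detailing the loss hyperparameters, the γ/β infusion of the control network described in Section 3.4 can be sketched as follows; the channel sizes and the body-part subset are illustrative assumptions, not the exact configuration of our framework.

```python
# A minimal sketch of the gamma/beta feature infusion in the control network:
# convolutions over the transformer embedding, multiplied by the gamma-network
# output and shifted by the beta-network output, followed by body-part-specific
# fully connected heads. Sizes and the body-part subset are illustrative.
import torch
import torch.nn as nn


class ControlNetwork(nn.Module):
    def __init__(self, channels: int = 256, body_parts: dict = None):
        super().__init__()
        # joint dimensions per body part (subset of Table 1, for illustration)
        body_parts = body_parts or {"upper_body": 21, "left_arm": 18, "right_arm": 18}
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv3 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.LayerNorm(channels)
        self.heads = nn.ModuleDict(
            {name: nn.Linear(channels, dim) for name, dim in body_parts.items()})

    def forward(self, emb, gamma, beta):
        # emb, gamma, beta: (batch, time, channels)
        x = self.conv1(emb.transpose(1, 2)).transpose(1, 2)
        x = x * gamma                       # multiplicative infusion (gamma-network)
        x = self.conv2(x.transpose(1, 2)).transpose(1, 2)
        x = x + beta                        # additive infusion (beta-network)
        x = self.conv3(x.transpose(1, 2)).transpose(1, 2)
        x = self.norm(x)
        # one FCN per body part, concatenated into the full pose vector
        return torch.cat([head(x) for head in self.heads.values()], dim=-1)


if __name__ == "__main__":
    net = ControlNetwork()
    e = torch.randn(4, 100, 256)
    out = net(e, torch.rand(4, 100, 256), torch.randn(4, 100, 256))
    print(out.shape)  # torch.Size([4, 100, 57]) for the example body parts
```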
The hy-perparameter δcontrols the magnitude of the divergence penalty.The first component of the loss, Dis(x)−Dis(D(z)), measuresthe dissimilarity between the real sample xand the output of ourframework, D(z). The second term, δ|∇ˆxDis(ˆx)|p, corresponds tothe divergence penalty, which encourages the generated sampleD(z)to closely resemble the distribution of real data. The generatorloss function is defined as:LGwdiv =Dis(D(z)) (5)This loss function aims to minimize the output of the discriminator,specifically the evaluation of Dis(D(z)).For behavior cloning, we employ a scaled version of the smoothedL1 loss, defined as:L1= 0.5θ(xθ−zθ)2β, if|x−z|<βθ|xθ−zθ|−0.5β,otherwise(6)This loss function is applied to the positions yandˆy, velocitiesy′and ˆy′, and accelerations y′′and ˆy′′. For this, the gradients arecalculated using the following formula:f(y)=2∑︁i=0λidiydti(7)Lbc=L1(f(yi),f(ˆyi)) (8)In these equations, yrepresents the true gestures, while ˆydenotesthe predicted gestures. The function f(y)calculates the gradientsof the variable or function ywith respect to time. The superscriptFEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation ICMI ’23, October 9–13, 2023, Paris, FranceHuman-likeness ratingNA SG SF SJ SL SE SH BD SD BM SI SK SA SB SC020406080100Figure 2: Box plot visualization for the human-likeness study,provided by the GENEA Challenge 2023 [ 31]. Our frameworkis labeled SE. Median ratings are shown as red bars (with 0.05CI) and mean ratings as yellow diamonds (with 0.05 CI). Boxedges indicate the 25th and 75th percentiles. Whiskers cover95% of ratings for each condition.iindiydtiindicates the order of the derivative, ranging from 0 to 2.Theλiterms are scaling factors applied to the position, velocity,and acceleration losses.The termLbccorresponds to the loss function used for back-propagation. It is computed as the average of the individual losstermsLiover a dataset of size N. EachLimeasures the dissimilaritybetween the calculated gradients f(yi)and the target gradientsf(y∗i). Together, this loss ensures a temporal consistency of thegenerated gestures. The overall loss function used in our frame-work is a combination of the behavior cloning loss ( Lbc) and thediscriminator loss ( LGwdiv):Ltotal=Lbc+ 1n·λgLGwdiv(9)Here, 1n(s)is an indicator function defined as:1n(s)=(1,ifs%n=00,otherwise(10)This indicator function is used to determine when to apply thediscriminator loss. The parameter ncontrols the frequency of ap-plying the discriminator loss, and the scaling factor λgadjusts therelative importance of the discriminator loss compared to the be-havior cloning loss. By combining these components, the overallloss function guides the training process to improve the quality andconsistency of the generated gestures.3.6 Data ProcessingThe Genea Challenge 2023 provided an adapted version of theTalking With Hands 16.2M dataset [ 33], extended to a dyadic set-ting involving both a speaker and an interlocutor. This dataset en-compasses various modalities, including 3D full-body gesture data,audio data, text transcripts, and the speaker ID, all organized sepa-rately for the speaker and the interlocutor. As part of the challenge,the data was pre-separated into a training set of 371 sequences, avalidation set of 40 sequences, and a test set of 69 sequences. Eachsequence is approximately 1 minute in length, with a sample rateof 44100 Hz for the audio data. The gesture data was recorded at 30frames per second. 
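The combined objective in Eqs. (4)-(10) above can be sketched as follows. This is a simplified sketch, not our training code: the interpolated samples used for the divergence penalty, the finite differences standing in for the time derivatives, and the omission of the per-joint scaling in Eq. (6) are all simplifying assumptions, and the sign convention follows the equations as written.

```python
# Simplified sketch of the training objective in Eqs. (4)-(10): a
# Wasserstein-divergence critic loss, a generator term applied every n-th
# optimisation step, and a behaviour-cloning loss over positions, velocities
# and accelerations (finite differences approximate the time derivatives).
import torch
import torch.nn.functional as F


def critic_loss(critic, real, fake, delta=10.0, p=6):
    # Eq. (4): Dis(x) - Dis(D(z)) + delta * ||grad Dis(x_hat)||^p, with x_hat
    # taken as interpolated samples (an assumption); detach `fake` when
    # updating the critic.
    alpha = torch.rand(real.size(0), 1, 1, device=real.device)
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    penalty = grad.flatten(1).norm(2, dim=1).pow(p).mean()
    return critic(real).mean() - critic(fake).mean() + delta * penalty


def generator_loss(critic, fake):
    return critic(fake).mean()                      # Eq. (5)


def behaviour_cloning_loss(pred, target, lambdas=(1.0, 1.0, 1.0)):
    # pred, target: (batch, time, joints); Eqs. (6)-(8), one smooth-L1 term per
    # derivative order, weighted by lambda_i
    loss = 0.0
    for i, lam in enumerate(lambdas):
        p, t = pred, target
        for _ in range(i):                          # i-th temporal difference
            p, t = p[:, 1:] - p[:, :-1], t[:, 1:] - t[:, :-1]
        loss = loss + lam * F.smooth_l1_loss(p, t)
    return loss


def total_loss(critic, pred, target, step, n=4, lambda_g=0.05):
    # Eq. (9): L_bc plus the generator term every n-th step (indicator of Eq. (10))
    loss = behaviour_cloning_loss(pred, target)
    if step % n == 0:
        loss = loss + lambda_g * generator_loss(critic, pred)
    return loss
```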
Since the challenge required the generation ofthe speaker for the test set, this data was omitted.For our approach, we built upon the preprocessing pipeline es-tablished by Chang et al . [11] , making necessary modifications tosuit our specific requirements. For the audio data, we used multiplefeature extraction techniques to obtain three different features: MelFrequency Cepstral Coefficients (MFCC) with 40 dimensions, MelSpectrograms with 64 filter banks, and prosody features. All audiofeatures were computed using a window length of 4096 and a hoplength of 1470. Regarding the text transcripts, we used the FastTextword embedding model [ 7], which assigns a 300-dimensional vectorrepresentation to each word in the transcript. Since the temporalduration of each word is known, we generated a vector of size [se-quence length, 300] containing the corresponding word embeddingvector for each word’s duration. For the gesture data, we trans-formed the rotation of each body and finger joint in the BVH fileinto an exponential map representation [ 22]. This transformationresulted in 56 3D body joints for the gesture data.In the post-processing phase of the gesture output, we performedtwo operations. First, we clipped the angle of each generated bodyjoint to be within the range of the 2nd and 98th percentiles ofthe corresponding joint in the training data. This clipping stepensured that the generated angles remained within a reasonablerange. Afterward, we applied a rolling window calculation over 50frames to smooth the generated output and improve its temporalcoherence.3.7 Training procedureThe training procedure incorporates both behavior cloning and theWGAN architecture. In our setup, the network is responsible forgenerating gestures, while the discriminator is used to discriminatebetween the generated data and the original data. We chose a batchsize of 128 and a sequence length of 200 frames, which correspondsto two frame windows: t−1:=[0−99]andt0:=[100−199]. For theoptimizer, we use AdamW [ 37] with a weight decay parameter of0.01 for both the FEIN network and the discriminator. For the FEINmodel, we select a learning rate of 5e−5, while the discriminatorutilizes a learning rate of 1e−4. During training, we set the scalingfactorλgto0.05.The audio and text data used in training comes from t0, whilethe gesture data is sourced from t−1. After each prediction step, weoptimize the model using the loss function described in 9, and weoptimize the discriminator accordingly using its loss function, asdefined in 4. To prevent the network from consistently outperform-ing the discriminator and to stabilize the training, we apply the 5loss only every n=4steps. In total, we trained our framework for60 epochs. Every 10 epochs, we computed the validation loss andused the best-performing model to generate the evaluation data.ICMI ’23, October 9–13, 2023, Paris, France Harz et al.4 EVALUATIONDuring the training phase of the framework, we conducted a thor-ough analysis of various framework configurations, experimentingwith different numbers of transformer blocks and parameters. Wealso explored frameworks that generated gestures for both the mainagent and the interlocutor, as well as different input data for theFEIN model. Among these tested frameworks, many did not yieldsatisfactory results in terms of generating realistic and coherentgestures. 
As a result, we selected the framework proposed in thisstudy as the most suitable for our purposes.The main evaluation of the framework was performed along-side other approaches within the GENEA Challenge 2023. Sincethe evaluation of generated co-speech gestures is largely subjec-tive and objective measures that strongly correlate with subjec-tive evaluations are lacking [ 41], the evaluation focused primar-ily on subjective measures. Three specific aspects were evaluated:"Human-Likeness", "Appropriateness for Agent Speech", and "Ap-propriateness for the Interlocutor". To ensure anonymity, all pub-lished results were anonymized and assigned unique labels. Ourframework was labeled SE.4.1 Human-LikenessThe results of the Human-Likeness evaluation are shown in Figure2, illustrating the rating distribution obtained for the different ap-proaches. Figure 3 highlights the significant differences betweenthe competitors. Here, our framework receives significantly higherratings than the dyadic baseline ( BD), the monadic baseline ( BM),as well as the approaches SH,SD,SI,SK,SA,SB, and SC. On...over condition x, in terms of human-likenessSignificant preference for condition y...NA SG SF SJ SL SE SH BD SD BM SI SK SA SB SCNASGSFSJSLSESHBDSDBMSISKSASBSCFigure 3: Significant differences between all approaches, pro-vided by GENEA Challenge 2023 [ 31]. Our framework is la-beled SE. White indicates that the condition on the y-axis israted significantly higher than the one on the x-axis, whileblack indicates the opposite (y-rated below x). Gray indicatesno statistically significant difference at a significance levelofα=0.05, after applying the Holm-Bonferroni correction.the other hand, compared to the natural motion ( NA) and the ap-proaches SGandSF, our framework receives significantly lowerratings for human-likeness. There were no significant differences interms of human-likeness between our approach and the approachesSJandSL.A significant limitation of our approach, especially concerninghuman-like gesturing, was the lack of finger movement in all of thegenerated gestures. Although we trained our framework to produceoutput for the finger bones, the resulting gestures consistently ex-hibited a static finger position. Any changes observed in the fingerbones were primarily intended to prevent the introduction of arti-facts, rather than to add meaningful information to the generatedgestures.Another notable issue was the rapid change of poses in ourframework. Although the evaluation only captured footage fromthe knees up, to prevent any foot sliding from influencing the eval-uation, our model consistently exhibited movements that involveda redistribution of weight in the lower part of the torso. Such move-ments may have compromised the naturalness of the generatedgestures and led to a lower ranking in the human-likeness evalua-tion.4.2 AppropriatenessThe results of the speech appropriateness evaluation for the mainagent are depicted in Figure 4a. These ratings indicate the likelihoodof each framework being preferred with matching or mismatchinggestures. Our proposed framework, labeled SE, demonstrates sta-tistical significance in terms of speech appropriateness comparedto random chance. However, it is notably inferior to frameworkSG, which exhibits significantly better performance. Additionally,there is no significant difference between our framework and theapproaches SJ,SF,SK,SD,SI,SK,SB,SA, and SHin terms ofspeech appropriateness. 
The results of the appropriateness of gestures in response to the interlocutor are presented in Figure 4b. These ratings reflect the likelihood of each framework being preferred with matching or mismatching gestures. Our framework does not exhibit statistical significance compared to random chance in this aspect. Our model does achieve a significantly higher mean appropriateness score (MAS) compared to frameworks SG and SH, and a significantly lower MAS compared to the natural motion NA. Furthermore, our model does not differ significantly from the dyadic and monadic baselines, as well as frameworks SA, SB, SL, SF, SI, SD, SJ, SC, and SK, in terms of appropriateness of gestures in response to the interlocutor.

Figure 4: Bar plots visualizing the response distribution in the appropriateness studies, provided by the GENEA Challenge 2023 [31]; (a) appropriateness for agent speech, (b) appropriateness for the interlocutor. Our framework is labeled SE. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar being proportional to the fraction of responses in each category. Lighter colors correspond to slight preference, and darker colors to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score.

The evaluation results presented here show a notable discrepancy when compared to the results of the human-likeness evaluation. While our framework is able to generate co-speech gestures that are perceived as more human-like than the baseline used in the challenge, this does not mean that the generated gestures are perceived as more appropriate for the given context than the baseline. Although the lack of finger bone information could be a possible explanation for this, we suggest that it is indicative of a general problem common to all current approaches to co-speech gesture generation. Current approaches excel at producing gestures that appear natural and unobtrusive within a given conversation, which is already a commendable achievement for human-agent interaction. However, this still falls well short of replicating human-to-human interaction. In human-to-human communication, individuals convey additional meaning through their gestures [14], which is based on a shared mental model of the current conversation, themselves, and the conversation partner [15, 16]. With this shared understanding, conversational partners can adapt their gestures to each other and effectively convey meaningful information.
Since our framework, and to the best of our knowledge all other available co-speech gesture approaches, lacks this essential insight into the conversation partner, the generated gestures appear highly interchangeable to any human evaluator.

5 ABLATION STUDY
In order to assess the specific contributions of each component within our proposed framework, we conducted an ablation study. First, different input configurations were investigated, including the exclusion of all textual input ("w/o text"), the exclusion of all audio input ("w/o audio"), and the selective removal of these modalities for the main speaker ("w/o main audio" and "w/o main text") as well as for the interlocutor ("w/o inter audio" and "w/o inter text"). Furthermore, different architectural configurations were explored, including deactivation of the output of the combined transformer ("w/o transformer"), deactivation of the β-network ("w/o β-network"), and exclusion of the multiplication process involving the γ-network ("w/o γ-network"). The distinction in the generated gestures was measured using the Fréchet Gesture Distance (FGD), as defined by Yoon et al. [55], for each modification; a minimal sketch of this metric is given at the end of this section. The evaluation of this distance was performed both in the feature space of the autoencoder network provided by the GENEA 2023 challenge and in the raw data space, similar to Ahuja et al. [2]. Detailed results are presented in Table 2. We make an example video of all modifications available online (https://vimeo.com/853326587).

Table 2: The Fréchet Gesture Distance (FGD) for each ablation modification, calculated both in the feature space (FGD F-space) and the raw data space (FGD R-space). For both distances, lower is better.

Methods          | FGD F-space ↓ | FGD R-space ↓
natural motion   | 0.00          | 0.00
w/o transformer  | 169.93        | 3334.14
w/o γ-network    | 84.45         | 2667.33
w/o β-network    | 61.76         | 1879.82
w/o audio        | 50.93         | 965.05
w/o text         | 43.90         | 1099.48
w/o main audio   | 34.98         | 758.62
w/o inter text   | 31.26         | 767.28
w/o main text    | 29.49         | 777.91
w/o inter audio  | 28.54         | 680.66
original         | 23.03         | 533.04

As can be expected, each modification of the framework leads to an increase in the FGD, both in the feature space and in the raw data space. In terms of the modality-specific inputs associated with the interactive partner, all modifications lead to a comparable increase in the FGD. In particular, the removal of the interlocutor's audio produced the smallest change, while the exclusion of the main speaker's audio produced the largest change. The complete removal of both textual and audio information led to a sharp increase in FGD. Visual inspection of the generated gestures revealed instances of elaborate but misaligned gestures in cases of audio removal, whereas small and infrequent gestures were observed following text removal.

Looking at the modifications of the architectural configurations, it becomes clear that the transformer model has successfully learned to generate the gestures, since its removal leads to strongly degraded performance and the largest increase in FGD of all modifications. Similarly, the removal of the β-network and the γ-network leads to a deterioration of the performance. Looking at the visual results of the β-network ablation, the gestures still show a natural, fluid movement but are mainly concentrated in front of the chest and do not show any obvious finger movement. On the other hand, the visual results from the γ-network ablation show fast, erratic movements of the hands and upper body, with some unnatural poses. These results support our intended design choices, with the γ-network focusing mainly on smoothing the temporal information of the generated gestures, while the β-network refines the generated gestures to allow for more elaborate hand movements.
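As referenced above, the following is a minimal sketch of how an FGD-style score can be computed. Like the FID it is derived from, it is the Fréchet distance between Gaussians fitted to two feature sets; the random arrays stand in for features extracted from natural and generated motion (e.g. by the challenge autoencoder), and the function is an illustration rather than the evaluation code used here.

```python
# Minimal Fréchet-distance sketch between two sets of pre-extracted features.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(x: np.ndarray, y: np.ndarray) -> float:
    """x, y: (num_clips, feature_dim) arrays of motion features."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cov_x = np.cov(x, rowvar=False)
    cov_y = np.cov(y, rowvar=False)
    covmean = sqrtm(cov_x @ cov_y)
    if np.iscomplexobj(covmean):          # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real
    diff = mu_x - mu_y
    return float(diff @ diff + np.trace(cov_x + cov_y - 2.0 * covmean))

real_feats = np.random.randn(200, 32)     # stand-in for features of natural motion
gen_feats = np.random.randn(200, 32)      # stand-in for features of generated motion
print(frechet_distance(real_feats, gen_feats))
```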
6 CONCLUSION
Our framework presents a novel approach to co-speech gesture generation inspired by robotic imitation learning and based on a behavior cloning architecture. We combine a transformer architecture with a generative adversarial network to create a model that ranks in the top half of the GENEA Challenge 2023 [31]. Although the model did not achieve results comparable to natural motion, we believe that additional training time and more sophisticated input segmentation could lead to improved results. An effective strategy may involve the use of only historical data in the FEIN model to ensure that the input data consists only of aligned gesture, audio, and text data. In addition, the use of a finer-grained control network that distinguishes separate body parts, such as hands and arms, could have the potential to improve the generated gestures. Increasing the feedback provided by the discriminator model in later stages of training is another way to improve performance, as the discriminator shows diminishing returns as training progresses. Additionally, selectively freezing certain models within our framework during later stages of training to focus on refining gestures could lead to performance improvements. Similarly, exploring alternative inference methods, such as predicting one frame at a time or adjusting the time window, may also help to improve the capabilities of the framework. In conclusion, we believe that our architecture demonstrates the potential to generate gestures that exhibit some human-like characteristics, and we believe that there are several ways in which our framework could be improved in the future. Finally, we hypothesize that the integration of frameworks introduced in multimodal robot learning could further enhance the performance of future gesture generation models.

REFERENCES
[1] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. 2022. Do As I Can and Not As I Say: Grounding Language in Robotic Affordances. arXiv preprint arXiv:2204.01691.
[2] Chaitanya Ahuja, Dong Won Lee, Ryo Ishii, and Louis-Philippe Morency. 2020. No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures. In Findings of the Association for Computational Linguistics: EMNLP 2020. Association for Computational Linguistics, Online, 1884–1895. https://doi.org/10.18653/v1/2020.findings-emnlp.170
[3] Chaitanya Ahuja, Dong Won Lee, Yukiko I. Nakano, and Louis-Philippe Morency. 2020. Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach.
https://doi.org/10.48550/arXiv.2007.12553arXiv:2007.12553 [cs].[4]Simon Alexanderson, Éva Székely, Gustav Eje Henter, Taras Kucherenko, andJonas Beskow. 2020. Generating coherent spontaneous speech and gesture fromtext. In Proceedings of the 20th ACM International Conference on Intelligent VirtualAgents . 1–3. https://doi.org/10.1145/3383652.3423874 arXiv:2101.05684 [cs, eess].[5]James Allen, Mehdi Manshadi, Myroslava Dzikovska, and Mary Swift. 2007. Deeplinguistic processing for spoken dialogue systems. In Proceedings of the Work-shop on Deep Linguistic Processing - DeepLP ’07 . Association for ComputationalLinguistics, Prague, Czech Republic, 49. https://doi.org/10.3115/1608912.1608922[6]Kirsten Bergmann, Sebastian Kahl, and Stefan Kopp. 2013. Modeling the SemanticCoordination of Speech and Gesture under Cognitive and Linguistic Constraints.InIntelligent Virtual Agents , David Hutchison, Takeo Kanade, Josef Kittler, Jon M.Kleinberg, Friedemann Mattern, John C. Mitchell, Moni Naor, Oscar Nierstrasz,C. Pandu Rangan, Bernhard Steffen, Madhu Sudan, Demetri Terzopoulos, DougTygar, Moshe Y. Vardi, Gerhard Weikum, Ruth Aylett, Brigitte Krenn, CatherinePelachaud, and Hiroshi Shimodaira (Eds.). Vol. 8108. Springer Berlin Heidel-berg, Berlin, Heidelberg, 203–216. https://doi.org/10.1007/978-3-642-40415-3_18Series Title: Lecture Notes in Computer Science.[7]Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017.Enriching Word Vectors with Subword Information. https://doi.org/10.48550/arXiv.1607.04606 arXiv:1607.04606 [cs].[8]Matthew Brand and Aaron Hertzmann. 2000. Style machines. In Proceedingsof the 27th annual conference on Computer graphics and interactive techniques(SIGGRAPH ’00) . ACM Press/Addison-Wesley Publishing Co., USA, 183–192.https://doi.org/10.1145/344779.344865[9]Justine Cassell, David Mcneill, and Karl-Erik Mccullough. 1994. Speech-GestureMismatches: Evidence for One Underlying Representation of Linguistic andNonlinguistic Information. Cognition 7 (Jan. 1994). https://doi.org/10.1075/pc.7.1.03cas[10] Justine Cassell, Hannes Högni Vilhjálmsson, and Timothy Bickmore. 2004. BEAT:the Behavior Expression Animation Toolkit. In Life-Like Characters: Tools, Affec-tive Functions, and Applications , Helmut Prendinger and Mitsuru Ishizuka (Eds.).Springer, Berlin, Heidelberg, 163–185. https://doi.org/10.1007/978-3-662-08373-4_8[11] Che-Jui Chang, Sen Zhang, and Mubbasir Kapadia. 2022. The IVI Lab entry tothe GENEA Challenge 2022 – A Tacotron2 Based Method for Co-Speech GestureGeneration With Locality-Constraint Attention Mechanism. In INTERNATIONALCONFERENCE ON MULTIMODAL INTERACTION . ACM, Bengaluru India, 784–789. https://doi.org/10.1145/3536221.3558060[12] Chung-Cheng Chiu, Louis-Philippe Morency, and Stacy Marsella. 2015. PredictingCo-verbal Gestures: A Deep and Temporal Modeling Approach. In IntelligentVirtual Agents , Willem-Paul Brinkman, Joost Broekens, and Dirk Heylen (Eds.).Vol. 9238. Springer International Publishing, Cham, 152–166. https://doi.org/10.1007/978-3-319-21996-7_17 Series Title: Lecture Notes in Computer Science.[13] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau,Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning PhraseRepresentations using RNN Encoder-Decoder for Statistical Machine Translation.https://doi.org/10.48550/arXiv.1406.1078 arXiv:1406.1078 [cs, stat].[14] Sharice Clough and Melissa C. Duff. 2020. 
The Role of Gesture in Commu-nication and Cognition: Implications for Understanding and Treating Neuro-genic Communication Disorders. Frontiers in Human Neuroscience 14 (2020).https://doi.org/10.3389/fnhum.2020.00323[15] Ilaria Cutica and Monica Bucciarelli. 2011. “The More You Gesture, the Less IGesture”: Co-Speech Gestures as a Measure of Mental Model Quality. Journal ofNonverbal Behavior 35, 3 (Sept. 2011), 173–187. https://doi.org/10.1007/s10919-011-0112-7[16] Ilaria Cutica and Monica Bucciarelli. 2013. Cognitive change in learning fromtext: Gesturing enhances the construction of the text mental model. Journalof Cognitive Psychology 25, 2 (March 2013), 201–209. https://doi.org/10.1080/20445911.2012.743987[17] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn,Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer,Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. AnImage is Worth 16x16 Words: Transformers for Image Recognition at Scale.https://doi.org/10.48550/arXiv.2010.11929 arXiv:2010.11929 [cs].[18] Ylva Ferstl, Michael Neff, and Rachel McDonnell. 2019. Multi-objective adversarialgesture generation. In Proceedings of the 12th ACM SIGGRAPH Conference onMotion, Interaction and Games (MIG ’19) . Association for Computing Machinery,New York, NY, USA, 1–10. https://doi.org/10.1145/3359566.3360053[19] Aphrodite Galata, Neil Johnson, and David Hogg. 2001. Learning Variable-LengthMarkov Models of Behavior. Computer Vision and Image Understanding 81, 3(March 2001), 398–413. https://doi.org/10.1006/cviu.2000.0894[20] Chongkai Gao, Haichuan Gao, Shangqi Guo, Tianren Zhang, and Feng Chen. 2021.CRIL: Continual Robot Imitation Learning via Generative and Prediction Model.In2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) .6747–5754. https://doi.org/10.1109/IROS51168.2021.9636069 ISSN: 2153-0866.[21] Shiry Ginosar, Amir Bar, Gefen Kohavi, Caroline Chan, Andrew Owens, andJitendra Malik. 2019. Learning Individual Styles of Conversational Gesture.https://doi.org/10.48550/arXiv.1906.04160 arXiv:1906.04160 [cs, eess].[22] F. Sebastian Grassia. 1998. Practical Parameterization of Rotations Using theExponential Map. Journal of Graphics Tools 3, 3 (Jan. 1998), 29–48. https://doi.org/10.1080/10867651.1998.10487493[23] Dai Hasegawa, Naoshi Kaneko, Shinichi Shirakawa, Hiroshi Sakuta, and KazuhikoSumi. 2018. Evaluation of Speech-to-Gesture Generation Using Bi-DirectionalLSTM Network. In Proceedings of the 18th International Conference on IntelligentVirtual Agents (IVA ’18) . Association for Computing Machinery, New York, NY,USA, 79–86. https://doi.org/10.1145/3267851.3267878[24] Gustav Eje Henter, Simon Alexanderson, and Jonas Beskow. 2020. MoGlow:Probabilistic and controllable motion synthesis using normalising flows. ACMTransactions on Graphics 39, 6 (Dec. 2020), 1–14. https://doi.org/10.1145/3414685.3417836 arXiv:1905.06598 [cs, eess, stat].[25] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory.Neural Computation 9, 8 (Nov. 1997), 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735 Conference Name: Neural Computation.[26] Daniel Holden, Taku Komura, and Jun Saito. 2017. Phase-functioned neuralnetworks for character control. ACM Transactions on Graphics 36, 4 (Aug. 2017),1–13. https://doi.org/10.1145/3072959.3073663[27] Eric Jang, Alex Irpan, Mohi Khansari, Daniel Kappler, Frederik Ebert, Corey Lynch,Sergey Levine, and Chelsea Finn. 2022. 
BC-Z: Zero-Shot Task Generalizationwith Robotic Imitation Learning. In Proceedings of the 5th Conference on RobotLearning (Proceedings of Machine Learning Research, Vol. 164) , Aleksandra Faust,David Hsu, and Gerhard Neumann (Eds.). PMLR, 991–1002. https://proceedings.mlr.press/v164/jang22a.html[28] Naoshi Kaneko, Yuna Mitsubayashi, and Geng Mu. 2022. TransGesture: Au-toregressive Gesture Generation with RNN-Transducer. In INTERNATIONALCONFERENCE ON MULTIMODAL INTERACTION . ACM, Bengaluru India, 753–757. https://doi.org/10.1145/3536221.3558061[29] Stefan Kopp, Brigitte Krenn, Stacy Marsella, Andrew N. Marshall, CatherinePelachaud, Hannes Pirker, Kristinn R. Thórisson, and Hannes Vilhjálmsson.2006. Towards a Common Framework for Multimodal Generation: The BehaviorMarkup Language. In Intelligent Virtual Agents (Lecture Notes in Computer Science) ,Jonathan Gratch, Michael Young, Ruth Aylett, Daniel Ballin, and Patrick Olivier(Eds.). Springer, Berlin, Heidelberg, 205–217. https://doi.org/10.1007/11821830_17[30] Vladislav Korzun, Anna Beloborodova, and Arkady Ilin. 2022. ReCell: repli-cating recurrent cell for auto-regressive pose generation. In INTERNATIONALCONFERENCE ON MULTIMODAL INTERACTION . ACM, Bengaluru India, 94–97.https://doi.org/10.1145/3536220.3558801[31] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. InProceedings of the ACM International Conference on Multimodal Interaction (ICMI’23). ACM.[32] Dong Won Lee, Chaitanya Ahuja, and Louis-Philippe Morency. 2021. CrossmodalClustered Contrastive Learning: Grounding of Spoken Language to Gesture. InCompanion Publication of the 2021 International Conference on Multimodal Inter-action . ACM, Montreal QC Canada, 202–210. https://doi.org/10.1145/3461615.3485408[33] Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha Srinivasa,and Yaser Sheikh. 2019. Talking With Hands 16.2M: A Large-Scale Dataset ofSynchronized Body-Finger Motion and Audio for Conversational Motion Analysisand Synthesis. In 2019 IEEE/CVF International Conference on Computer Vision(ICCV) . IEEE, Seoul, Korea (South), 763–772. https://doi.org/10.1109/ICCV.2019.00085[34] Jina Lee and Stacy Marsella. 2006. Nonverbal Behavior Generator for EmbodiedConversational Agents. In Intelligent Virtual Agents (Lecture Notes in ComputerICMI ’23, October 9–13, 2023, Paris, France Harz et al.Science) , Jonathan Gratch, Michael Young, Ruth Aylett, Daniel Ballin, and PatrickOlivier (Eds.). Springer, Berlin, Heidelberg, 243–255. https://doi.org/10.1007/11821830_20[35] Yang Li, Jin Huang, Feng Tian, Hong-An Wang, and Guo-Zhong Dai. 2019. Gestureinteraction in virtual reality. Virtual Reality & Intelligent Hardware 1, 1 (Feb.2019), 84–112. https://doi.org/10.3724/SP.J.2096-5796.2018.0006[36] Kevin Lin, Christopher Agia, Toki Migimatsu, Marco Pavone, and JeannetteBohg. 2023. Text2Motion: From Natural Language Instructions to Feasible Plans.arXiv:2303.12153 [cs.RO][37] Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization.https://doi.org/10.48550/arXiv.1711.05101 arXiv:1711.05101 [cs, math].[38] Shuhong Lu and Andrew Feng. 2022. The DeepMotion entry to the GENEA Chal-lenge 2022. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION .ACM, Bengaluru India, 790–796. https://doi.org/10.1145/3536221.3558059[39] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. [n. d.]. 
Rectifier Nonlineari-ties Improve Neural Network Acoustic Models. ([n. d.]).[40] Stacy Marsella, Yuyu Xu, Margaux Lhommet, Andrew Feng, Stefan Scherer, andAri Shapiro. 2013. Virtual character performance from speech. In Proceedingsof the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation .ACM, Anaheim California, 25–35. https://doi.org/10.1145/2485895.2485900[41] Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter,and Michael Neff. 2023. A Comprehensive Review of Data-Driven Co-SpeechGesture Generation. Computer Graphics Forum 42, 2 (May 2023), 569–596. https://doi.org/10.1111/cgf.14776 arXiv:2301.05339 [cs].[42] Norman Di Palo, Arunkumar Byravan, Leonard Hasenclever, Markus Wulfmeier,Nicolas Heess, and Martin Riedmiller. 2023. Towards A Unified Agent withFoundation Models. In Workshop on Reincarnating Reinforcement Learning atICLR 2023 . https://openreview.net/forum?id=JK_B1tB6p-[43] Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and AaronCourville. 2018. Film: Visual reasoning with a general conditioning layer. InProceedings of the AAAI conference on artificial intelligence , Vol. 32.[44] Khaled Saleh. 2022. Hybrid Seq2Seq Architecture for 3D Co-Speech Gesture Gen-eration. In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION .ACM, Bengaluru India, 748–752. https://doi.org/10.1145/3536221.3558064[45] Noam Shazeer. 2020. GLU Variants Improve Transformer. CoRR abs/2002.05202(2020). arXiv:2002.05202 https://arxiv.org/abs/2002.05202[46] Mingyang Sun, Mengchen Zhao, Yaqing Hou, Minglei Li, Huang Xu, Songcen Xu,and Jianye Hao. [n. d.]. Co-Speech Gesture Synthesis by Reinforcement LearningWith Contrastive Pre-Trained Rewards. ([n. d.]).[47] Graham W. Taylor and Geoffrey E. Hinton. 2009. Factored conditional restrictedBoltzmann Machines for modeling motion style. In Proceedings of the 26th AnnualInternational Conference on Machine Learning . ACM, Montreal Quebec Canada,1025–1032. https://doi.org/10.1145/1553374.1553505[48] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-AnneLachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro,Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guil-laume Lample. 2023. LLaMA: Open and Efficient Foundation Language Models.https://doi.org/10.48550/arXiv.2302.13971 arXiv:2302.13971 [cs].[49] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is Allyou Need. In Advances in Neural Information Processing Systems , I. Guyon, U. VonLuxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.),Vol. 30. Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf[50] Petra Wagner, Zofia Malisz, and Stefan Kopp. 2014. Gesture and speech ininteraction: An overview. Speech Communication 57 (Feb. 2014), 209–232. https://doi.org/10.1016/j.specom.2013.09.008[51] Jonathan Windle, David Greenwood, and Sarah Taylor. 2022. UEA Digital Humansentry to the GENEA Challenge 2022. In INTERNATIONAL CONFERENCE ONMULTIMODAL INTERACTION . ACM, Bengaluru India, 771–777. https://doi.org/10.1145/3536221.3558065[52] Bowen Wu, Chaoran Liu, Carlos T. Ishi, and Hiroshi Ishiguro. 2021. ProbabilisticHuman-like Gesture Synthesis from Speech using GRU-based WGAN. In Com-panion Publication of the 2021 International Conference on Multimodal Interaction .ACM, Montreal QC Canada, 194–201. 
https://doi.org/10.1145/3461615.3485407[53] Jiqing Wu, Zhiwu Huang, Janine Thoma, Dinesh Acharya, and Luc Van Gool. 2018.Wasserstein Divergence for GANs. https://doi.org/10.48550/arXiv.1712.01026arXiv:1712.01026 [cs].[54] Sicheng Yang, Zhiyong Wu, Minglei Li, Mengchen Zhao, Jiuxin Lin, Liyang Chen,and Weihong Bao. 2022. The ReprGesture entry to the GENEA Challenge 2022.InINTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION . ACM,Bengaluru India, 758–763. https://doi.org/10.1145/3536221.3558066[55] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, JaehongKim, and Geehyuk Lee. 2020. Speech Gesture Generation from the TrimodalContext of Text, Audio, and Speaker Identity. ACM Transactions on Graphics 39,6 (Dec. 2020), 1–16. https://doi.org/10.1145/3414685.3417838 arXiv:2009.02119[cs].[56] Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and GeehyukLee. 2019. Robots Learn Social Skills: End-to-End Learning of Co-Speech GestureGeneration for Humanoid Robots. In 2019 International Conference on Roboticsand Automation (ICRA) . IEEE, Montreal, QC, Canada, 4303–4309. https://doi.org/10.1109/ICRA.2019.8793720[57] Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and GeehyukLee. 2019. Robots Learn Social Skills: End-to-End Learning of Co-Speech GestureGeneration for Humanoid Robots. In 2019 International Conference on Roboticsand Automation (ICRA) . IEEE, Montreal, QC, Canada, 4303–4309. https://doi.org/10.1109/ICRA.2019.8793720[58] Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2022. The GENEA Challenge 2022: A largeevaluation of data-driven co-speech gesture generation. In INTERNATIONALCONFERENCE ON MULTIMODAL INTERACTION . ACM, Bengaluru India, 736–747. https://doi.org/10.1145/3536221.3558058[59] Chi Zhou, Tengyue Bian, and Kang Chen. 2022. GestureMaster: Graph-basedSpeech-driven Gesture Generation. In INTERNATIONAL CONFERENCE ON MUL-TIMODAL INTERACTION . ACM, Bengaluru India, 764–770. https://doi.org/10.1145/3536221.3558063 |
zrcgseqv0n2 | The DiffuseStyleGesture+ entry to the GENEA Challenge 2023Sicheng Yang∗Haiwei Xue∗Shenzhen International Graduate School, TsinghuaUniversity, Shenzhen, Chinayangsc21@mails.tsinghua.edu.cnxhw22@mails.tsinghua.edu.cnZhiyong Wu†Shenzhen International Graduate School, TsinghuaUniversity, Shenzhen, ChinaThe Chinese University of Hong KongHong Kong SAR, Chinazywu@sz.tsinghua.edu.cnMinglei Li†Zonghong DaiHuawei Cloud Computing Technologies Co., LtdShenzhen, Chinaliminglei29@huawei.comdaizonghong@huawei.comZhensong ZhangSongcen XuXiaofei WuHuawei Noah’s Ark Lab, Shenzhen, Chinazhangzhensong@huawei.comxusongcen@huawei.comwuxiaofei2@huawei.comABSTRACTIn this paper, we introduce the DiffuseStyleGesture+, our solutionfor the Generation and Evaluation of Non-verbal Behavior for Em-bodied Agents (GENEA) Challenge 2023, which aims to foster thedevelopment of realistic, automated systems for generating conver-sational gestures. Participants are provided with a pre-processeddataset and their systems are evaluated through crowdsourcedscoring. Our proposed model, DiffuseStyleGesture+, leverages adiffusion model to generate gestures automatically. It incorporatesa variety of modalities, including audio, text, speaker ID, and seedgestures. These diverse modalities are mapped to a hidden spaceand processed by a modified diffusion model to produce the corre-sponding gesture for a given speech input. Upon evaluation, theDiffuseStyleGesture+ demonstrated performance on par with thetop-tier models in the challenge, showing no significant differenceswith those models in human-likeness, appropriateness for the in-terlocutor, and achieving competitive performance with the bestmodel on appropriateness for agent speech. This indicates that ourmodel is competitive and effective in generating realistic and ap-propriate gestures for given speech. The code, pre-trained models,and demos are available at this URL.CCS CONCEPTS•Human-centered computing →Human computer interac-tion (HCI) ;•Computing methodologies →Motion processing ;Neural networks .∗Both authors contributed equally to this research.†Corresponding authorPermission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than ACMmust be honored. Abstracting with credit is permitted. To copy otherwise, or republish,to post on servers or to redistribute to lists, requires prior specific permission and/or afee. Request permissions from permissions@acm.org.ICMI ’23, October 09-13 , 2023, Paris, France©2023 Association for Computing Machinery.ACM ISBN 978-1-4503-XXXX-X/18/06. . . $15.00https://doi.org/XXXXXXX.XXXXXXXKEYWORDSgesture generation, diffusion-based model, conversation gestureACM Reference Format:Sicheng Yang, Haiwei Xue, Zhiyong Wu, Minglei Li, Zonghong Dai, Zhen-song Zhang, Songcen Xu, and Xiaofei Wu. 2023. The DiffuseStyleGesture+entry to the GENEA Challenge 2023. In Proceedings of ACM InternationalConference on Multimodal Interaction (ICMI ’23). ACM, New York, NY, USA,7 pages. https://doi.org/XXXXXXX.XXXXXXX1 INTRODUCTIONNon-verbal behaviors, particularly gestures, act a crucial role in ourcommunication [ 24]. They provide the necessary spark to animaterobotic interfaces, encapsulate diverse functional information, andsubtly deliver social cues. 
We can create more engaging, informative, and socially adept robotic systems by incorporating these behaviors, and gestures enrich communication with non-verbal nuances [24, 39]. Indeed, natural conversations often incorporate body gestures, and their absence can lead to perceptions of dullness or unnaturalness. Individuals use gestures to express ideas and feelings, either directly or indirectly. For instance, forming a circle with the thumb and forefinger, an open palm gesture, communicates the concept of "OK" [32].

3D gesture generation has drawn much attention in the community. Early studies leveraged unimodal inputs: Hasegawa et al. [10] employ audio features to drive gesture synthesis via Bi-LSTMs, and some works incorporate GANs and VAEs to learn relevant pairs and improve synthesis quality [19, 26, 34]. However, these methods encountered challenges such as gesture diversity and training difficulties. On the other hand, some works also explored the textual modality, with Chiu et al. [6] introducing the DCNF model combining speech, textual content, and prosody, and Yoon et al. [38] proposing an Encoder-Decoder framework. Liang et al. [20] introduce SEmantic Energized Generation (SEEG), a novel approach that excels at semantic-aware gesture generation. Recently, multimodal methods [1, 9, 35, 37] integrating both audio and text have gained attention, focusing on semantic feature encoding and long-sequence modeling of 3D human motion. Further, many works begin to pay attention to the speaker's identity [21, 22], style [8, 33], emotion [25, 36], etc. Despite significant advances, gesture generation using a comprehensive multimodal approach remains challenging, mainly due to the inherent trade-off between quality and diversity [33].

Recently, diffusion models [11] have shown great potential for generating motions [7, 29, 41], achieving high-quality outputs while maintaining diversity. Hence, in this gesture generation challenge, we attempt to apply diffusion models to tackle the problem of multimodal gesture generation.

Inspired by [33], we find that the diffusion model-based approach for co-speech gesture generation surpasses other deep generative models of motion in terms of quality and alignment with speech, while allowing for the generation of stylized and diverse gestures. In this paper, we incorporate the textual modality using the DiffuseStyleGesture framework and restructure the architecture. Furthermore, we also refine the representations of gesture and audio in alignment with the challenge dataset. These enhancements allow the model to generate high-quality, speech-aligned, speaker-specific stylized, and diverse gestures with significant controllability. We submitted our system to the GENEA Challenge 2023 [16], which aims to consolidate and compare various methods for co-speech gesture generation and evaluation, promoting the development of non-verbal behavior generation and its evaluation via a large-scale user study involving a common dataset and virtual agent.

The main contributions of our paper are: (1) We propose DiffuseStyleGesture+, a multimodal-driven gesture generation model with an improved input network structure, input modalities and feature representations, as well as a diffusion model with cross-local attention. (2) The evaluation of the GENEA Challenge demonstrates that our model is among the first tier at human-likeness and appropriateness for the interlocutor, and achieves competitive performance on appropriateness for agent speech.
(3) The ablation study validates the effectiveness of our proposed denoising module. Besides, we discuss the stylization and diversity of the generated gestures, as well as further technical details.

2 METHOD
Our method is based on DiffuseStyleGesture [33], a recent diffusion model-based speech-driven gesture generation approach. Besides seed gesture, audio, and speaker ID, we also take text as an additional input modality. The overview of this work is shown in Figure 1.

Figure 1: (Top) Denoising module. A noising step $t_d$ and a noisy gesture sequence $x_{t_d}$ at this noising step, conditioned on $c$ (including seed gesture, audio, speaker ID, and text), are fed into the model. (Bottom) Sample module. At each noising step $t_d$, we predict $\hat{x}_0$ with the denoising process, then add noise back to obtain $x_{t_d-1}$ with the diffuse process. This process is repeated from $t_d = T_d$ until $t_d = 0$.

2.1 Feature Extraction
We extract the features of the input modalities as follows:
•Gesture: We used 62 joints including the fingers, and each frame represents the motion features in terms of position, velocity, acceleration, rotation matrix, rotational angular velocity, and rotational angular acceleration of each joint. Although there are certain relations between positions, velocities, accelerations, etc., which can be transformed into each other, representing motion features with more motion data can lead to better performance [8, 40]. We denote the natural mocap gesture clip as $x_0 \in \mathbb{R}^{(N_{\mathrm{seed}}+N) \times [62 \times (9+3) \times 3]}$. The first $N_{\mathrm{seed}}$ frames of the gesture clip $x_0$ are used as the seed gesture, and the remaining $N$ frames are what the model needs to predict based on text and audio.
•Audio: More speech features also lead to better performance [4, 15]. Different representations can complement each other; e.g., representations such as pitch contain rhythmic content, pre-trained model features such as WavLM [5] contain more complex information such as emotion, and Onsets contain beat information. We combine MFCC, Mel Spectrum, Pitch, Energy [39], WavLM [5], and Onsets [2] as audio features. We denote the features of an audio clip as $A \in \mathbb{R}^{N \times (40+64+2+2+1024+1)}$.
•Speaker ID: The ID of the speaker is represented as a one-hot vector where only the element of the selected ID is nonzero. The Talking With Hands dataset has a total of 17 speakers, so the dimension of the speaker ID is 17.
•Text: Following [39], we use FastText [3] to obtain the 300-D word embeddings. One bit indicates whether there is a laugh or not, and the last bit is set to 0 as in [4]. Each word is mapped to its pre-trained word embedding at word-level granularity, so the features of a text clip are $T \in \mathbb{R}^{N \times 302}$ (a short sketch of this word-to-frame mapping is given below).
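Because the model's inputs are frame-aligned, each word-level embedding has to be repeated over that word's duration to obtain the frame-level matrix T. The sketch below illustrates only that mapping: the words, timings, and 4-D vectors are toy stand-ins for the 300-D FastText embeddings (plus the extra laughter/padding bits), and none of the names come from the authors' code.

```python
# Toy sketch: expand word-level embeddings to frame-level features using word timings.
import numpy as np

def words_to_frames(words, embeddings, fps=30, total_frames=90):
    """words: list of (word, start_sec, end_sec); embeddings: dict word -> vector."""
    dim = len(next(iter(embeddings.values())))
    feats = np.zeros((total_frames, dim), dtype=np.float32)
    for word, start, end in words:
        s, e = int(round(start * fps)), int(round(end * fps))
        feats[s:min(e, total_frames)] = embeddings[word]   # repeat the vector over the word's duration
    return feats

emb = {"i": np.array([0.1, 0.0, 0.2, 0.3]), "have": np.array([0.4, 0.1, 0.0, 0.2]),
       "a": np.array([0.0, 0.0, 0.1, 0.0]), "book": np.array([0.7, 0.2, 0.5, 0.1])}
words = [("i", 0.0, 0.3), ("have", 0.3, 0.7), ("a", 0.7, 0.8), ("book", 0.8, 1.4)]
print(words_to_frames(words, emb).shape)   # (90, 4)
```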
Gestures and music-driven dance generation [ 28,30,42] arealso different. Gestures and semantics are also temporally related,for example, the hand opens when saying ’big’. As in [ 4,37], weuse frame-level aligned word vectors T.Our goal is to synthesize a high-quality and speech-matchedhuman gesture ˆxof lengthNgiven conditions cusing the diffusionmodel [ 11]. Following [ 29], we predict the signal itself instead ofpredicting noise at each noising step td. As shown in the top ofFigure 1, the Denoising module reconstructs the original gesturex0from the pure noise xt, noising step tdand conditions c.ˆx0=Denoisextd,td,c(1)wherec=[S,D,A,T]. During training, noising step tdis sampledfrom a uniform distribution of {1,2,...,T d}, with the position en-coding [ 31].xtdis the noisy gesture with the same dimension asthe real gesture x0obtained by sampling from the standard normaldistributionN(0,I).We add the information of the noising step Tdand speaker IDSto form Zand replicate and stack them into a sequence featureof lengthNseed+N. The overall attention mechanism is similar to[33], using cross-local attention [ 27], self-attention [ 31] and relativeposition encoding (RPE) [ 14]. The difference is that we conditionDin the firstNseed frames and AandTin the lastNframes, sothat the smooth transition between segments is considered in thefirstNseed frames and the corresponding gestures are generatedin the lastNframes based on audio and text, which reduce theredundancy of inputs.Then the Denoising module is trained by optimizing the Huberloss [ 12] between the generated gestures ˆx0and the real humangesturesx0:L=Ex0∼q(x0|c),td∼[1,Td][HuberLoss(x0−ˆx0)] (2)2.3 Gesture SamplingAs shown in the bottom of Figure 1, when sampling, the initial noisygesturexTis sampled from the standard normal distribution andthe otherxtd,td<Tdis the result of the previous noising step. Thefinal gesture is given by splicing a number of clips of length N. Theseed gesture for the first clip is a gesture from the dataset. Then theseed gesture for other clips is the last Nseed frames of the gesturegenerated in the previous clip. For every clip, in every noisingsteptd, we predict the clean gesture ˆx0using Equation (1) and addGaussian noise to the noising step xtd−1with the diffuse process[11]. This process is repeated from td=Tduntilx0is reached.3 EXPERIMENT3.1 Experiment SettingWe trained on all the data in the GENEA Challenge 2023 [ 16] train-ing dataset, which is based on Talking With Hands [ 18]. In thiswork, gesture data are cropped to a length of 150 frames (5 seconds,30 fps), with the first Nseed=30frames as seed gesture, and the lastN=120frames to calculate the loss between generated gesturesand real gestures in Equation (2). We use standard normalization(zero mean and unit variant) to all joint feature dimensions. TheHuman-likeness ratingNA SG SF SJ SL SE SH BD SD BM SI SK SA SB SC020406080100Figure 2: Box plot visualising the ratings distribution in thehuman-likeness study. Red bars are the median ratings (eachwith a 0.05 confidence interval); yellow diamonds are meanratings (also with a 0.05 confidence interval). Box edges are at25 and 75 percentiles, while whiskers cover 95% of all ratingsfor each condition. Conditions are ordered by descendingsample median rating.latent dimension of the attention-based encoder is 512. The cross-local attention networks use 8 heads, 48 attention channels, thewindow size is 15 frames (0.5 second), each window looks at theone in front of it, and with a dropout of 0.1. 
3 EXPERIMENT
3.1 Experiment Setting
We trained on all the data in the GENEA Challenge 2023 [16] training dataset, which is based on Talking With Hands [18]. In this work, gesture data are cropped to a length of 150 frames (5 seconds, 30 fps), with the first Nseed = 30 frames as seed gesture, and the last N = 120 frames used to calculate the loss between generated gestures and real gestures in Equation (2). We apply standard normalization (zero mean and unit variance) to all joint feature dimensions. The latent dimension of the attention-based encoder is 512. The cross-local attention networks use 8 heads, 48 attention channels, a window size of 15 frames (0.5 seconds), with each window looking at the one in front of it, and a dropout of 0.1. The self-attention networks are composed of 8 layers with 8 heads and a dropout of 0.1. The AdamW [23] optimizer (learning rate 3 × 10−5) is used with a batch size of 200 for 1,200,000 samples. Our models have been trained with Td = 1000 noising steps and a cosine noise schedule. The whole framework can be learned in about 132 hours on one NVIDIA V100 GPU.

Figure 2: Box plot visualising the ratings distribution in the human-likeness study. Red bars are the median ratings (each with a 0.05 confidence interval); yellow diamonds are mean ratings (also with a 0.05 confidence interval). Box edges are at 25 and 75 percentiles, while whiskers cover 95% of all ratings for each condition. Conditions are ordered by descending sample median rating.

3.2 Evaluation Setting
The challenge organizers conducted a detailed evaluation comparing all submitted systems [16]. Three aspects were evaluated: human-likeness, appropriateness for agent speech, and appropriateness for the interlocutor. We refer the reader to [16] for more details on the evaluation. The following abbreviations are used to denote each model in the evaluation:
•NA: Natural mocap ('NA' for 'natural').
•BM: The official monadic baseline [4], a model based on Tacotron 2 that takes information (WAV audio, TSV transcriptions, and speaker ID) from the main agent as input ('B' for 'baseline', 'M' for 'monadic').
•BD: The official dyadic baseline [4], which also takes information from the interlocutor in the conversation into account when generating gestures ('D' for 'dyadic').
•SA–SL: 12 submissions (ours is SF) to the final evaluation ('S' for a submission).

3.3 Evaluation Analysis
3.3.1 Human-likeness. As for human-likeness, participants were asked "Please indicate on a sliding scale how human-like the gesture motion appears". The rating scale from 100 (best) to 0 (worst) is anchored by partitioning the sliders into five equal-length intervals labeled "Excellent", "Good", "Fair", "Poor", and "Bad". Bar plots and significance comparisons are shown in Figure 2. The median of our system (SF) was 65 ∈ [64, 67] and the mean was 63.6 ± 1.3, and the human-likeness was not significantly different from system SG [16]. This result shows that our model can generate very high-quality gestures, although somewhat lower than natural mocap, which has a median of 71 ∈ [70, 71] and a mean of 68.4 ± 1.0.

Figure 3: Significant differences between conditions in the two appropriateness studies: (a) appropriateness for agent speech; (b) appropriateness for the interlocutor. White means the condition listed on the y-axis achieved a mean appropriateness score significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α = 0.05 after correction for the false discovery rate.

3.3.2 Appropriateness for agent speech. In terms of appropriateness for agent speech, participants were asked "Which character's motion matches the speech better, both in terms of rhythm and intonation and in terms of meaning?" Five response options were available: "Left is clearly better", "Left is slightly better", "They are equal", "Right is slightly better", and "Right is clearly better".
The mean appropriateness scores (MAS) of the submitted systems are close to each other, so we report significant differences as shown in Figure 3(a). Our system (SF) has a MAS of 0.20 ± 0.06 and a Pref. matched (which identifies how often test-takers preferred matched motion in terms of appropriateness) of 55.8%, which is significantly better than the submitted systems SH, SL, and SC. However, it has significant deficiencies compared with natural mocap (NA), which has a MAS of 0.81 ± 0.06 and a Pref. matched of 73.6%, and with SG.

Table 1: Ablation studies results. '+' indicates additional modules and ↔ indicates the length of the modality in the time dimension. Bold indicates the best metric.

Name                                                          | FGD on feature space ↓ | FGD on raw data space ↓
Ours                                                          | 14.461                 | 531.172
+ Seed gesture ↔ N + Speech ↔ Nseed (DiffuseStyleGesture [33]) | 19.017                 | 767.503
+ Seed gesture ↔ (N + Nseed)                                  | 15.539                 | 616.437

3.3.3 Appropriateness for the interlocutor. Additionally, an interlocutor who converses with the main agent is added to this user interface for scoring. Please refer to [16] for more details. As for appropriateness for the interlocutor, participants were asked "In which of the two videos is the Main Agent's motion better suited for the interaction?". The response options were the same as before, i.e., "Left is clearly better", "Left is slightly better", "They are equal", "Right is slightly better", and "Right is clearly better". We also report significant differences, as shown in Figure 3(b). Natural mocap (NA), with a MAS of 0.63 ± 0.08 and a Pref. matched of 69.8%, is significantly more appropriate for the interlocutor compared to all other conditions. Our system (SF), with a MAS of 0.04 ± 0.06 and a Pref. matched of 51.5%, is significantly more appropriate than conditions SG and SH, and not significantly different from the other conditions. Our system does not use interlocutor information and (as expected) is not significantly different from chance.

3.4 Ablation Studies
Moreover, we conduct ablation studies to assess the performance effects of different architectures in our model. We use the Fréchet gesture distance (FGD) [37] as the objective evaluation metric, which is currently the closest to human perception among all objective evaluation metrics [17]. The lower the FGD, the better. The FGD is computed using the autoencoder provided by the challenge organizers. Our ablation studies, as summarized in Table 1, indicate that when the input of [33] is used (the information of seed gestures and speech is given directly over the full length of a training sample), both metrics perform worse; when additional seed gestures are given over the full length of a training sample in our model, both metrics also become worse. The purpose of using seed gestures [33, 37] is to smooth the transition between generated segments, so they should not contain speech information and should only be considered at the beginning for consistency with the previously generated gestures. We also learn that although the diffusion model has the ability to learn useful information from redundant representations, careful design of the network structure of the denoising module can further improve performance.

3.5 Discussion
3.5.1 Takeaways. Our co-speech gesture generation model (SF), based on the diffusion model, exhibits comparable levels of human-likeness and appropriateness for the interlocutor when compared to the best performing models (SG, SA). Furthermore, it achieves competitive performance with the leading model (SG) in terms of appropriateness for agent speech. These findings suggest that our proposed model performs at a top-tier level.
Our model achieves good results due to the ability of the diffusion model to generate high-quality gestures and the local attention-based structure to generate gestures that correspond to the current short duration of speech. Notably, the diffusion-model basis makes it easy to generate diverse gestures, since the main part of the input is noise and any seed gesture can be set. Moreover, based on the structure of the diffusion model, we add random masks to the denoising module, which enables the interpolation and extrapolation of conditions such as speaker identity (style), and a high degree of control over the style intensity of the generated gestures. However, stylization and diversity are not included among the evaluation dimensions in the challenge.

3.5.2 Limitation. Our model does not consider the information of the interlocutor, and is accordingly not significantly different from a random selection in that evaluation. Taking into account information about the interlocutor is important in interaction, and this is a direction for future research. Moreover, pre-processing the data should make the results better. We do not do anything special with motions that do not include movement in the hand and still train with the hand data, which can lead to poorer hand results. For an exploration of the dataset and more discussion, please refer to the Appendix.

3.5.3 More Discussion. We also tried to add the BEAT [21] dataset (all of it / some of the speakers) to train together with Talking With Hands, but we got worse results; the model did not converge. We suspect the possible reason is that the BEAT dataset is very large, and the diffusion model needs more time to be trained well.

Although we did not consider interlocutors, in terms of appropriateness for the interlocutor, our system (SF) is significantly more appropriate than SG and SH, and not significantly different from the other conditions. It is worth noting that SG is the best-performing model on the first two dimensions of the evaluation. We suspect that the reason for this is related to the setting of the evaluation, since "segments should be more or less complete phrases" in the evaluation. However, the evaluation during silence is equally important, and the model should learn the behavior from the data when not talking, such as idling and other small gestures, and no other unexpected actions. Although we did not consider the information of interlocutors, it is impressive that our model is able to remain idle while the other person is talking (i.e., when the main agent is not talking).

The diffusion model takes a long time to train and to run at inference. The evaluation was performed using 8-10 seconds of speech, and longer speech evaluation results may be more consistent with human perception. When the number of participants in the speech appropriateness evaluation was 448, there was no difference between our system (SF) and SG; when the number of participants in the evaluation was increased to 600, SG was significantly better than all of the submitted systems, which suggests the differences
between the two systems were relatively small and did not become statistically significant, after FDR correction, until a large number of subjects had been recruited and evaluated.

Figure 4: Case study of generated gestures; the right side of each figure shows the generated gestures. (a) A gesture indicating largeness. (b) A pointing gesture. (c) A thinking gesture.

3.5.4 Case Study. Our diffusion-based method can extract semantic information and generate human-like gestures. For instance, when the speaker says "large", our system generates a gesture indicating largeness. When the speaker asks "Where do you stay?", our system generates a pointing gesture, mimicking human behavior.

Our diffusion-based models can also generate incidental actions for laughter and surprise. For example, when the speaker laughs, the model generates a body shake, mimicking human laughter. When the speaker is thinking, the model generates a corresponding thinking action. This suggests that diffusion-based models can learn semantics and synthesize semantic actions in specific situations.

4 CONCLUSION
In this paper, we propose DiffuseStyleGesture+, a diffusion model-based method for speech-driven co-speech gesture generation. Based on the DiffuseStyleGesture framework, we add the text modality, design the input architecture of the modalities more logically, and tune the representations of gesture and audio according to the challenge dataset, so as to generate high-quality, speech-matched, speaker-specific stylized, and diverse gestures that are highly controllable on these conditions. The proposed model is in the first tier in human-likeness and appropriateness for the interlocutor, with no significant difference from the best model, and achieves competitive performance with the best model on appropriateness for agent speech, showing the effectiveness of the proposed method. However, compared with natural mocap, there is still much room for improvement worth further exploration.

ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China (62076144), the Shenzhen Science and Technology Program (WDZC20200818121348001) and the Shenzhen Key Laboratory of next generation interactive media innovative technology (ZDSYS20210623092001004).

REFERENCES
[1] Chaitanya Ahuja, Dong Won Lee, and Louis-Philippe Morency. 2022. Low-resource adaptation for personalized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 20566–20576.
[2] Juan Pablo Bello, Laurent Daudet, Samer Abdallah, Chris Duxbury, Mike Davies, and Mark B Sandler. 2005. A tutorial on onset detection in music signals. IEEE Transactions on Speech and Audio Processing 13, 5 (2005), 1035–1047.
[3] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5 (2017), 135–146.
[4] Che-Jui Chang, Sen Zhang, and Mubbasir Kapadia. 2022. The IVI Lab entry to the GENEA Challenge 2022 – A Tacotron2 based method for co-speech gesture generation with locality-constraint attention mechanism. In Proceedings of the 2022 International Conference on Multimodal Interaction. 784–789.
[5] Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, et al. 2022. Wavlm: Large-scale self-supervised pre-training for full stack speech processing. IEEE Journal of Selected Topics in Signal Processing 16, 6 (2022), 1505–1518.
[6] Chung-Cheng Chiu, Louis-Philippe Morency, and Stacy Marsella. 2015. Predicting co-verbal gestures: A deep and temporal modeling approach. In Intelligent Virtual Agents: 15th International Conference, IVA 2015, Delft, The Netherlands, August 26-28, 2015, Proceedings 15. Springer, 152–166.
[7] Rishabh Dabral, Muhammad Hamza Mughal, Vladislav Golyanik, and Christian Theobalt. 2023. Mofusion: A framework for denoising-diffusion-based motion synthesis.
In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition . 9760–9770.[8]Saeed Ghorbani, Ylva Ferstl, Daniel Holden, Nikolaus F Troje, and Marc-AndréCarbonneau. 2023. ZeroEGGS: Zero-shot Example-based Gesture Generationfrom Speech. In Computer Graphics Forum , Vol. 42. Wiley Online Library, 206–216.[9]Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng.2022. Generating diverse and natural 3d human motions from text. In Proceedingsof the IEEE/CVF Conference on Computer Vision and Pattern Recognition . 5152–5161.[10] Dai Hasegawa, Naoshi Kaneko, Shinichi Shirakawa, Hiroshi Sakuta, and KazuhikoSumi. 2018. Evaluation of speech-to-gesture generation using bi-directional LSTMnetwork. In Proceedings of the 18th International Conference on Intelligent VirtualAgents . 79–86.[11] Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilisticmodels. Advances in Neural Information Processing Systems 33 (2020), 6840–6851.[12] Peter J Huber. 1992. Robust estimation of a location parameter. In Breakthroughsin statistics . 492–518.[13] Jihoon Kim, Jiseob Kim, and Sungjoon Choi. 2022. FLAME: Free-form Language-based Motion Synthesis & Editing. CoRR abs/2209.00349 (2022). https://doi.org/10.48550/arXiv.2209.00349 arXiv:2209.00349[14] Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficienttransformer. arXiv preprint arXiv:2001.04451 (2020).[15] Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, and HedvigKjellström. 2019. Analyzing input and output representations for speech-drivengesture generation. In Proceedings of the 19th ACM International Conference onIntelligent Virtual Agents . 97–104.[16] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. InProceedings of the ACM International Conference on Multimodal Interaction (ICMI’23). ACM.[17] Taras Kucherenko, Pieter Wolfert, Youngwoo Yoon, Carla Viegas, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2023. Evaluating gesture-generationin a large-scale open challenge: The GENEA Challenge 2022. arXiv preprintarXiv:2303.08737 (2023).[18] Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha S Srinivasa,and Yaser Sheikh. 2019. Talking with hands 16.2 m: A large-scale dataset of syn-chronized body-finger motion and audio for conversational motion analysis andsynthesis. In Proceedings of the IEEE/CVF International Conference on ComputerVision . 763–772.[19] Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, and LinchaoBao. 2021. Audio2gestures: Generating diverse gestures from speech audio withconditional variational autoencoders. In Proceedings of the IEEE/CVF InternationalConference on Computer Vision . 11293–11302.[20] Yuanzhi Liang, Qianyu Feng, Linchao Zhu, Li Hu, Pan Pan, and Yi Yang. 2022.Seeg: Semantic energized co-speech gesture generation. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition . 10473–10482.[21] Haiyang Liu, Zihao Zhu, Naoya Iwamoto, Yichen Peng, Zhengqing Li, You Zhou,Elif Bozkurt, and Bo Zheng. 2022. Beat: A large-scale semantic and emotionalmulti-modal dataset for conversational gestures synthesis. In European Conferenceon Computer Vision . Springer, 612–630.[22] Xian Liu, Qianyi Wu, Hang Zhou, Yinghao Xu, Rui Qian, Xinyi Lin, Xiaowei Zhou,Wayne Wu, Bo Dai, and Bolei Zhou. 2022. 
Learning hierarchical cross-modalassociation for co-speech gesture generation. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition . 10462–10472.[23] Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization.In7th International Conference on Learning Representations, ICLR 2019, New Or-leans, LA, USA, May 6-9, 2019 . OpenReview.net. https://openreview.net/forum?id=Bkg6RiCqY7[24] Simbarashe Nyatsanga, Taras Kucherenko, Chaitanya Ahuja, Gustav Eje Henter,and Michael Neff. 2023. A Comprehensive Review of Data-Driven Co-SpeechGesture Generation. In Computer Graphics Forum , Vol. 42. Wiley Online Library,569–596.[25] Xingqun Qi, Chen Liu, Lincheng Li, Jie Hou, Haoran Xin, and Xin Yu. 2023. Emo-tionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation.arXiv preprint arXiv:2305.18891 (2023).[26] Manuel Rebol, Christian Güti, and Krzysztof Pietroszek. 2021. Passing a non-verbal turing test: Evaluating gesture animations generated from speech. In 2021IEEE Virtual Reality and 3D User Interfaces (VR) . IEEE, 573–581.[27] Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2021. Effi-cient content-based sparse attention with routing transformers. Transactions ofthe Association for Computational Linguistics 9 (2021), 53–68.[28] Li Siyao, Weijiang Yu, Tianpei Gu, Chunze Lin, Quan Wang, Chen Qian,Chen Change Loy, and Ziwei Liu. 2022. Bailando: 3d dance generation by actor-critic gpt with choreographic memory. In Proceedings of the IEEE/CVF Conferenceon Computer Vision and Pattern Recognition . 11050–11059.[29] Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, andAmit Haim Bermano. 2023. Human Motion Diffusion Model. In The EleventhInternational Conference on Learning Representations, ICLR 2023, Kigali, Rwanda,May 1-5, 2023 . OpenReview.net. https://openreview.net/pdf?id=SJ1kSyO2jwu[30] Jonathan Tseng, Rodrigo Castellon, and Karen Liu. 2023. Edge: Editable dancegeneration from music. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition . 448–458.[31] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is allyou need. Advances in neural information processing systems 30 (2017).[32] Pieter Wolfert, Nicole Robinson, and Tony Belpaeme. 2022. A review of evalu-ation practices of gesture generation in embodied conversational agents. IEEETransactions on Human-Machine Systems 52, 3 (2022), 379–389.[33] Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, WeihongBao, Ming Cheng, and Long Xiao. 2023. DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models. arXiv preprintarXiv:2305.04919 (2023).[34] Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao,and Haolin Zhuang. 2023. QPGesture: Quantization-Based and Phase-GuidedMotion Matching for Natural Speech-Driven Gesture Generation. In IEEE/CVFConference on Computer Vision and Pattern Recognition, CVPR . IEEE, 2321–2330.[35] Sicheng Yang, Zhiyong Wu, Minglei Li, Mengchen Zhao, Jiuxin Lin, Liyang Chen,and Weihong Bao. 2022. The ReprGesture Entry to the GENEA Challenge 2022.InProceedings of the 2022 International Conference on Multimodal Interaction(Bengaluru, India) (ICMI ’22) . Association for Computing Machinery, New York,NY, USA, 758–763. https://doi.org/10.1145/3536221.3558066[36] Lianying Yin, Yijun Wang, Tianyu He, Jinming Liu, Wei Zhao, Bohan Li, Xin Jin,and Jianxin Lin. 2023. 
EMoG: Synthesizing Emotive Co-speech 3D Gesture withDiffusion Model. arXiv preprint arXiv:2306.11496 (2023).[37] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim,and Geehyuk Lee. 2020. Speech gesture generation from the trimodal contextof text, audio, and speaker identity. ACM Transactions on Graphics (TOG) 39, 6(2020), 1–16.[38] Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and GeehyukLee. 2019. Robots learn social skills: End-to-end learning of co-speech gesturegeneration for humanoid robots. In 2019 International Conference on Robotics andAutomation (ICRA) . IEEE, 4303–4309.[39] Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2022. The GENEA Challenge 2022: Alarge evaluation of data-driven co-speech gesture generation. In Proceedings ofthe 2022 International Conference on Multimodal Interaction . 736–747.[40] He Zhang, Sebastian Starke, Taku Komura, and Jun Saito. 2018. Mode-adaptiveneural networks for quadruped motion control. ACM Transactions on Graphics(TOG) 37, 4 (2018), 1–11.[41] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, LeiYang, and Ziwei Liu. 2022. Motiondiffuse: Text-driven human motion generationwith diffusion model. arXiv preprint arXiv:2208.15001 (2022).[42] Haolin Zhuang, Shun Lei, Long Xiao, Weiqin Li, Liyang Chen, Sicheng Yang, Zhiy-ong Wu, Shiyin Kang, and Helen Meng. 2023. GTN-Bailando: Genre Consistentlong-Term 3D Dance Generation Based on Pre-Trained Genre Token Network. InICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and SignalProcessing (ICASSP) . 1–5. https://doi.org/10.1109/ICASSP49357.2023.10095203The DiffuseStyleGesture+ entry to the GENEA Challenge 2023 ICMI ’23, October 09-13 , 2023, Paris, France(a) The length of audiosin training dataset.(b) The length of audiosin validate dataset.(c) The length of audiosin testing dataset.Figure 5: GENEA Challenge 2023 dataset audio length analy-sis.(a) The number of wordsper sentence in the train-ing dataset.(b) The number of wordsper sentence in the vali-date dataset.(c) The number of wordsper sentence in the testingdataset.Figure 6: GENEA Challenge 2023 dataset text length (wordsper sentence) analysis.A APPENDIXA.1 Exploratory Data AnalysisThe GENEA Challenge 2023 provided 372 training data, 41 vali-dation data, and 70 test data. The training and validation datasetsinclude text, audio, and BVH motion capture files for both the mainagent and the interlocutor. The test data lacks the main agent’s BVHmotion capture file, which our system aims to predict. Metadatafiles with speaker identity information are also provided.A.1.1 Audio Analysis. As shown in Figure 5, the duration of thetraining data varies, ranging from less than 2 minutes to nearly 10minutes (9 minutes and 27 seconds). The validation and test setshave an average duration of about 1 minute. The total duration ofall datasets is approximately 20 hours and 49 minutes.A.1.2 Text Analysis. As shown in Figure 6, the maximum numberof tokens in a single piece of training data is 1135. The distributionof data is non-uniform across all types of datasets. Word-frequencystatistics were also performed. The three most common words inthe dataset are ’like’, ’I’, and ’Yeah’, each used nearly 10,000 times.Laughing is marked with ’#’, while other emotions such as surprise,silence and other states are not marked.A.1.3 Gesture Analysis. As shown in Figure 7, we identified severalissues with the original dataset. 
Most notably, the upper body of most human figures appears to recede (tilt back), especially in side views. Many speakers exhibit unnecessary foot movement. Some datasets also contain severe bone position errors.

Figure 7: Some possible problems with the dataset. Better performance may be obtained if the data is preprocessed.

A.2 Appropriateness Studies
The bar plots for the appropriateness analysis are shown in Figure 8. In terms of appropriateness for agent speech, SG was significantly higher than our system (SF); in terms of appropriateness for the interlocutor, all systems were not significantly different from random results, or even inferior to random selection. Overall, all systems are less appropriate than natural mocap (NA).

Figure 8: Bar plots visualizing the response distribution in the appropriateness studies: (a) appropriateness for agent speech; (b) appropriateness for the interlocutor. The blue bar (bottom) represents responses where subjects preferred the matched motion, the light grey bar (middle) represents tied ("They are equal") responses, and the red bar (top) represents responses preferring mismatched motion, with the height of each bar being proportional to the fraction of responses in each category. Lighter colors correspond to slight preference, and darker colors to clear preference. On top of each bar is also a confidence interval for the mean appropriateness score, scaled to fit the current axes. The dotted black line indicates chance-level performance. Conditions are ordered by mean appropriateness score.
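To make the summary statistics used in these appropriateness studies concrete, the sketch below converts raw paired-comparison responses (coded 2, 1, 0, -1, -2) into a mean appropriateness score and a preference-matched percentage. This is not the official GENEA analysis code: the response counts are illustrative placeholders, and treating ties as half a matched response is an assumption rather than a documented definition.

```python
# Minimal sketch: summary statistics for paired-comparison appropriateness responses.
# Codes: 2/1 = clear/slight preference for matched motion, 0 = no preference,
# -1/-2 = slight/clear preference for mismatched motion.
from collections import Counter

def appropriateness_summary(responses):
    """Return (mean appropriateness score, preference-matched fraction)."""
    counts = Counter(responses)
    total = sum(counts.values())
    mas = sum(score * n for score, n in counts.items()) / total
    matched = counts[2] + counts[1]
    ties = counts[0]
    # Assumption: ties are split evenly between matched and mismatched.
    pref_matched = (matched + 0.5 * ties) / total
    return mas, pref_matched

# Illustrative placeholder responses, not values from the actual user study.
example = [2] * 120 + [1] * 150 + [0] * 90 + [-1] * 100 + [-2] * 40
mas, pref = appropriateness_summary(example)
print(f"MAS = {mas:.2f}, preference matched = {100 * pref:.1f}%")
```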
Mm44wlJICIj | The KU-ISPL entry to the GENEA Challenge 2023-A DiffusionModel for Co-speech Gesture generationGwantae Kimkgt1103211@korea.ac.krKorea UniversitySeoul, South KoreaYuanming Lilym7499500@korea.ac.krKorea UniversitySeoul, South KoreaHanseok Kohsko@korea.ac.krKorea UniversitySeoul, South KoreaABSTRACTThis paper describes a diffusion model for co-speech gesture genera-tion presented by KU-ISPL entry of the GENEA Challenge 2023. Weformulate the gesture generation problem as a co-speech gesturegeneration problem and a semantic gesture generation problem,and we focus on solving the co-speech gesture generation prob-lem by denoising diffusion probabilistic model with text, audio,and pre-pose conditions. We use the U-Net with cross-attentionarchitecture as a denoising model, and we propose a gesture au-toencoder as a mapping function from the gesture domain to thelatent domain. The collective evaluation released by GENEA Chal-lenge 2023 shows that our model successfully generates co-speechgestures. Our system receives a mean human-likeness score of 32.0,a preference-matched score of appropriateness for the main agentspeech of 53.6%, and an interlocutor speech appropriateness scoreof 53.5%. We also conduct an ablation study to measure the effects ofthe pre-pose. By the results, our system contributes to the co-speechgesture generation for natural interaction.CCS CONCEPTS•Computing methodologies →Animation ;•Human-centeredcomputing→Human computer interaction (HCI) .KEYWORDSGENEA Challenge, co-speech gesture generation, diffusion, neuralnetworks, generative modelsACM Reference Format:Gwantae Kim, Yuanming Li, and Hanseok Ko. 2023. The KU-ISPL entryto the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesturegeneration. In Proceedings of 25th ACM International Conference on Mul-timodal Interaction (ICMI’23). ACM, New York, NY, USA, 8 pages. https://doi.org/XXXXXXX.XXXXXXX1 INTRODUCTIONSynthesizing synchronized and human-like gestures performs cru-cial roles to improve immersion, engagement, and naturalness forembodied virtual agents and humanoid robots. During the human-computer interaction(HCI) process, human uses both verbal andPermission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than ACMmust be honored. Abstracting with credit is permitted. To copy otherwise, or republish,to post on servers or to redistribute to lists, requires prior specific permission and/or afee. Request permissions from permissions@acm.org.ICMI’23, October 09–13, 2023, Paris, France©2023 Association for Computing Machinery.ACM ISBN 978-1-4503-XXXX-X/18/06. . . $15.00https://doi.org/XXXXXXX.XXXXXXXnon-verbal expressions to provide their intent to the interlocutor.Gesture generation, which is one of the main challenges for non-verbal interaction, aims to synthesize natural-looking and mean-ingful human gestures. The task can be separated whether verbalexpression exists or not. When verbal expressions, such as audioor text, are given, the gesture generation model focuses on makinggestures that emphasize the meaning of verbal expressions. In theother case, the model should generate gestures that deliver theintent whether verbal expressions are given or not. 
We define thetask with verbal information as co-speech gesture generation andthe task that focuses on synthesizing meaningful body motions thatdeliver intent as semantic gesture generation. In this research, wefocus on generating high-fidelity co-speech gestures.There are many challenges for the co-speech gesture generation.The first is timing synchronization. Since the speech and gesturesare shown to the interlocutor sequentially, he or she will be con-fused if gestures depart from speech. For example, if the start andend timing of the gestures slightly differs from speech, the userswill think that it is an implemental error. A more detrimental situa-tion is traffic jams during continuous generation. Once the timingis out of sync, the timing between speech and gestures is continu-ally departed and the discomfort will be gradually increased. Withsimilar thinking, semantic synchronization, which is the secondchallenge, is also important to deliver proper intent. For example,when people say "I disagree." by nodding, the interlocutor will beconfused that it is positive or negative.The third obstacle is noise robustness. 3D pose estimation or mo-tion capture is utilized to acquire gesture data. However, the qualityof raw data obtained by 3D pose estimation is not enough becausethe algorithm is basically image-to-3D reconstruction, which is aone-to-many problem. The motion capture is better, but it is too ex-pensive and time-consuming. To secure quality, the cost is increasedexponentially. Therefore, the raw data may contain noise. Sincetraining with noisy data hurts both quantitative and qualitativeperformance, a workaround such as pre-processing or noise-robusttraining is needed.To tackle these problems, deep learning-based approaches havebeen applied to generating co-speech gestures, recently. There arethree types of training strategies: reconstruction-based method[ 15,18,34], generative adversarial network(GAN)[ 8] based method[ 25,33], and diffusion[ 7,12] based methods[ 3,5,38]. The reconstruction-based co-speech gesture generation methods directly estimate ges-tures from text or audio. Although the methods induce reasonableresults in terms of joint error, disadvantages are seen in terms ofdiversity. To generate various results without quantitative perfor-mance degradation, GAN-based co-speech gesture generation mod-els are trained by controlling the weight between reconstructionICMI’23, October 09–13, 2023, Paris, France Kim et al.loss and adversarial loss. Recently, denoising diffusion probabilis-tic models(DDPMs) are achieving huge success in the generativemodel and computer vision fields and expanding to other researchfields[ 14,24]. Especially, the diffusion model could synthesize vari-ous images that reflect input conditions, even if its semantic space islarge. Since the semantic space of the speech for co-speech gesturegeneration is large, the diffusion model may help to synthesizevarious and synchronized results. Therefore, the goal of the paperis to find a suitable diffusion model structure for co-speech gesturegeneration.In this paper, we propose a diffusion-based co-speech gesturegeneration method. We establish a gesture autoencoder to projectfrom gesture space to feature space and vice versa. The model wasconfigured to select suitable features according to the characteristicsof the gesture data. We also present how to deliver audio and textinformation to the diffusion model. 
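To make this overall design concrete before the detailed method sections, the schematic below sketches how such a latent-diffusion gesture generator can be wired together: gestures are mapped to a latent space, a conditional denoiser is driven by audio, text, and pre-pose embeddings, and the result is decoded back to poses. Every module, dimension, and the simplified denoising loop here is an illustrative placeholder, not the actual implementation.

```python
# Schematic sketch (placeholders only): latent-space diffusion conditioned on
# audio, text and pre-pose embeddings, decoded back to joint angles.
import torch
import torch.nn as nn

LATENT, COND, FRAMES, JOINTS = 128, 512, 128, 26

pose_encoder = nn.Linear(JOINTS * 3, LATENT)        # gesture space -> latent space
pose_decoder = nn.Linear(LATENT, JOINTS * 3)        # latent space -> gesture space
audio_proj = nn.Linear(26, COND)                    # e.g. per-frame audio features
text_proj = nn.Linear(1024, COND)                   # e.g. a sentence embedding
prepose_proj = nn.Linear(LATENT, COND)              # encoded previous poses
denoiser = nn.Linear(LATENT + COND, LATENT)         # stand-in for the denoising U-Net

@torch.no_grad()
def generate_segment(audio, text, pre_pose, steps=50):
    cond = audio_proj(audio) + text_proj(text) + prepose_proj(pose_encoder(pre_pose))
    z = torch.randn(FRAMES, LATENT)                 # reverse process starts from noise
    for _ in range(steps):                          # schematic denoising loop; a real
        eps_hat = denoiser(torch.cat([z, cond.expand(FRAMES, -1)], dim=-1))
        z = z - 0.1 * eps_hat                       # DDPM would use its posterior update
    return pose_decoder(z)                          # (FRAMES, JOINTS * 3) joint angles

poses = generate_segment(torch.randn(26), torch.randn(1024), torch.randn(JOINTS * 3))
```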
We use validated audio featuresand the pre-trained language model to provide rich features.The data and evaluations are provided by GENEA Challenge2023[ 20]. Thanks to the good-quality data, the noise robustnessproblem is under control and we can focus on the synchronizingproblems. The evaluations, which contain human likeness, andappropriateness for the main agent and interlocutor, are also well-formulated to measure the generation performance. The code isavailable here1.2 RELATED WORKS2.1 Co-speech gesture generationKucherenko et al. [ 18] proposed an autoencoder-style audio-to-gesture model with hidden representation learning. The methodfirst find hidden embedding space of gesture by autoencoder andnext train the audio encoder to find joint embedding space betweenaudio and gesture. Yoon et al. [ 34] trained the sequence-to-sequenceLSTM model to map text transcriptions to 2D co-speech gestures.Kim et al. [ 15] trained the transformer-based autoencoder withself-supervised pre-training. These approaches use reconstructionloss to optimize the model. Chang et al.[ 4] presented a locality con-straint attention-based gesture generation model, which is inspiredby Tacotron2. StyleGestures [ 1] uses the method of normalizingflow to generate gestures from speech. Audio2Gestures [ 22] syn-thesize gestures using a variational autoencoder. Yoon et al. [ 33]train the model with adversarial loss and reconstruction loss togenerate gestures from trimodal contexts. HA2G [ 25] adopts ahierarchical decoder to address the structural information of thejoint. Gesturemaster[ 37] used a rhythm embedding module, styleembedding module, motion graph construction, and graph-basedoptimization to extract features and generate gestures.2.2 Semantic gesture generationKim et al. [ 16] generates gestures with the semantics itself or ex-tracted from text. The method with an intent classifier emphasizesco-speech gesture generation. The co-speech gesture model is se-lected to generate gestures if the intent is unclear, else this method1https://github.com/GT-KIM/GENEA2023-KU-ISPLis used to synthesize gestures. SEEG [ 23] generates semantic en-ergized co-speech gestures with the semantic prompt gallery, se-mantic prompter, and semantic energized learning. Gesticulator[ 19]synchronizes between text and audio features in the encoding phaseand generates gestures by autoregression.2.3 Diffusion-based motion generationAlexanderson et al. [ 2] proposed conformer[ 10]-based diffusionmodels for gesture generation, dance synthesis, and path-drivenlocomotion. Zhu et al. [ 38] migrated the diffusion model to speech-driven co-speech gesture generation with diffusion gesture stabi-lizer and implicit classifier-free guidance. FLAME [ 17] generatesand edits human motion with the pre-trained language model andtransformer. Motiondiffuse[ 36] and MDM[ 30] also synthesize hu-man motions from text descriptions. [ 3] learns a gesture-transcriptjoint embedding space using contrastive learning. The learned em-beddings are incorporated into the diffusion model via an adaptiveinstance normalization layer. [ 5] synthesize motions by diffusionmodel using latent space. The motion representations are projectedinto latent space, diffused, and reconstructed to the original motionspace.3 CO-SPEECH GESTURE GENERATIONMODELFigure 1 depicts an overview of the proposed model to generatehigh-fidelity co-speech gestures. In this section, we first introducethe problem formulation of co-speech gesture generation (Section3.1). 
We propose the gesture autoencoder, which is designed toproject gesture space to feature space (Section 3.2). We then presentthe classifier-free guidance for applying speech conditions to co-speech gestures (Section 3.3). Furthermore, we establish the forwarddiffusion and the reverse conditional generation process in featurespace (Section 3.4).3.1 Problem FormulationThe co-speech gesture training data often consist of 3D pose se-quence x, audio a, text(sentence) s, and metadata. The generativemodel G parameterized by θis optimized to synthesize x, whichis further conditioned on the audio a, text s, and the pre-definedinitial poses x−1of the M frames. The learning objective of theproblem can be formulated as argminθ||x−Gθ(a,s,x−1)||.However, samples in the training data often have a long dura-tion. To reduce the computational cost and memory usage, everymodality of the sample is cropped into segments x={x1,...,xi},a={a1,...,ai}, and s={s1,...,si}, where xihas N frames andai,sihave the same time length as xi. Now the generative model Gestimates xifrom the audio ai, text si, and the M pose frames fromprevious segment x(N−M):Ni−1, instead of synthesizing xat once. Fi-nally, the generative model G synthesizes the gestures {x1,...,xi}continuously.The model is autoregressive because the poses generated by theprevious segment are used to synthesize the current segment, andstochastic because the initial diffusion feature map is random noise.The KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation ICMI’23, October 09–13, 2023, Paris, FranceFigure 1: Overview of the proposed diffusion-based co-speechgesture generation method. The model is autoregressive andprobabilistic. For the N-th generation, audio, text, and pre-poses are projected to the latent space and used to conditions.The initialized Gaussian noise is iteratively diffused by thereverse process. The output latent vector is reconstructed tothe gesture space by the decoder.3.2 Gesture AutoencoderIn the Stable Diffusion[ 28], the latent diffusion model provides flex-ible, computationally tractable, and sometimes achieving qualityimprovement. The gesture autoencoder focus on finding good latentembedding space projected from gesture space. The gesture autoen-coder consists of two autoencoder models: pose autoencoder andmotion autoencoder. Since the gesture is the sequential pose data,we design the pose autoencoder for projecting the raw pose spaceto latent space, and the motion autoencoder to find correlationsalong the time axis.The pose encoder and decoder consist of 3 fully-connected layerswith dropout[ 29] and GELU activation function[ 11] each. The inputposes sequence xN×3Jis projected to z′N×Dby the pose encoder,where z′denotes mid-level hidden representation, J is the number ofjoints, and D is the dimension of z′, and the pose decoder performsreverse projection. The pose autoencoder is first trained with L1reconstruction loss. Once the pose autoencoder is optimized, theparameters are frozen in the rest training stages such as diffusiontraining stage.The motion autoencoder aims to capture sequential informationof the data. Thus, the motion encoder and decoder consist of 3gated recurrent units(GRU) layers[ 6] and 3 multi-head self-attentionlayers[ 31], which have strong capacity in sequential data modeling.The motion encoder is formulatedz=MHSA(GRU(z′)) (1)whereMHSA(X)=Attention(X,X,X). 
The attention mechanism is

$\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d}}\right)\cdot V$   (2)

where $Q$, $K$, and $V$ are the query, key, and value from the feature matrix, $d$ is the channel dimension, and $T$ is the matrix transpose operation.

The mid-level hidden representation $z' \in \mathbb{R}^{N\times D}$ is projected to $z \in \mathbb{R}^{N\times D}$ by the motion encoder, where $z$ denotes the hidden representation in feature space, and the motion decoder performs the reverse projection. The motion autoencoder is individually trained with L1 reconstruction loss. The parameters of the motion autoencoder are also frozen after this training stage.

3.3 Conditioning
The diffusion models are theoretically capable of modeling the conditional distribution $p(z|y)$. This can be implemented with a conditional denoising autoencoder $\varepsilon_\theta(z_t, t, y)$, where $y \in \{a, s, z_{i-1}\}$, to address the generation process through inputs $y$. To combine conditional information and the latent vector in the U-Net backbone, we use a cross-attention mechanism, which is used in Stable Diffusion [28].

The three modalities, which are audio, text, and pre-pose, are used as conditions in the diffusion process. The pre-processed audio features, text features, and pre-pose features are projected to embedding vectors by fully-connected layers. These three embedding vectors are added to the time embedding vector and propagate the information of each modality to the denoising U-Net model.

3.4 Diffusion
DDPMs define latent variable models of the form $p_\theta(x_0) = \int p_\theta(x_{0:T})\, dx_{1:T}$, where $x_{1:T}$ are latent variables in the same sample space as $x_0$ with the same dimensionality.

The forward process, which is also called the diffusion process, approximates the posterior distribution $q(x_{1:T}|x_0)$ by the Markov chain that gradually adds Gaussian noise to the data according to the variance schedule $\beta_1, \ldots, \beta_T$:

$q(x_{1:T}|x_0) = \prod_{t=1}^{T} q(x_t|x_{t-1})$,   (3)

where

$q(x_t|x_{t-1}) = \mathcal{N}(x_t; \sqrt{1-\beta_t}\, x_{t-1}, \beta_t I)$.   (4)

The forward process variances $\beta_t$ can be learned by reparameterization or held constant as hyperparameters. Since our model uses the gesture autoencoder for mapping from pose to latent embeddings, the latent embeddings are gradually corrupted by noise, which finally leads to pure white noise when $T$ goes to infinity. Therefore, the prior latent distribution $p(x_T)$ is $\mathcal{N}(x_T; 0, I)$, carrying only the information of Gaussian noise.

The reverse process estimates the joint distribution $p_\theta(x_{0:T})$. It is defined as a Markov chain with learned Gaussian transitions starting at $\mathcal{N}(x_T; 0, I)$:

$p_\theta(x_{0:T}) = p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1}|x_t)$,   (5)

where

$p_\theta(x_{t-1}|x_t) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t))$.   (6)

The corrupted noisy latent embedding $x_t$ is sampled by $q(x_t|x_0) = \mathcal{N}(x_t; \sqrt{\bar{\alpha}_t}\, x_0, (1-\bar{\alpha}_t) I)$, where $\alpha_t = 1-\beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$.

Since the problem is co-speech gesture generation, which is a conditional generation problem, we have to provide the additional inputs $a$, $s$, and $z_{i-1}$ to the model. Therefore, these conditions are injected into the generation process. The reverse process of each timestep can be updated for our problem as:

$p_\theta(z_{t-1}|z_t, y) = \mathcal{N}(z_{t-1}; \mu_\theta(z_t, t, y), \beta_t I)$.   (7)

The reverse process is started by sampling Gaussian noise $z_T \sim \mathcal{N}(0, I)$ and following the Markov chain to iteratively denoise the latent variable $z_t$ via Eq. 7 to obtain the original latent vector $z_0$.

The variational lower bound on the negative log-likelihood is used to optimize the diffusion model. We follow [12] to simplify the training objective to the ensemble of MSE losses as:

$\mathcal{L}(\theta) = \mathbb{E}_{t, x_0, \varepsilon}\left[\lVert \varepsilon - \varepsilon_\theta(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \varepsilon, y, t) \rVert^2\right]$,   (8)

where $t$ is uniformly sampled between 1 and $T$, and $\varepsilon$ is initialized as $\mathcal{N}(0, I)$. The diffusion model is trained by the gradient descent steps on Eq.
8 until converged.4 EXPERIMENT4.1 Data ProcessingWe trained our model using the GENEA Challenge 2023 dataset[ 35],derived from the Talking with hands 16.2M dataset[ 21]. This datasetcomprises a training set containing 371 clips, a validation set with40 clips, and a test set encompassing 70 clips. Each clip consistsof audio recordings, transcriptions, gesture motions for the mainagent, gesture motions for the interlocutor, and associated metadata.The audio data possesses a sampling rate of 44100Hz. The gesturemotions are formatted in BVH (Biovision Hierarchy) format, andtheir frame rate is set at 30 frames per second (FPS).Our system exclusively utilizes audio and text data from the mainagent, disregarding the interlocutor’s information and metadata. Weextract the mel-spectrogram, mel-frequency cepstrum coefficients,and prosody features using n-fft=4096 and a hop length of 33ms.To extract audio features, we employed the Librosa[ 27] packageand the Parselmouth[ 13] library. The network output comprisesjoint angles relative to a T-pose, with these angles parameterizedusing the exponential map[ 9]. Each dimension is normalized tohave a mean of zero and a standard deviation of one across theofficial challenge training set. We selected a total of 26 joints forfull-body expression. Subsequently, we apply a Savitzky-Golayfilter[ 26] with a window length of 9 and a polynomial order of 3 tosmooth the generated gestures. For text segmentation, we employ apre-trained text embedding model[32], featuring 1024 dimensionsper sentence. We opted for sentence embedding due to its capacityto capture semantic information in contrast to word embeddings.Given that the audio, text, and gesture data are temporally aligned,Table 1: Detailed hyperparameters settingHyperparameter Value# of joints (J) 26# of pre-pose frames (M) 8# of frames of the segment (N) 128Denoising diffusion steps 1000Feature dimension (D) 128Condition vector dimension 512# of residual blocks per up/downsampling layer 2# of up/downsampling layers 4# of attention heads 4N-FFT 4096Hop length [ms] 33Text embedding dimension 1024optimizer AdamWlearning rate 1e-4batch size 8the timing of audio features, text embeddings, and pose sequencesare synchronized.5 DISCUSSIONIn this section, we provide some discussions about evaluation re-sults. The submitted co-speech gestures are measured by threeaspects: human likeness, appropriateness for agent speech, andappropriateness for the interlocutor. The natural motion, monadicbaseline, and dyadic baseline are labeled NA, BM, and BD, respec-tively. Our submitted entry name is named SA. Our gesture gener-ation system is tested on a Windows 10 desktop with a 3.20GHzi9-12900K CPU, 128GB RAM, and one RTX 3090 GPU.5.1 Human-likenessThe results of the evaluation are presented in Table 2 and Figure 2.Our submitted system achieves a median human-likeness score of30 and a mean human-likeness score of 32.0. A disparity in humanlikeness is observed between our entry and natural motions. One ofthe significant contributing factors to this phenomenon is the lackof structural information. By not capturing the interdependenciesamong joints, our model generates gestures with a predominantemphasis on arm movements, which tend to exhibit greater motioncompared to head or body joints. Since the movement of the centerof gravity of the agent is ignored by the above reason, the humanlikeness score may decrease. Furthermore, our system omits fingermotions from its generation process. 
Another conceivable concernis the effectiveness of smoothing techniques. Despite the applicationof a smoothing filter, the motions produced by our system some-times appear to lack smoothness. Potential factors contributing tothese results encompass suboptimal optimization of the smoothingfilter and an insufficient number of pre-pose instances.5.2 AppropriatenessIn respect to the appropriateness of speech exhibited by main agent,Table 3 and Figure 4 provide a description indicating that our entryachieves a preference-matching score of 54.8%. The outcomes ofThe KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation ICMI’23, October 09–13, 2023, Paris, FranceTable 2: Summary of the collective perception study with a0.05 confidence interval about human-likeness. Our entry isSA.Condi- Human-likenesstion Median MeanNA 71∈[70,71]68.4±1.0SG 69∈[67,70]65.6±1.4SF 65∈[64,67]63.6±1.3SJ 51∈[50,53]51.8±1.3SL 51∈[50,51]50.6±1.3SE 50∈[49,51]50.9±1.3SH 46∈[44,49]45.1±1.5BD 46∈[43,47]45.3±1.4SD 45∈[43,47]44.7±1.3BM 43∈[42,45]42.9±1.3SI 40∈[39,43]41.4±1.4SK 37∈[35,40]40.2±1.5SA 30∈[29,31]32.0±1.3SB 24∈[23,27]27.4±1.3SC 9∈[9,9]11.6±0.9Human-likeness ratingNA SG SF SJ SL SE SH BD SD BM SI SK SA SB SC020406080100Figure 2: Box plot visualizing the rating distribution in thehuman-likeness study. Red bars are the median ratings (eachwith a 0.05 confidence interval); yellow diamonds are themean ratings (also with a 0.05 confidence interval). Box edgesare at 25 and 75 percentiles, while whiskers cover 95% of allratings for each condition.the assessment, which focuses on the appropriateness of interlocu-tor speech, are displayed in Table 4 and Figure 6. Our developedsystem attains a preference-matching score of 53.5%. We present aconcise overview of several configurations within our experimen-tal framework, which we posit may contribute to enhancing theappropriateness of gestures about speech. One potential rationalewe identify pertains to semantic conditioning. Our system employs...over condition x, in terms of human-likenessSignificant preference for condition y...NA SG SF SJ SL SE SH BD SD BM SI SK SA SB SCNASGSFSJSLSESHBDSDBMSISKSASBSCFigure 3: Significance of pairwise differences between con-ditions. White means that the condition listed on the y-axisrated significantly above the condition on the x-axis, blackmeans the opposite ( yrated below x), and grey means nostatistically significant difference at the level α=0.05afterHolm-Bonferroni correction.a pre-trained sentence embedding model without fine-tuning. How-ever, numerous textual segments in the data fail to adhere to propersentence structure. Consequently, the embedding might inaccu-rately convey the semantics of these text segments. To mitigate thisconcern, we will change the sentence embedding model to a wordembedding model, or utilization of extended segments.Furthermore, timing synchronization is a consideration. Giventhat our system incorporates speech features such as mel-spectrogram,MFCC, and prosody to extract temporal information from audio,the model learns to effectively synchronize audio with gestures.Additionally, the pre-pose condition aids in capturing the initia-tion timing. Consequently, the proposed model demonstrates thecapability to regulate the timing of speech onset and pauses.Moreover, we address the issue of gesture smoothness. 
The gener-ated gesture results from our system sometimes exhibit irregularity.We hypothesize that the phenomenon may be attributed to thearchitecture of the pose autoencoder, the pre-poses, and the extentof the smoothing filter employed. A more intricate exploration ofthese factors will be conducted in the ablation study section.We propose potential methods for enhancing the performanceof our system concerning both the main agent and interlocutorspeech appropriateness. Initially, the model could incorporate inter-locutor gestures, audio, and text as conditioning factors. Secondly,incorporating a more extensive history of features from both themain agent and interlocutor into the conditioning process mightyield improved gesture generation. Thirdly, the meticulous designof the text embedding model and gesture autoencoder could en-hance semantic conditioning and the inherent naturalness of thegenerated gestures, respectively. These specific aspects will be thefocal points of our future works.ICMI’23, October 09–13, 2023, Paris, France Kim et al.Table 3: Summary statistics of user-study responses fromappropriateness for main agent speech, with confidence in-tervals for the mean appropriateness score(MAS) at the levelα=0.05."Pref. matched" identified how often test-takers pre-ferred matched motion in terms of appropriateness, ignoringties.Condi-MASPref. Raw response counttion matched 2 1 0−1−2 SumNA 0.81±0.06 73.6% 755 452 185 217 157 1766SG 0.39±0.07 61.8% 531 486 201 330 259 1807SJ 0.27±0.06 58.4% 338 521 391 401 155 1806BM 0.20±0.05 56.6% 269 559 390 451 139 1808SF 0.20±0.06 55.8% 397 483 261 421 249 1811SK 0.18±0.06 55.6% 370 491 283 406 252 1802SI 0.16±0.06 55.5% 283 547 342 428 202 1802SE 0.16±0.05 54.9% 221 525 489 453 117 1805BD 0.14±0.06 54.8% 310 505 357 422 220 1814SD 0.14±0.06 55.0% 252 561 350 459 175 1797SB 0.13±0.06 55.0% 320 508 339 386 262 1815SA 0.11±0.06 53.6% 238 495 438 444 162 1777SH 0.09±0.07 52.9% 384 438 258 393 325 1798SL 0.05±0.05 51.7% 200 522 432 491 170 1815SC−0.02±0.04 49.1% 72 284 1057 314 76 1803NA SG SJBM SFSK SISEBD SD SBSASH SLSC0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatchedFigure 4: Bar plots visualizing the response distribution in theappropriateness for main agent speech. The blue bar(bottom)represents responses where subjects preferred the matchedmotion, the light grey bar(middle) represents tied responses,and the red bar(top) represents responses preferring mis-matched motion, with the height of each bar being propor-tional to the fraction of each category. Lighter colors corre-spond to slight preference, and darker colors to clear prefer-ence. On top of each bar is also a confidence interval for themean appropriateness score, scaled to fit the current axes.The dotted black line indicates chance-level performance.5.3 ablation studyWe conduct an ablation study to ensure that autoregression is help-ful to co-speech gesture synthesis. We calculate Frechet GestureDistance(FGD), between ground truth and generated motions in theNA SG SJBM SFSK SISEBD SDSBSASH SLSC...over condition x, in terms of appropriateness to speechNASGSJBMSFSKSISEBDSDSBSASHSLSCSignificant preference for condition y...Figure 5: Significant differences between conditions in theappropriateness for main agent speech. 
White means thecondition listed on the y-axis achieved a MAS significantlyabove the condition on the x-axis, black means the opposite(yscored below x), and grey means no statistically significantdifference at level α=0.05after correction for the false dis-covery rate.Table 4: Summary statistics of user-study responses fromappropriateness for interlocutor speech, with confidence in-tervals for the mean appropriateness score(MAS) at the levelα=0.05."Pref. matched" identified how often test-takers pre-ferred matched motion in terms of appropriateness, ignoringties.Condi-MASPref. Raw response counttion matched 2 1 0−1−2 SumNA 0.63±0.08 67.9% 367 272 98 189 88 1014SA 0.09±0.06 53.5% 77 243 444 194 55 1013BD 0.07±0.06 53.0% 74 274 374 229 59 1010SB 0.07±0.08 51.8% 156 262 206 263 119 1006SL 0.07±0.06 53.4% 52 267 439 204 47 1009SE 0.05±0.07 51.8% 89 305 263 284 73 1014SF 0.04±0.06 50.9% 94 208 419 208 76 1005SI 0.04±0.08 50.9% 147 269 193 269 129 1007SD 0.02±0.07 52.2% 85 307 278 241 106 1017BM−0.01±0.06 49.9% 55 212 470 206 63 1006SJ−0.03±0.05 49.1% 31 157 617 168 39 1012SC−0.03±0.05 49.1% 34 183 541 190 45 993SK−0.06±0.09 47.4% 200 227 111 276 205 1019SG−0.09±0.08 46.7% 140 252 163 293 167 1015SH−0.21±0.07 44.0% 55 237 308 270 144 1014validation set, which are shown in Table 5. As a result, the FGD ofdiscriminator features and raw gestures are improved when usingthe pre-pose condition.The KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation ICMI’23, October 09–13, 2023, Paris, FranceNA SABD SB SLSESF SISDBM SJSCSKSGSH0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatchedFigure 6: Bar plots visualizing the response distributionin the appropriateness for interlocutor speech. The bluebar(bottom) represents responses where subjects preferredthe matched motion, the light grey bar(middle) representstied responses, and the red bar(top) represents responsespreferring mismatched motion, with the height of each barbeing proportional to the fraction of each category. Lightercolors correspond to slight preference, and darker colors toclear preference. On top of each bar is also a confidence in-terval for the mean appropriateness score, scaled to fit thecurrent axes. The dotted black line indicates chance-levelperformance.NA SABD SBSLSESF SISDBM SJSCSKSGSH...over condition x, in terms of appropriateness to interlocutorNASABDSBSLSESFSISDBMSJSCSKSGSHSignificant preference for condition y...Figure 7: Significant differences between conditions in theappropriateness for interlocutor speech. White means thecondition listed on the y-axis achieved a MAS significantlyabove the condition on the x-axis, black means the opposite(yscored below x), and grey means no statistically significantdifference at level α=0.05after correction for the false dis-covery rate.6 CONCLUSIONIn this paper, we introduce an innovative diffusion-based co-speechgesture generation framework that has been submitted to the GE-NEA Challenge 2023. Our approach aims to produce co-speechTable 5: Effects of autoregression.Model FGD(feature) FGD (raw)w/o. pre-pose 154.984 4977.059w. pre-pose 77.909 2279.612gestures of high fidelity, achieved by proposing a gesture autoen-coder for effective domain transfer between the gesture space andlatent feature space. Furthermore, we leverage denoising diffusionprobabilistic models to address the challenge of co-speech ges-ture generation. 
While the comprehensive results indicate that ourmethod achieves a preference-matching score of 54.8% and 53.5%for appropriateness of main agent speech and interlocutor speech,respectively.Moreover, we conduct an in-depth ablation stud to affirm theutility of autoregressive methods in co-speech gesture synthesis.Our conclusion highlights the strengths of our system in timing syn-chronization and the generation of contextually fitting gestures forinteractive scenarios. Additionally, we propose several forthcomingchallenges for research, such as refining the structures of semanticembeddings and gesture embedding models. Our hope is that ourapproach contributes not only to the advancement of diffusion-based gesture generation research but also finds application acrossvarious gesture generation domains.ACKNOWLEDGMENTSThis work was supported by the "Development of cognitive/responseadvancement technology for AI avatar commercialization" projectfunded by the Brand Engagement Network(BEN)[Q2312881].REFERENCES[1]Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow.2020. Style-controllable speech-driven gesture synthesis using normalising flows.InComputer Graphics Forum , Vol. 39. Wiley Online Library, 487–496.[2]Simon Alexanderson, Rajmund Nagy, Jonas Beskow, and Gustav Eje Henter. 2022.Listen, denoise, action! audio-driven motion synthesis with diffusion models.arXiv preprint arXiv:2211.09707 (2022).[3]Tenglong Ao, Zeyi Zhang, and Libin Liu. 2023. GestureDiffuCLIP: Gesture diffu-sion model with CLIP latents. arXiv preprint arXiv:2303.14613 (2023).[4]Che-Jui Chang, Sen Zhang, and Mubbasir Kapadia. 2022. The IVI Lab entry tothe GENEA Challenge 2022–A Tacotron2 based method for co-speech gesturegeneration with locality-constraint attention mechanism. In Proceedings of the2022 International Conference on Multimodal Interaction . 784–789.[5]Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, and GangYu. 2023. Executing your Commands via Motion Diffusion in Latent Space. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition .18000–18010.[6]Kyunghyun Cho, Bart van Merriënboer, Çağlar Gulçehre, Dzmitry Bahdanau,Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning PhraseRepresentations using RNN Encoder–Decoder for Statistical Machine Translation.InProceedings of the 2014 Conference on Empirical Methods in Natural LanguageProcessing (EMNLP) . 1724–1734.[7]Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans onimage synthesis. Advances in neural information processing systems 34 (2021),8780–8794.[8]Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley,Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. Generative adversarialnetworks. Commun. ACM 63, 11 (2020), 139–144.[9]F Sebastian Grassia. 1998. Practical parameterization of rotations using theexponential map. Journal of graphics tools 3, 3 (1998), 29–48.[10] Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu,Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, et al .2020. Conformer:Convolution-augmented transformer for speech recognition. arXiv preprintarXiv:2005.08100 (2020).ICMI’23, October 09–13, 2023, Paris, France Kim et al.[11] Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus).arXiv preprint arXiv:1606.08415 (2016).[12] Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilisticmodels. 
Advances in neural information processing systems 33 (2020), 6840–6851.[13] Yannick Jadoul, Bill Thompson, and Bart De Boer. 2018. Introducing parselmouth:A python interface to praat. Journal of Phonetics 71 (2018), 1–15.[14] Yifan Jiang, Han Chen, and Hanseok Ko. 2023. Spatial-temporal Transformer-guided Diffusion based Data Augmentation for Efficient Skeleton-based ActionRecognition. arXiv preprint arXiv:2302.13434 (2023).[15] Gwantae Kim, Seonghyeok Noh, Insung Ham, and Hanseok Ko. 2023. MPE4G:Multimodal Pretrained Encoder for Co-Speech Gesture Generation. In ICASSP2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing(ICASSP) . IEEE, 1–5.[16] Gwantae Kim, Youngsuk Ryu, Junyeop Lee, David K Han, Jeongmin Bae, andHanseok Ko. 2022. 3d human motion generation from the text via gesture actionclassification and the autoregressive model. In 2022 IEEE International Conferenceon Image Processing (ICIP) . IEEE, 1036–1040.[17] Jihoon Kim, Jiseob Kim, and Sungjoon Choi. 2023. Flame: Free-form language-based motion synthesis & editing. In Proceedings of the AAAI Conference onArtificial Intelligence , Vol. 37. 8255–8263.[18] Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, and HedvigKjellström. 2019. Analyzing input and output representations for speech-drivengesture generation. In Proceedings of the 19th ACM International Conference onIntelligent Virtual Agents . 97–104.[19] Taras Kucherenko, Patrik Jonell, Sanne Van Waveren, Gustav Eje Henter, SimonAlexandersson, Iolanda Leite, and Hedvig Kjellström. 2020. Gesticulator: A frame-work for semantically-aware speech-driven gesture generation. In Proceedings ofthe 2020 international conference on multimodal interaction . 242–250.[20] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. InProceedings of the ACM International Conference on Multimodal Interaction (ICMI’23). ACM.[21] Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha S Srinivasa,and Yaser Sheikh. 2019. Talking with hands 16.2 m: A large-scale dataset of syn-chronized body-finger motion and audio for conversational motion analysis andsynthesis. In Proceedings of the IEEE/CVF International Conference on ComputerVision . 763–772.[22] Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, and LinchaoBao. 2021. Audio2gestures: Generating diverse gestures from speech audio withconditional variational autoencoders. In Proceedings of the IEEE/CVF InternationalConference on Computer Vision . 11293–11302.[23] Yuanzhi Liang, Qianyu Feng, Linchao Zhu, Li Hu, Pan Pan, and Yi Yang. 2022.Seeg: Semantic energized co-speech gesture generation. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition . 10473–10482.[24] Songxiang Liu, Yuewen Cao, Dan Su, and Helen Meng. 2021. Diffsvc: A diffusionprobabilistic model for singing voice conversion. In 2021 IEEE Automatic SpeechRecognition and Understanding Workshop (ASRU) . IEEE, 741–748.[25] Xian Liu, Qianyi Wu, Hang Zhou, Yinghao Xu, Rui Qian, Xinyi Lin, Xiaowei Zhou,Wayne Wu, Bo Dai, and Bolei Zhou. 2022. Learning hierarchical cross-modalassociation for co-speech gesture generation. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition . 10462–10472.[26] Jianwen Luo, Kui Ying, and Jing Bai. 2005. 
Savitzky–Golay smoothing anddifferentiation filter for even number data. Signal processing 85, 7 (2005), 1429–1434.[27] Brian McFee, Colin Raffel, Dawen Liang, Daniel P Ellis, Matt McVicar, EricBattenberg, and Oriol Nieto. 2015. librosa: Audio and music signal analysis inpython. In Proceedings of the 14th python in science conference , Vol. 8. 18–25.[28] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and BjörnOmmer. 2022. High-resolution image synthesis with latent diffusion models. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition .10684–10695.[29] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and RuslanSalakhutdinov. 2014. Dropout: a simple way to prevent neural networks fromoverfitting. The journal of machine learning research 15, 1 (2014), 1929–1958.[30] Guy Tevet, Sigal Raab, Brian Gordon, Yoni Shafir, Daniel Cohen-or, andAmit Haim Bermano. 2023. Human Motion Diffusion Model. In The EleventhInternational Conference on Learning Representations . https://openreview.net/forum?id=SJ1kSyO2jwu[31] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is allyou need. Advances in neural information processing systems 30 (2017).[32] Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang,Rangan Majumder, and Furu Wei. 2022. Text embeddings by weakly-supervisedcontrastive pre-training. arXiv preprint arXiv:2212.03533 (2022).[33] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, Jaehong Kim,and Geehyuk Lee. 2020. Speech gesture generation from the trimodal contextof text, audio, and speaker identity. ACM Transactions on Graphics (TOG) 39, 6(2020), 1–16.[34] Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and GeehyukLee. 2019. Robots learn social skills: End-to-end learning of co-speech gesturegeneration for humanoid robots. In Proceedings of 2019 International Conferenceon Robotics and Automation . IEEE, 4303–4309.[35] Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2022. The GENEA Challenge 2022: Alarge evaluation of data-driven co-speech gesture generation. In Proceedings ofthe 2022 International Conference on Multimodal Interaction . 736–747.[36] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo,Lei Yang, and Ziwei Liu. 2022. MotionDiffuse: Text-Driven Human MotionGeneration with Diffusion Model. arXiv preprint arXiv:2208.15001 (2022).[37] Chi Zhou, Tengyue Bian, and Kang Chen. 2022. Gesturemaster: Graph-basedspeech-driven gesture generation. In Proceedings of the 2022 International Confer-ence on Multimodal Interaction . 764–770.[38] Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, and Lequan Yu. 2023.Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition .10544–10553. |
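As a supplement to Section 3.4 above, the following sketch shows one epsilon-prediction training step of the simplified objective in Eq. 8 for a conditional latent diffusion model. The toy denoiser, the feature dimensions, the beta-schedule values, and the way the conditions and timestep are injected are placeholders rather than the authors' implementation; only the number of diffusion steps (T = 1000) is taken from Table 1.

```python
# Minimal sketch of one epsilon-prediction training step in the style of Eq. 8.
import torch
import torch.nn as nn

T = 1000                                               # diffusion steps (Table 1)
betas = torch.linspace(1e-4, 0.02, T)                  # assumed linear schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)         # \bar{alpha}_t

denoiser = nn.Sequential(nn.Linear(128 + 512 + 1, 256), nn.GELU(), nn.Linear(256, 128))
optimizer = torch.optim.AdamW(denoiser.parameters(), lr=1e-4)

def training_step(z0, cond):
    """z0: clean latent (batch, 128); cond: summed condition embedding (batch, 512)."""
    b = z0.shape[0]
    t = torch.randint(0, T, (b,))                      # uniformly sampled timestep
    eps = torch.randn_like(z0)                         # target noise
    a_bar = alphas_bar[t].unsqueeze(-1)
    zt = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps   # forward diffusion q(z_t | z_0)
    t_feat = (t.float() / T).unsqueeze(-1)             # crude stand-in for a time embedding
    eps_hat = denoiser(torch.cat([zt, cond, t_feat], dim=-1))
    loss = ((eps - eps_hat) ** 2).mean()               # simplified DDPM objective (Eq. 8)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

loss = training_step(torch.randn(8, 128), torch.randn(8, 512))
```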
pVBKLqpAUtP | The FineMotion entry to the GENEA Challenge 2023: DeepPhasefor conversational gestures generationVladislav Korzunkorzun@phystech.eduMoscow Institute of Physics andTechnologyMoscow, RussiaTinkoffMoscow, RussiaAnna Beloborodovabeloborodova.as@phystech.eduMoscow Institute of Physics andTechnologyMoscow, RussiaTinkoffMoscow, RussiaArkady Ilinarkady.ilin@skoltech.ruSkolkovo Institute of Science andTechnologyMoscow, RussiaTinkoffMoscow, RussiaABSTRACTThis paper describes FineMotion’s entry to the GENEA Challenge2023. We explore the potential of DeepPhase embeddings by adapt-ing neural motion controllers to conversational gesture generation.This is achieved by introducing a recurrent encoder for control fea-tures. We additionally use VQ-VAE codebook encoding of gesturesto support dyadic setup. The resulting system generates stable real-istic motion controllable by audio, text and interlocutor’s motion.CCS CONCEPTS•Computer systems organization →Embedded systems ;Re-dundancy ; Robotics; •Networks→Network reliability.KEYWORDSembodied agents, neural networks, gesture generation, social ro-botics, deep learning, phase manifoldACM Reference Format:Vladislav Korzun, Anna Beloborodova, and Arkady Ilin. 2023. The FineMo-tion entry to the GENEA Challenge 2023: DeepPhase for conversationalgestures generation. In INTERNATIONAL CONFERENCE ON MULTIMODALINTERACTION (ICMI ’23), October 9–13, 2023, Paris, France. ACM, New York,NY, USA, 6 pages. https://doi.org/10.1145/3577190.36161191 INTRODUCTIONThe automatic generation of conversational gestures for 3D humanmodels is one of the most opportune problems in character anima-tion. It can be used to simplify video game production and increasethe realism of characters’ movements. Furthermore, as visual as-sistants or VTubers are becoming more popular, the demand forrealistic gestures for embodied virtual agents is also growing.The task of automatic gesture generation from speech has gotseveral promising solutions. During GENEA Challenge 2022 [ 25]one of the approaches was rated even better than real motion cap-ture data by motion quality [ 27]. However, the task at hand isbecoming more complicated year by year.Permission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than theauthor(s) must be honored. Abstracting with credit is permitted. To copy otherwise, orrepublish, to post on servers or to redistribute to lists, requires prior specific permissionand/or a fee. Request permissions from permissions@acm.org.ICMI ’23, October 9–13, 2023, Paris, France©2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.ACM ISBN 979-8-4007-0055-2/23/10. . . $15.00https://doi.org/10.1145/3577190.3616119The current GENEA Challenge 2023 [ 15] considers a dialoguesetup. Thus, the participants’ systems should not only considerinput speech but also the conversation partner’s behaviour. 
As wellas in the previous year «Talking With Hands 16.2M» dataset [ 16]was used, but now each sample contains two sets of motion, audioand text for the main agent and the interlocutor.In relative tasks of condition-based motion generation [ 23] andcharacter controllers [ 26] researchers propose slightly different ap-proaches, that could also benefit conversational gestures generation.One of the most promising approaches for animation representationwas presented in [ 19]. Taking into account that motion curves couldbe considered as periodic functions, they could be decomposed viaFourier Transform to obtain high-level features.Thus, we decided to examine the phase manifold formed by Deep-Phase’s Periodic AutoEncoder in conversational gesture generation.In order to properly address the dyadic setup of the challenge, weimplemented additional interlocutor gesture representation basedon VQ-VAE codebook encoding. Evaluation [ 15] showed that oursystem generates realistic motion which is statistically suitable forthe interlocutor’s behaviour. However, our system showed poorresults on appropriateness for speech, which suggests the needfor further development. Our code along with video examples ofgenerated motion is publicly available1to help other researchersreproduce our results.Our paper is organized as follows: Section 2 gives an appropriateoverview of related work; Section 3 describes our approach gen-erally; Section 4 details generator model input and output format;Section 5 gives results from the evaluation and discusses our results;and Section 6 is for the conclusion.2 RELATED WORKIn this section, we give a general overview of recent conversationalgesture generation approaches. Then we describe some existingapproaches for solving close tasks, that inspired our solution.2.1 Conversational gestures generationThe task of conversational gestures generation has been advanc-ing for several years. Starting from window-based frame-by-framegeneration [ 13] end-to-end approaches lead to auto-regression [ 14].Later, the GENEA Challenge 2022 offered many successful systems.Some of them are based on recurrent models [ 4,6,24], and some1https://github.com/FineMotion/GENEA_2023ICMI ’23, October 9–13, 2023, Paris, France Korzun et al.even utilise GPT-like large architectures [ 18], but the most success-ful hybrid approach was presented in [ 27], where authors use thegraph-based model to transfer between short clips.Slightly weaker results were shown by clear auto-regressiveapproaches [ 11,12], that faced the main shortcoming of such ar-chitectures - converging to mean pose. In [ 12] as well as in [ 14]authors tried to overcome this problem by adding different teacher-forcing techniques to force models first to extract appropriate audiorepresentation. However, auto-regressive approaches have shownsignificant success without such techniques in a different task: char-acter controllers.2.2 Character controllersThe task of creating automatic character controllers is related tolocomotion movements [ 8]. The controlled character should movejoints with respect to the environment and user input. Many data-driven character controller approaches use a mixture-of-experts[10] framework, for example, Mode Adaptive Neural Networks(MANN) [26].Later, the MANN model was improved with local phases [ 20]. 
Lo-cal phases are computed as a derivative from block function contain-ing binary states of whether bone contacts the object/environment.The efficiency of the proposed approach was demonstrated in cre-ating a neural motion controller for a basketball game, where theblock function represented a player’s contact with the ball or thefloor.Finally, in [ 19] the unsupervised approach for automatic phaseextraction was suggested. The proposed Periodic AutoEncoderextracts periodic features from motion curves after training onunstructured motion datasets. The architecture utilizes a tempo-ral convolutional autoencoder [ 9] additionally applying real FastFourier Transform to each channel of latent space. The obtainedperiodic features then were used to train the motion controller asbefore showing the capability of extracted features.2.3 Text-to-Gesture Animation GenerationThe task of generating human gesture animations from textualprompts involves generating expressive and natural-looking ges-tures that correspond to a given textual input. For example, in thework of [ 7] the authors suggest jointly encoding gestures, text andimages into a single latent space using Contrastive Language-ImagePretraining (CLIP) [ 2]. Also, in GestureDiffuCLIP [ 21] the authorscombined the power of CLIP and diffusion models to generaterealistic and diverse gesture animations from text. To enable theencoding and decoding of gestures, the Vector Quantized Varia-tional Autoencoder (VQ-VAE) [ 1] was used. Additionally, VQ-VAEhas proven to be a valuable tool beyond text-to-gesture generation.In the context of conversational gestures, recent research [ 18] and[22] applied the VQ-VAE to encode and decode gestures, achievingimproved gesture generation performance.3 SYSTEM OVERVIEWOur approach follows the original DeepPhase paper [ 19]. It containstwo main stages: training Periodic AutoEncoder to extract phase fea-tures and building neural motion controller upon extracted phases.The motion controller is based on a mixture-of-experts frameworkalso mentioned in the DeepPhase paper with some ideas from pre-vious author’s work [ 20]. The main difference between our systemand those mentioned above is that we use an auxiliary recurrentControl Variables Encoder to guide motion by audio, text and inter-locutor’s motion instead of the user’s input. Apart from that, wetrained an additional encoder for the interlocutor’s motion and sup-plemented control features with the obtained latent representation.3.1 DeepPhase embeddingsTo prepare the phase manifold we follow the proposed pipelinefrom [ 19] exactly. To train Periodic AutoEncoder (PAE) we firstextract positions from the main agent’s motion data. We use allmotion files, but extract positions for 26 joints, including world rootand excluding fingers. Then we calculate joint position velocitiesand smooth them via Butterworth Filter [3].The training configuration of PAE is as follows: training samplecontains 61 frames and covers a 2-second window with 26*3 chan-nels. The number of latent channels (phases) is equal to 8, followingthe dancing pipeline from the official repository2. The number ofintermediate channels is equal to the number of joints. The model istrained during 150 epochs with batch size equal to 512 and AdamWoptimizer with Cyclic Learning Rate Scheduler with Restarts [ 17]with weight decay and learning rate both equal to 10e-4, restartperiod equal to 10, multiplier equal to 2 and cosine policy.The obtained model extracts phase features as in the originalpaper. 
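For concreteness, the sketch below illustrates the input preparation described above: finite-difference joint-position velocities, low-pass smoothed with a Butterworth filter and cut into 61-frame (2-second) windows of 26x3 channels. The filter order, cutoff frequency, window stride, and the random placeholder motion are assumptions, not the exact settings used.

```python
# Illustrative sketch of the PAE input preparation: smoothed joint velocities
# arranged into 61-frame training windows. Filter settings are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FPS = 30
positions = np.random.randn(900, 26, 3)             # (frames, joints, xyz) placeholder motion

velocities = np.diff(positions, axis=0) * FPS        # finite-difference joint velocities
b, a = butter(N=3, Wn=4.5, btype="low", fs=FPS)      # assumed 3rd-order, ~4.5 Hz cutoff
smoothed = filtfilt(b, a, velocities, axis=0)        # zero-phase filtering along time

# Flatten to the (frames, 26*3) channel layout expected by the Periodic AutoEncoder,
# then cut 2-second (61-frame) windows; the stride of 10 frames is arbitrary here.
channels = smoothed.reshape(len(smoothed), -1)
windows = np.stack([channels[i:i + 61] for i in range(0, len(channels) - 61, 10)])
```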
From each time window tit extracts amplitude ( A), frequency(F), offset (B) and phase shift ( S).A,F,B,S∈RM, whereM- numberof latent channels (or phases). Phase manifold P∈R2Mfor frametis computed byP(t)2i−1=A(t)i·sin(2π·S(t)i),P(t)2i=A(t)i·cos(2π·S(t)i).(1)To obtain phase features P∈RT×2Mfrom motion with length Twe just extract the phase manifold from the sliding window, i.e.P={P(t)|t∈ [1,T]}. In order to illustrate the periodicity ofextracted phase features the Figure 1 shows them separated bylatent channel on a 10-second sample.Figure 1: Extracted phase features exampleDue to the fact that PAE is trained on joint velocities, obtainedphases can not be used as intermediate representations of motionsinstead of original data to train motion generator. The problem liesin the difficulty of converting joint positions into joint rotationswithout the introduction of kinematic constraints. To overcomethis we also tried to train PAE on joints rotations. Unfortunately,obtained phase manifold does not look like periodic function as2https://github.com/sebastianstarke/AI4AnimationThe FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation ICMI ’23, October 9–13, 2023, Paris, Francebefore. PAE trained on angle velocities could theoretically showsbetter results, but we decided to stop on phase manifold trained onjoint velocities.3.2 Generation modelOur motion generation model extends the mixture-of-experts frame-work from [ 19]. It contains two feedforward neural networks: Gat-ing Network and Motion Prediction Network. The model’s notationfollows [20].The Gating Network is built upon a stack of linear layers withELU[ 5] activations between them. It takes phase features and pre-dicts weights for experts. In our case, there are 8 experts. Then,the Motion Prediction Network uses these weights to make linearcombinations over experts. The Motion Prediction Network itselfconsists of several "Expert Layers" with ELU activations betweenthem. Each of layer Euses experts weights α={αi,i∈[1,N]}andinputxas follows:E(x,α)=N∑︁1αi(Wix+bi) (2)whereWi∈Rh×mandbi∈Rhare weights and biases respec-tively withmandhbeing input and output dimensions respectively.As in the original DeepPhase repository, the number of "Expertlayers" as well as the number of linear layers on the Gating Networkis equal to 3.3.3 Control Variables EncoderInitially, the input and output data formats were similar to [ 20].However, significant changes were introduced. As control variablesinput, we use a similar time window of audio features. But the morecontrol features like text and interlocutor’s pose we added, thelarger the control variables vector would become. So we decided toadd an additional recurrent encoder of control features based onBi-directional GRU over the FeedForward Highway as in [ 12] toshorten this vector. It takes time-window features around the cur-rent frame and returns the output vector from RNN correspondingto the considered frame.3.4 Interlocutor Gesture EncoderModel. To effectively respond to the gestures of the interlocutor,our model leverages the Interlocutor Gesture Encoder, a crucialcomponent based on the VQ-VAE framework from [ 1]. 
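To make the codebook encoding concrete, a minimal sketch of the vector-quantisation lookup at the core of a VQ-VAE codebook is given below; the tensor shapes, variable names, and the fixed codebook are illustrative assumptions rather than the exact implementation used in our encoder.

import torch

def quantize(latents, codebook):
    # latents:  (batch, dim)      continuous encoder outputs for one gesture segment
    # codebook: (num_codes, dim)  learned embedding vectors
    distances = torch.cdist(latents, codebook)      # (batch, num_codes) pairwise distances
    indices = distances.argmin(dim=-1)              # nearest codebook entry per latent
    quantized = codebook[indices]                   # (batch, dim) quantised vectors
    # Straight-through estimator: gradients flow to the encoder as if the
    # quantisation step were the identity function.
    quantized = latents + (quantized - latents).detach()
    return quantized, indices

# Example: a codebook of 2048 vectors of dimension 256, as in our training setup.
codebook = torch.randn(2048, 256)
latents = torch.randn(4, 256)
q, idx = quantize(latents, codebook)

In the full VQ-VAE the codebook is learned jointly with the encoder and decoder rather than fixed as in this toy example.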
This modelshowed good results in gesture coding, as shown in [ 18] and [ 22].The Interlocutor Gesture Encoder enables us to encode high-qualityrepresentations of gestures into compact vectors.For better learning, we have added improvements such as ex-ponential codebook smoothing and discarding unused vectors, assuggested in the original article.Data processing. To train the VQ-VAE model, we segment ges-tures into gaps according to the bits in the audio. This idea wasproposed in the [ 22]. The authors proposed dividing gestures intosegments that align with the rhythmic structure of the audio, asit is believed to capture the salient aspects of the gestures. Themaximum number of frames in one gesture’s sample with this ap-proach is equal to 18. This approach has shown promising resultsin capturing the temporal dynamics and synchronizing gestureswith the corresponding audio cues. Building upon this concept, weadopt a similar data processing strategy in our study to leveragethe benefits of aligning gestures with the rhythmic elements of theaudio. During training, the network is fed with only those gesturesamples from both partners in which at least one conversationalpartner was speaking. Each selected sample corresponds to thespeaker’s audio bits. During inference, we feed only interlocutor’sgestures corresponding to the active speaking person’s audio bits.In order to determine the moments of speech, we use a text tran-script. If there is no active speaker at the moment, main agent’saudio bits are chosen for guidance.Training. We train the VQ-VAE model with codebook size 2048.The dimensional of codebook vectors was 256. Codebook occupancyreaches 70%. The model was trained over 152 epochs.Inference. To feed the interlocutor’s gestures into the main model,we split the interlocutor’s audio into bits, then we extract vectorsfor each sample. After that, we duplicate each vector to the size ofa bit. Thus, we get the number of vectors equal to the number offrames in the original gesture.4 GENERATOR INPUTS AND OUTPUTSFigure 2: Generator modelThe overall system is illustrated in Figure 2. The model takes theinformation from the current frame and predicts the next frame.We use a notation of a time series window similar to [ 20], i.e.Tt1t0represents features collected within a time window t0≤t≤t1.Following is the description of the final data formats.Inputs. Generator’s input consists of 3 components XSi,XAi,XPi.Character state XSioni-th frame consists of concatenated jointsrotations and velocities. We also initially used joint positions, butwe observed that the model is more stable without them. We repre-sent joint rotations via 6D continuous representation from [ 28] toeliminate cases when Euler’s angles have values equal to 0 or 180degrees. Joint velocities were preliminary smoothed as in the PAEtraining routine. It’s also worth mentioning that character stateand phases were preliminary normalized.Control variables XAiare time-windowT1s−1sfeatures aroundthe current frame, which is passed to Control Variables Encoderto obtain one control vector XCi, which will be concatenated withcharacter state as the main input to Motion Prediction Network.As initial control features, we extract 26 MFCCs from audio, GloVeembedding of size 50 and obtained codebook encoding from VQ-VAE with respect to motion frame rate which is equal to 30 FPS.ICMI ’23, October 9–13, 2023, Paris, France Korzun et al.To align text and interlocutor’s features we distribute them evenlywithin frames corresponding to time span. 
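As an illustration of this alignment step, the sketch below distributes one feature vector per word (or per interlocutor code) evenly over the motion frames covered by its time span at 30 FPS; the function and variable names are hypothetical and only mirror the procedure described above.

import numpy as np

def distribute_features(spans, vectors, num_frames, fps=30):
    # spans:   list of (start_sec, end_sec) time spans, one per word / code
    # vectors: array of shape (len(spans), feat_dim)
    # Returns an array of shape (num_frames, feat_dim) where each vector is
    # repeated over the frames that its time span covers.
    feat_dim = vectors.shape[1]
    frames = np.zeros((num_frames, feat_dim), dtype=np.float32)
    for (start, end), vec in zip(spans, vectors):
        lo = int(round(start * fps))
        hi = min(int(round(end * fps)), num_frames)
        frames[lo:hi] = vec
    return frames

# Example: three 50-dimensional word embeddings spread over 3 seconds of motion.
spans = [(0.0, 0.8), (0.8, 1.9), (1.9, 3.0)]
vectors = np.random.randn(3, 50).astype(np.float32)
aligned = distribute_features(spans, vectors, num_frames=90)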
We also tried othercombinations, including interlocutor’s speech, but they showedless stable results. We decided to make the dimension of XCiequaltoXSi.Motion Phases XPi=Θi∈R2KTare extracted phase featuresvia PAE uniformly sampled from time-window T1s−1sand concate-nated into one vector, i.e. Θi={P(i−30),...,P(i−5),P(i),P(i+5),...,P(i+30)}considering that 13 frames are sampled in the win-dow.Outputs. Our Motion Prediction Network output contains only 2components: the next frame character state YSi+1, which is similarto input one, and future motion phases YPi+1={Θi+1,ΔΘi+1}con-taining not only phases, but phases’ velocity for time-window T1s0swith respect to frame i+1, i.e =Θi+1={P(i+1),P(i+6),...,P(i+31)}with 7 frames total.Training. The model is trained to predict the next frame basedon the current frame, it does not use outputs from the previous step- every frame is taken from the dataset directly and is processedindependently. All parts of the generator are trained simultaneouslyend-to-end during 50 epochs with batch size equal to 2048 and adefault Adam optimizer with a learning rate equal to 10e-4. Thehidden sizes of the Gating Network and the Motion PredictionNetwork are 64 and 1024 respectively.Inference. Finally, during inference, our model predicts the nextframe based on the previous one and follows an auto-regressivefashion. We also blend phases between iterations, before passingthem to the next step: Θ′i+1=λΘi+1+(1−λ)(Θi+ΔΘi+1)withλ=0.5.5 RESULTS AND DISCUSSIONAs in previous challenges, organizers provided a comprehensivehuman evaluation of participating systems[ 15]. This time 3 mainsubjective measures are considered: human likeness, appropriate-ness to speech and appropriateness to the interlocutor’s behaviour.Human-likeness estimates the overall quality of generated mo-tion without taking into account the agent’s speech or interlocutor’sbehaviour. Our approach, indexed SL, shows competitive results(median score is 51∈[50,51]in Table 1) indicating the ability ofDeepPhase embeddings to maintain periodicity and as a result therealism of predicted motion. Although our model is rated ratherwell, it does not reach the quality of natural motions or state-of-the-art approaches.In order to estimate the appropriateness of agent speech, evalu-ation participants were given two motion clips generated by onemodel using separate audio samples and tasked to distinguish whichof the two motion clips corresponds to the target listening sample.Good models generate motions that participants could easily deter-mine from one another by audio. The main quantity of interest inthe appropriateness evaluation is the mean appropriateness score(MAS). Unfortunately, our model provides poor appropriatenessresults ( 0.05±0.05MAS in Table 1). Organizers mentioned (section3.6 in [ 15]) that our solution does not statistically differ from chanceperformance. 
This leads us to suspect the weakness of used audioand text features.Table 1: Summary statistics of studiesCondi- Human-Likeness Agent Speech Interlocutortion Median Score MAS MASNA 71∈[70,71] 0.81±0.06 0.63±0.08BM 43∈[42,45] 0.20±0.05−0.01±0.06BD 46∈[43,47] 0.14±0.06 0.07±0.06SA 30∈[29,31] 0.11±0.06 0.09±0.06SB 24∈[23,27] 0.13±0.06 0.07±0.08SC 9∈[9,9]−0.02±0.04−0.03±0.05SD 45∈[43,47] 0.14±0.06 0.02±0.07SE 50∈[49,51] 0.16±0.05 0.05±0.07SF 65∈[64,67] 0.20±0.06 0.04±0.06SG 69∈[67,70] 0.39±0.07−0.09±0.08SH 46∈[44,49] 0.09±0.07−0.21±0.07SI 40∈[39,43] 0.16±0.06 0.04±0.08SJ 51∈[50,53] 0.27±0.06−0.03±0.05SK 37∈[35,40] 0.18±0.06−0.06±0.09SL 51∈[50,51] 0.05±0.05 0.07±0.06The addition to this year’s challenge is the introduction of theappropriateness metric for the main agent’s reaction to the inter-locutor’s behaviour. The study itself is similar to the previous onewith changing interlocutor’s motion. It is also conducted whilethe main agent is silent. Surprisingly, using the interlocutor’s mo-tion features yields better results ( 0.07±0.06MAS in Table 1) andsignificantly better than a chance (section 4.7 in [15]).Overall, our system shows promising results, more on human-likeness and appropriateness for the interlocutor. However, thereare ways to improve this approach by adding more compelling au-dio features or adding teacher forcing to make attention to speechfeatures. Nevertheless, using DeepPhase embeddings allow us totrain the model without suffering converging to a rest pose. Addi-tionally, VQ-VAE codebook encoding allowed the resulting solutionto accord the dyadic setup of conversation and generate plausiblereactions to interlocutor behaviour.6 CONCLUSIONSharing approaches between different tasks in the domain of mo-tion generation could significantly improve the overall state ofthe research community. Our system is based on an approach thatproved itself as a neural motion controller and showed promisingresults during evaluation. We assume that using periodic propertiesof motion could yield improvements in all problems connected withanimation. And DeepPhase embeddings are one of the latest andmost successful approaches to extract these properties, so we rec-ommend considering them as well as VQ-VAE codebook encodingduring the development of future models.Despite that our system showed relatively good results in thechallenge, there is room for improvement. For example, a betterspeech encoder or additional data filtering could be used. Themixture-of-experts framework could also be extended to work withsequences. Some teacher-forcing techniques could also be applied.The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation ICMI ’23, October 9–13, 2023, Paris, FranceREFERENCES[1]Koray Kavukcuoglu Aaron van den Oord, Oriol Vinyals. 2017. Neural DiscreteRepresentation Learning. arXiv preprint arXiv:1711.00937 (2017).[2]Chris Hallacy Aditya Ramesh Gabriel Goh-Sandhini Agarwal Girish SastryAmanda Askell Pamela Mishkin Jack Clark Gretchen Krueger Ilya SutskeverAlec Radford, Jong Wook Kim. 2021. Learning Transferable Visual Models FromNatural Language Supervision. arXiv preprint arXiv:2103.00020 (2021).[3]Stephen Butterworth et al .1930. On the theory of filter amplifiers. WirelessEngineer 7, 6 (1930), 536–541.[4]Che-Jui Chang, Sen Zhang, and Mubbasir Kapadia. 2022. The IVI Lab entry tothe GENEA Challenge 2022–A Tacotron2 based method for co-speech gesturegeneration with locality-constraint attention mechanism. 
In Proceedings of the2022 International Conference on Multimodal Interaction . 784–789.[5]Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2015. Fast andaccurate deep network learning by exponential linear units (elus). arXiv preprintarXiv:1511.07289 (2015).[6]Saeed Ghorbani, Ylva Ferstl, and Marc-André Carbonneau. 2022. Exemplar-basedstylized gesture generation from speech: An entry to the GENEA Challenge 2022.InProceedings of the 2022 International Conference on Multimodal Interaction .778–783.[7]Amir Hertz Amit H. Bermano Daniel Cohen-Or Guy Tevet, Brian Gordon. 2022.MotionCLIP: Exposing Human Motion Generation to CLIP Space. arXiv preprintarXiv:2203.08063 (2022).[8]Daniel Holden, Taku Komura, and Jun Saito. 2017. Phase-functioned neuralnetworks for character control. ACM Transactions on Graphics (TOG) 36, 4 (2017),1–13.[9]Daniel Holden, Jun Saito, Taku Komura, and Thomas Joyce. 2015. Learningmotion manifolds with convolutional autoencoders. In SIGGRAPH Asia 2015technical briefs . 1–4.[10] Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. 1991.Adaptive mixtures of local experts. Neural computation 3, 1 (1991), 79–87.[11] Naoshi Kaneko, Yuna Mitsubayashi, and Geng Mu. 2022. TransGesture: Au-toregressive gesture generation with RNN-transducer. In Proceedings of the 2022International Conference on Multimodal Interaction . 753–757.[12] Vladislav Korzun, Anna Beloborodova, and Arkady Ilin. 2022. ReCell: replicatingrecurrent cell for auto-regressive pose generation. In Companion Publication ofthe 2022 International Conference on Multimodal Interaction . 94–97.[13] Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, and HedvigKjellström. 2019. Analyzing input and output representations for speech-drivengesture generation. In Proceedings of the 19th ACM International Conference onIntelligent Virtual Agents . 97–104.[14] Taras Kucherenko, Patrik Jonell, Sanne Van Waveren, Gustav Eje Henter, SimonAlexandersson, Iolanda Leite, and Hedvig Kjellström. 2020. Gesticulator: A frame-work for semantically-aware speech-driven gesture generation. In Proceedings ofthe 2020 international conference on multimodal interaction . 242–250.[15] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. InProceedings of the ACM International Conference on Multimodal Interaction (ICMI’23). ACM.[16] Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha S Srinivasa,and Yaser Sheikh. 2019. Talking with hands 16.2 m: A large-scale dataset of syn-chronized body-finger motion and audio for conversational motion analysis andsynthesis. In Proceedings of the IEEE/CVF International Conference on ComputerVision . 763–772.[17] Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization.arXiv preprint arXiv:1711.05101 (2017).[18] Andrew Feng Shuhong Lu. 2022. The DeepMotion entry to the GENEA Challenge2022. (2022), 790–796. https://doi.org/10.1145/3536221.3558059[19] Sebastian Starke, Ian Mason, and Taku Komura. 2022. Deepphase: Periodicautoencoders for learning motion phase manifolds. ACM Transactions on Graphics(TOG) 41, 4 (2022), 1–13.[20] Sebastian Starke, Yiwei Zhao, Taku Komura, and Kazi Zaman. 2020. Local motionphases for learning multi-contact character movements. ACM Transactions onGraphics (TOG) 39, 4 (2020), 54–1.[21] Libin Liu Tenglong Ao, Zeyi Zhang. 2023. 
GestureDiffuCLIP: Gesture DiffusionModel with CLIP Latents. arXiv preprint arXiv:2303.14613 (2023).[22] Yuke Lou Baoquan Chen Libin Liu Tenglong Ao, Qingzhe Gao. 2022. RhythmicGesticulator: Rhythm-Aware Co-Speech Gesture Synthesis with HierarchicalNeural Embeddings. arXiv preprint arXiv:2210.01448 (2022).[23] Guy Tevet, Brian Gordon, Amir Hertz, Amit H Bermano, and Daniel Cohen-Or.2022. Motionclip: Exposing human motion generation to clip space. In EuropeanConference on Computer Vision . Springer, 358–374.[24] Jonathan Windle, David Greenwood, and Sarah Taylor. 2022. UEA Digital Humansentry to the GENEA Challenge 2022. In Proceedings of the 2022 InternationalConference on Multimodal Interaction . 771–777.[25] Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2022. The GENEA Challenge 2022: Alarge evaluation of data-driven co-speech gesture generation. In Proceedings ofthe ACM International Conference on Multimodal Interaction (ICMI ’22) . ACM,736–747. https://doi.org/10.1145/3536221.3558058[26] He Zhang, Sebastian Starke, Taku Komura, and Jun Saito. 2018. Mode-adaptiveneural networks for quadruped motion control. ACM Transactions on Graphics(TOG) 37, 4 (2018), 1–11.[27] Chi Zhou, Tengyue Bian, and Kang Chen. 2022. Gesturemaster: Graph-basedspeech-driven gesture generation. In Proceedings of the 2022 International Confer-ence on Multimodal Interaction . 764–770.[28] Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, and Hao Li. 2019. On thecontinuity of rotation representations in neural networks. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition . 5745–5753.APAIRWISE SIGNIFICANT DIFFERENCE FORAPPROPRIATENESS STUDIESNA SG SJBM SFSK SISEBD SDSBSASH SLSC...over condition x, in terms of appropriateness to speechNASGSJBMSFSKSISEBDSDSBSASHSLSCSignificant preference for condition y...(a) Appropriateness for agent speechNA SABD SBSLSESF SISDBM SJSCSKSGSH...over condition x, in terms of appropriateness to interlocutorNASABDSBSLSESFSISDBMSJSCSKSGSHSignificant preference for condition y...(b) Appropriateness for interlocutorFigure 3: Significant differences between conditions in thetwo appropriateness studiesICMI ’23, October 9–13, 2023, Paris, France Korzun et al.Figure 3 shows the pairwise significance in appropriateness study.White means the conditions listed on y-axis achieved an MAS signif-icantly above the condition on the x-axis, black means the opposite(yscored below x), and grey means no statistically significant dif-ference at level a=0.05after correction for the false discovery rate.Our entry SLis rated significantly below or equal to other entriesby appropriateness for speech. 
On the other hand, our solution's appropriateness to the interlocutor is rated significantly below only natural motion NA.

B RATING DISTRIBUTION AND PAIRWISE SIGNIFICANT DIFFERENCE FOR HUMAN-LIKENESS STUDY

Figure 4: Visualisations of the human-likeness study: (a) box plot of the ratings distribution, (b) significance of pairwise differences.

Figure 4 visualises the results of the human-likeness study: 4a shows the rating distribution and 4b the pairwise significance. In 4a, red bars are the median ratings (each with a 0.05 confidence interval) and yellow diamonds are the mean ratings (also with a 0.05 confidence interval). Box edges are at the 25th and 75th percentiles, while whiskers cover 95% of all ratings for each condition. The designations in 4b are as in Figure 3. Our entry SL is rated significantly below only natural motion NA and two participants' entries, SG and SF. It also shows no significant difference from two other models, SE and SJ. |
swc28UDR8Wk | DiffuGesture: Generating Human Gesture From Two-personDialogue With Diffusion ModelsWeiyu ZhaoHarbin Institute of TechnologyWeihai, Shandong, Chinaweiyuzhao66@gmail.comLiangxiao Hu∗Harbin Institute of TechnologyWeihai, Shandong, Chinalx.hu@hit.edu.cnShengping ZhangHarbin Institute of TechnologyWeihai, Shandong, Chinas.zhang@hit.edu.cnABSTRACTThis paper describes the DiffuGesture entry to the GENEA Chal-lenge 2023. In this paper, we utilize conditional diffusion models toformulate the gesture generation problem. The DiffuGesture sys-tem generates human-like gestures from the two-person dialoguescenario, which are responsive to the interlocutor motions and ac-company with the input speech. DiffuGesture system is built uponthe recent DiffGesture [ 39]. Specifically, we introduce a lightweighttransformer encoder to fuse the temporal relationships betweenhuman gestures and multi-modal conditions. Moreover, we adoptimplicit classifier-free guidance to trade off between diversity andgesture quality. According to the collective evaluation released byGENEA Challenge 2023, our system demonstrates strong competi-tiveness in the appropriateness evaluation.CCS CONCEPTS•Computing methodologies →Animation ;Neural networks ;•Human-centered computing →Virtual reality .KEYWORDSgesture generation, diffusion models, neural networksACM Reference Format:Weiyu Zhao, Liangxiao Hu∗, and Shengping Zhang. 2023. DiffuGesture:Generating Human Gesture From Two-person Dialogue With DiffusionModels . In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERAC-TION (ICMI ’23 Companion), October 9–13, 2023, Paris, France. ACM, NewYork, NY, USA, 7 pages. https://doi.org/10.1145/3610661.36165521 INTRODUCTIONHuman gestures serve as a distinct mode of communication in dailyconversations, which assists the speakers in conveying semanticinformation more effectively and facilitates interpersonal commu-nication. [ 21,29]. Therefore, generating realistic co-speech humangestures from conversations plays a crucial role in achieving im-proved interaction between virtual entities and humans. Our goal*Corresponding author.Permission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than theauthor(s) must be honored. Abstracting with credit is permitted. To copy otherwise, orrepublish, to post on servers or to redistribute to lists, requires prior specific permissionand/or a fee. Request permissions from permissions@acm.org.ICMI ’23 Companion, October 9–13, 2023, Paris, France©2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.ACM ISBN 979-8-4007-0321-8/23/10. . . $15.00https://doi.org/10.1145/3610661.3616552is to generate co-speech human gestures from the two-person dia-logue. However, generating human gestures with multi-modal datasuch as audio, text, and conversational cues in two-person dialogueremains a challenging and unresolved problem.Early research in data-driven co-speech gesture generation ap-proaches often relies on statistical analysis. Levine [ 16] et al. utilizeprobabilistic models to establish the relationship between audio andgestures. In recent years, deep learning methods have been increas-ingly applied in co-speech gesture generation. Kucherenko [ 12] etal. and Yoon [ 34] et al. 
employ the multi-layer perceptron (MLP) andrecurrent neural network (RNN) methods to generate deterministichuman gestures, respectively. However, these approaches do notadequately address the implicit mapping between the data and ges-tures [ 13]. To achieve more diverse and personalized gesture move-ments and improve the mapping between data and gestures, thereemerge methods using GAN [ 3,25,30], diffusion models [ 27,32,39]and VQ-VAE [20, 22].However, these methods mainly focus on single-person co-speechgesture generation. In this paper, we present a novel approach forco-speech human gesture generation in the two-person dialoguescenario. Specifically, given the behavior of the interlocutor andthe audio and textual transcriptions of the main agent, we generatethe reaction and co-speech movements of the main agent, respec-tively. Inspired by [ 39], we adopt conditional diffusion models forco-speech gesture generation from the two-person dialogue. Specif-ically, we introduce a lightweight transformer encoder to enhancethe contextual relevance between human gestures and multi-modalconditions. Finally, we introduce implicit classifier-free guidanceto trade off between diversity and gesture quality.The main contributions of our work are:•We present an early attempt to utilize conditional diffusion mod-els for co-speech human gesture generation from two-persondialogue, which generates impressive co-speech gesture move-ments.•We introduce a lightweight transformer encoder that effectivelyfuses the temporal relationships between human gestures andmulti-modal conditions.2 RELATED WORKIn this section, we will discuss the previous work in the fields ofgesture generation and diffusion model generation.2.1 Data-driven Gesture GenerationThe data-driven approach to gesture generation has found extensiveapplications across various domains.In recent years, researchershave utilized audio [ 6,17,18,22], transcribed text [ 3,10,23,26,27,36], and multimodal data [ 2,19,33] to drive gesture generation. TheICMI ’23 Companion, October 9–13, 2023, Paris, France Zhao, et al.use of audio-driven gesture generation is quite common in variousapplications. For example, Ginosaret et al. [ 6] utilize an adversarialdiscriminator to regress gestures from audio. Qian et al. [ 22] employconditional learning to achieve audio-driven gesture generation,alleviating the ambiguity in simultaneous speech and gesture syn-thesis. Audio2gestures [ 18] and DanceFormer [ 17] use a variationalautoencoder [ 11] and Transformer [ 28], respectively, to generategestures from audio. Text-driven motion synthesis can be seen aslearning a joint embedding of the text feature space and the motionfeature space[ 22]. Text2gestures [ 3] establishes the connection be-tween text and gesture actions using a transformer. T2M-GPT [ 36]and MotionGPT[ 10], built upon generative pre-trained transformer(GPT), treat gesture actions as a language and utilize VQ-VAE totransform text into gesture actions. MDM [ 27] and MotionClip [ 26]preprocess transcribed text using CLIP[ 23] to establish the conver-sion between action and text embeddings.Recently, there has been an increasing trend in co-speech ges-ture generation to use multimodal data, including audio, text, andspeaker ID. Yoon et al. [ 33] proposed a model that combines multi-modal context and adversarial training to generate gestures thatresemble human-like movements and are synchronized with thespeech content and rhythm. 
Rhythmic Gesticulator [ 2] is the firstmodel to use neural networks to establish the relationship betweengestures and audio in terms of rhythm and semantics. HA2G [ 19]leverages contrastive learning strategies to fully utilize the richconnections between speech audio, text, and human gestures, re-sulting in the generation of realistic gesture movements. However,none of the aforementioned works considered the influence of otherindividuals in dyadic conversations on the embodied agents.2.2 Diffusion ModelsDiffusion models are a type of probabilistic generative model basedon stochastic processes [ 8], where initial data points graduallyevolve towards the target distribution through a diffusion processat each time step. Dhariwal et al. [ 5] introduce classifier guidance toimprove sample quality and generate higher-quality results. Then,the introduction of the Classifier-Free Guidance [ 9] eliminates theneed for explicit classification models and supports more open-ended and exploratory generation in various tasks. Diffusion modelshave recently been widely applied in various fields, such as imagegeneration [24], 3D shape generation [31], video generation [7].More recently, in the context of gesture generation tasks, dif-fusion generative models [ 1,27,37,39] have also been employedfor co-speech gesture generation. Inspired by the work of DiffGes-ture [ 39] in 2D gesture generation, we have developed a frameworkfor generating 3D gesture poses from multimodal data in a two-person dialogue scenario.3 METHODGiven the behavior of the interlocutor and the audio and textualtranscriptions of the main agent, our goal is to generate the listeningreactions and co-speech motions simultaneously. The architectureof our system is depicted in Figure 1(a). We first introduce theproblem definition in Section 3.1. Then we present the diffusionprocess and reverse process for gesture generation in Section 3.1.Finally, we develop a transformer encoder to fuse the temporalrelationships between human gestures and multi-modal conditionsin Section 3.3.3.1 Problem DefinitionGiven the sequences of 3D full-body motions, we represent them asx={p1,p2,p3,...,pn}∈RN×3J,Nrepresents the sequence lengthandJdenotes the total joint number. The reverse denoising processGof the diffusion model is parameterized by θto synthesize themain agent skeleton sequence xm, which is further conditionedon the multi-modal conditions Cand the initial poses of the pre-viousMframesxpre. The learning objective can be expressed asargminθxm−Gθ(C,xpre).3.2 Diffusion-based Gesture GenerationInspired by the previous work [ 39], we extend this model in thetwo-person dialogue scenario. Unlike generating 2D skeletal upper-body poses in [ 39], we synthesize the full-body human gestures ina two-person dialogue scenario.Diffusion Process. The diffusion process, also known as theforward process, is used to approximate the posterior distributionq(x1:T|x0). It gradually introduces Gaussian noise into the originaldistribution based on the variance sequence β1,...,βt, whereβi∈(0,1). The diffusion process is defined as follows:q(x1:Nt|x1:Nt−1)=N(√︁βtx1:Nt−1,(1−βt)I), (1)q(x1:T|x0)=TÖt=1q(x1:Nt|x1:Nt−1), (2)wherex1:Ntrepresents the main agent motion sequence {pm}Ni=1attdenoising step. Next, we will slightly abuse the use of letters and usexto represent x1:N. By progressively adding noise in this mannerto the original gesture motions x0, it approaches a distribution thatclosely resembles white noise.Reverse Process. 
The reverse process, also known as the gener-ation process, estimates the joint distribution pθ(x0:T). The reverseprocess of diffusion models also maintains the form of Gaussiantransition. Additionally, following the idea of classifier-free guid-ance, we train the model in both unconditional and conditionalgeneration settings to generate more realistic and diverse gesturemotions. The reverse process is defined as follows:pθ(x0:T)=pθ(xT)TÖt=1pθ(xt−1|xt,C), (3)where pθ(xt−1|xt,C)=N(xt−1;μθ(xt,t,C),∑︁θ(xt,t)).(4)Equation 4 represents the conditional generation and we set theconditionsCas zero (denoted as φ) for unconditional generationin the training stage. The corrupted noisy gesture sequence xtissampled by q(xt|x0).Traning loss. According to DDPM [ 8], the previous corruptedgesture sequence xt−1is defined as follows:xt−1=xt−√1− ̄αtˆε√ ̄αt, (5)where ̄αt=tÖi=11−βi. (6)DiffuGesture: Generating Human Gesture From Two-person Dialogue With Diffusion Models ICMI ’23 Companion, October 9–13, 2023, Paris, FranceFigure 1: Overview of the Diffu2guesture framwork. In the preprocessing stage (yellow), we develop a condition encoder and apropose encoder to process multi-modal data and previous poses, respectively. Then we concatenate the two outputs together tocreate condition features C. In the training stage (green), we introduce classifier-free guidance to train the transformer encoder.In the sampling stage (pink), we start with random noise xTand generate a clean sample x0throughTdenoising steps.So we can denoise the Gaussian noise to the original gesture motiondistribution step by step. Then, we use the Mean Squared Error(MSE) loss to compute the loss between the estimated noise andthe actual noise at each time step [39]:Lsimple =Eqhε−εθ(√ ̄αtx0+√1− ̄αtε,C,t)2i. (7)Whereεθis the predicted Gaussian noise, and εrepresents theactual added noise. During the training process, we randomly maskthe conditions Cfor the unconditional setting.Sampling. Generating motion from speech is an implicit map-ping rather than a direct one-to-one correspondence between speechand gestures. To ensure a better correlation between audio andactions, we introduce classifier-free guidance [ 5]. From the perspec-tive of gesture generation, we can consider it as follows:GM=G(xt,φ,t)+s·(G(xt,C,t)−G(xt,φ,t)). (8)Wheresis a hyperparameter. As mentioned in the training losssection, during the training process, we utilize random masking tocreate unconditional input for training unconditional models. Then,we train a single transformer encoder and MLP layer under variousconditioning setups between conditional models and unconditionalmodels. This enables us to realize classifier-free guidance.Based on the aforementioned context, diffusion models can beused to generate natural embodied agent gestures in a two-persondialogue setting.3.3 Cross-Modal Attention EncodingGenerating 3D gesture poses using conditional diffusion models isdifferent from generating images. Both the pose sequence xand themulti-modal conditions Cexhibit strong temporal dependencies.Here, we need to establish a module to ensure that our results aretime-dependent. Unlike previous work in the GENEA 2022 chal-lenge that utilizes LSTM [ 4], VQVAE [ 20], and graph models [ 38],we employ a lightweight transformer encoder to encode Nframesof continuous motions and multi-modal data. We align the noisygesture sequence xtand multi-modal conditions Cin the time di-mension and treat each frame as a separate token. The time step tistreated as a separate token. 
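A minimal sketch of this tokenisation is shown below: noisy poses and the time-aligned conditions are concatenated per frame, a diffusion-step embedding is prepended as an extra token, and the sequence is passed through a standard transformer encoder. The feature dimensions, layer counts, and number of diffusion steps are assumptions for illustration, not the exact network used in DiffuGesture.

import torch
import torch.nn as nn

class CrossModalEncoder(nn.Module):
    # Illustrative dimensions: 78-D poses (25 joints) and cond_dim-D conditions.
    def __init__(self, pose_dim=78, cond_dim=256, d_model=256, n_layers=4):
        super().__init__()
        self.frame_proj = nn.Linear(pose_dim + cond_dim, d_model)
        self.step_embed = nn.Embedding(1000, d_model)  # assuming 1,000 diffusion steps
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d_model, pose_dim)        # predict the noise per frame

    def forward(self, x_t, cond, t):
        # x_t: (B, N, pose_dim) noisy poses; cond: (B, N, cond_dim); t: (B,) step indices
        tokens = self.frame_proj(torch.cat([x_t, cond], dim=-1))  # one token per frame
        step_token = self.step_embed(t).unsqueeze(1)              # (B, 1, d_model)
        h = self.encoder(torch.cat([step_token, tokens], dim=1))
        return self.out(h[:, 1:])                                 # drop the step token

# Example: a batch of 2 chunks of 300 frames each.
model = CrossModalEncoder()
eps_hat = model(torch.randn(2, 300, 78), torch.randn(2, 300, 256), torch.randint(0, 1000, (2,)))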
We then utilize attention mechanismsfor encoding.Attention(Q,K,V)=softmax(QKT√︁dk)V. (9)ICMI ’23 Companion, October 9–13, 2023, Paris, France Zhao, et al.WhereQ,K, andVare the query, key, and value matrix from inputtokens, in the multi-head attention mechanism.4 EXPERIMENT4.1 Data ProcessingThe only dataset we used is the GENEA Challenge 2023 [ 14] dataset,which is an extension of Lee et al. ’s Talking With Hands [ 15] dataset.The dataset includes participants consisting of a main agent (taskedwith generating motion) and an interlocutor (the other party in theconversation). The conversation data in the dataset is in dyadic form,providing audio and text transcriptions for both parties, speakerIDs, and motion. In the provided official data, each recorded con-versation is duplicated with flipped roles to augment the trainingdata.We fully leverage the various information available in the dataset,including the audio and transcribed text between the main agentand the interlocutor, as well as the speaker IDs. We follow thesame processing approach as the baseline [ 4] for handling audio,transcriptions, and human body joints. We obtain three audio fea-tures at a sampling rate of 44100: mel-spectrograms, MFCCs, andprosodies. The frames generated have a rate of 30 FPS and theirlength matches the duration of the motion sequence. We encodethe text using Fasttext, resulting in word vectors of dimension 300.Additionally, two extra dimensions are used to indicate whether thespeaker is silent or laughing. Furthermore, we define the identityinformation of each speaker using one-hot encoding.For the processing of motion data, we also select 25 joints, in-cluding the root node, which have a significant influence on skele-ton motion. These joints are represented in a dimension of 78. Togenerate high-quality motion sequences, we segment the motionsequence into chunks of 300 frames each, which serve as inputsto the diffusion process. To ensure continuity between adjacentmotion segments, we extract the preceding 50 previous poses aspart of the generation condition. After aligning the audio features,encoded text, identity information, and speakers’ motion sequencesin the temporal dimension, we obtain the same length as the motionsequences. Similarly, the previous pose is mapped to the correspond-ing dimension after being processed by the prepose encoder.4.2 EvaluationThe evaluation of our approach is conducted through subjectiveassessment by the organizers of the GENEA Challenge 2023 andother participating teams. The organizers recruit study participantsresiding in the UK, IE, USA, CAN, AUS, and NZ, who had Englishas their first language, via crowdsourcing platforms to performthe evaluations. Multiple attention checks are implemented dur-ing the experiment to ensure the participants’ engagement andattentiveness. The evaluation of this challenge consisted of threeaspects: human-likeness; appropriateness for agent speech;appropriateness for the interlocutor. The specific results arepresented in Table 1 and Table 2. The natural motion is labeled NA.Our method is labeled SBin the tables.Human-likeness. The study participants watch 8 to 10 secondsof video and rate the motion of the virtual character as human-like,independent of the dialogue content and the speaker. DiffuGestureperforms poorly on this metric.NA SG SJBM SFSK SISEBD SD SBSASH SLSC0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. 
mismatched(a) Appropriateness for agent speechNA SABD SB SLSESF SISDBM SJSCSKSGSH0%10%20%30%40%50%60%70%80%90%100%Proportion of annotator preferencesClear pref. matched Slight pref. matched No pref. Slight pref. mismatched Clear pref. mismatched(b) Appropriateness for the interlocutorFigure 2: The bar plots display response distribution in ap-propriateness studies. The blue bar represents preferredmatched motion responses, and the red bar represents pre-ferred mismatched motion responses. The height of each barcorresponds to the fraction of responses in each category. Ontop of each bar is also a confidence interval for the mean ap-propriateness score, scaled to fit the current axes. The dottedblack line indicates chance-level performance. Conditionsare ordered by mean appropriateness score.Appropriateness for agent speech. This metric evaluateswhether the motion of the virtual character is appropriate for thegiven speech while controlling for the overall human-likeness ofthe motion [ 35]. During the testing process, study participants arepresented with a pair of videos, both from the same condition,where one video matches the specific speech and the other is froman unrelated speech. Both videos play the specific speech, and par-ticipants are asked to select the video they believe best matches thespeech.Appropriateness for the interlocutor. During the conver-sation process, both participants in the dialogue influence eachother. Therefore, this metric evaluates whether the motion of thevirtual character is appropriate for the given interlocutor’s behav-ior (including speech and motion) while controlling for the overallhuman-likeness of the motion. Study participants are also presentedwith a pair of videos, where the behavior of the main agent remainsfixed, but the behavior of the interlocutor is randomly replaced inDiffuGesture: Generating Human Gesture From Two-person Dialogue With Diffusion Models ICMI ’23 Companion, October 9–13, 2023, Paris, FranceTable 1: Summary statistics of user-study responses from both appropriateness studies, with confidence intervals for the meanappropriateness score (MAS) at the level α=0.05. “Pref. matched” identifies how often test-takers preferred matched motion interms of appropriateness after splitting ties. Conditions are ordered by mean appropriateness score.(a) Appropriateness for agent speechCondi-MASPref. Raw response counttion matched 2 1 0−1−2 SumNA 0.81±0.06 73.6% 755 452 185 217 157 1766SG 0.39±0.07 61.8% 531 486 201 330 259 1807SJ 0.27±0.06 58.4% 338 521 391 401 155 1806BM 0.20±0.05 56.6% 269 559 390 451 139 1808SF 0.20±0.06 55.8% 397 483 261 421 249 1811SK 0.18±0.06 55.6% 370 491 283 406 252 1802SI 0.16±0.06 55.5% 283 547 342 428 202 1802SE 0.16±0.05 54.9% 221 525 489 453 117 1805BD 0.14±0.06 54.8% 310 505 357 422 220 1814SD 0.14±0.06 55.0% 252 561 350 459 175 1797SB 0.13±0.06 55.0% 320 508 339 386 262 1815SA 0.11±0.06 53.6% 238 495 438 444 162 1777SH 0.09±0.07 52.9% 384 438 258 393 325 1798SL 0.05±0.05 51.7% 200 522 432 491 170 1815SC−0.02±0.04 49.1% 72 284 1057 314 76 1803(b) Appropriateness for the interlocutorCondi-MASPref. 
Raw response counttion matched 2 1 0−1−2 SumNA 0.63±0.08 67.9% 367 272 98 189 88 1014SA 0.09±0.06 53.5% 77 243 444 194 55 1013BD 0.07±0.06 53.0% 74 274 374 229 59 1010SB 0.07±0.08 51.8% 156 262 206 263 119 1006SL 0.07±0.06 53.4% 52 267 439 204 47 1009SE 0.05±0.07 51.8% 89 305 263 284 73 1014SF 0.04±0.06 50.9% 94 208 419 208 76 1005SI 0.04±0.08 50.9% 147 269 193 269 129 1007SD 0.02±0.07 52.2% 85 307 278 241 106 1017BM−0.01±0.06 49.9% 55 212 470 206 63 1006SJ−0.03±0.05 49.1% 31 157 617 168 39 1012SC−0.03±0.05 49.1% 34 183 541 190 45 993SK−0.06±0.09 47.4% 200 227 111 276 205 1019SG−0.09±0.08 46.7% 140 252 163 293 167 1015SH−0.21±0.07 44.0% 55 237 308 270 144 1014Table 2: Summary statistics of user-study ratings from thehuman-likeness study, with confidence intervals at the levelα=0.05. Conditions are ordered by decreasing sample me-dian rating. Our entry is SB.Condi- Human-likenesstion Median MeanNA 71∈[70,71]68.4±1.0SG 69∈[67,70]65.6±1.4SF 65∈[64,67]63.6±1.3SJ 51∈[50,53]51.8±1.3SL 51∈[50,51]50.6±1.3SE 50∈[49,51]50.9±1.3SH 46∈[44,49]45.1±1.5BD 46∈[43,47]45.3±1.4SD 45∈[43,47]44.7±1.3BM 43∈[42,45]42.9±1.3SI 40∈[39,43]41.4±1.4SK 37∈[35,40]40.2±1.5SA 30∈[29,31]32.0±1.3SB 24∈[23,27]27.4±1.3SC 9∈[9,9]11.6±0.9one of the videos. Participants are then asked to select the videothat best matches the behavior of the interlocutor. DiffuGesturehas achieved promising results in this metric.5 DISCUSSIONAs shown in Table 1, we achieve satisfactory results in both met-rics of appropriateness for agent speech and the interlocutor. Ourscores for these two metrics are 0.13 and 0.07, respectively. For theappropriateness of the interlocutor, we achieve favorable results.The score of the “Preferred Matche” category is 51.8%. Furthermore,as shown in Figure 2(b), a considerable proportion of participantschose our results as their preferred matched motion responses. Webelieve that several factors contribute to these results. Firstly, wemake effective use of the provided information, including audio,transcribed text, and interlocutor behavior. Our data processingmethods have demonstrated their effectiveness. Additionally, theintroduced cross-modal attention encoder proves to be effective. Itenables us to adequately encode information from different modal-ities, thus generating plausible motions of the main agent withrespect to the behavior of the interlocutor.We also achieve unsatisfactory results in the human-likenessmetric, with a score of only 24. The challenge provides long-termhuman gesture sequences with variable lengths. Our naive diffu-sion models without specific designs only support generating fixed-length motion sequences. We segment the condition sequencesand simply predict 300 frames for each segment and concatenatethe predicted fixed-length motion sequences to generate the com-plete motions. This results in noticeable jitter at the junctions ofthe predicted fixed-length motion sequences. To eliminate the phe-nomenon, we also make some effort such as taking the previouslypredicted motions and acceleration between adjacent frames aspart of the conditions. Furthermore, we also increase the lengthof generated sequences to reduce the discontinuities of generatedmotions. However, these naive methods do not yield the expectedresults. The acceleration constraint reduces the richness of the gen-erated motions, making them less human-like. We also mentionthat the provided motion sequences for evaluation are not finaloptimized ones. 
This may cause undesired evaluation results.ICMI ’23 Companion, October 9–13, 2023, Paris, France Zhao, et al.6 CONCLUSIONWe propose the DiffuGesture as described in this paper to partici-pate in the GENEA Challenge 2023. Based on conditional diffusionmodels, we develop a system that generates co-speech human ges-tures for the main agent in the two-person dialogue. In our system,we encode the features of audio, transcriptions, interlocutor behav-ior using a transformer encoder. Furthermore, we adopt classifier-free guidance to trade off between diversity and gesture quality. Theevaluation results show that DiffuGesture performs well in termsof appropriateness for the interlocutor metric. However, comparedto other systems participating in the challenge, it does not generatehigh-fidelity human-like motions effectively.In the future, we will continue to explore conditional diffusionmodels to generate high-fidelity co-speech human gestures in vari-ous scenarios. We aim to handle the generation of variable-lengthmotion sequences and reduce the distortion of motions at break-points. Additionally, we intend to investigate the incorporation ofsemantic supervision to aid in the generation of co-speech gestures.We will focus on these aspects in our future work.REFERENCES[1]Simon Alexanderson, Rajmund Nagy, Jonas Beskow, and Gustav Eje Henter. 2022.Listen, denoise, action! Audio-driven motion synthesis with diffusion models.arXiv preprint arXiv:2211.09707 (2022).[2]Tenglong Ao, Qingzhe Gao, Yuke Lou, Baoquan Chen, and Libin Liu. 2022. Rhyth-mic gesticulator: Rhythm-aware co-speech gesture synthesis with hierarchicalneural embeddings. ACM Transactions on Graphics (TOG) 41, 6 (2022), 1–19.[3]Uttaran Bhattacharya, Nicholas Rewkowski, Abhishek Banerjee, Pooja Guhan,Aniket Bera, and Dinesh Manocha. 2021. Text2gestures: A transformer-basednetwork for generating emotive body gestures for virtual agents. In 2021 IEEEvirtual reality and 3D user interfaces (VR) . IEEE, 1–10.[4]Che-Jui Chang, Sen Zhang, and Mubbasir Kapadia. 2022. The IVI Lab entry tothe GENEA Challenge 2022–A Tacotron2 based method for co-speech gesturegeneration with locality-constraint attention mechanism. In Proceedings of the2022 International Conference on Multimodal Interaction . 784–789.[5]Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans onimage synthesis. Advances in neural information processing systems 34 (2021),8780–8794.[6]S. Ginosar, A. Bar, G. Kohavi, C. Chan, A. Owens, and J. Malik. 2019. LearningIndividual Styles of Conversational Gesture. In Computer Vision and PatternRecognition (CVPR) . IEEE.[7]William Harvey, Saeid Naderiparizi, Vaden Masrani, Christian Weilbach, andFrank Wood. 2022. Flexible diffusion modeling of long videos. Advances in NeuralInformation Processing Systems 35 (2022), 27953–27965.[8]Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilisticmodels. Advances in neural information processing systems 33 (2020), 6840–6851.[9]Jonathan Ho and Tim Salimans. 2022. Classifier-free diffusion guidance. arXivpreprint arXiv:2207.12598 (2022).[10] Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. 2023. Mo-tionGPT: Human Motion as a Foreign Language. arXiv preprint arXiv:2306.14795(2023).[11] Diederik P Kingma and Max Welling. 2013. Auto-encoding variational bayes.arXiv preprint arXiv:1312.6114 (2013).[12] Taras Kucherenko, Patrik Jonell, Sanne Van Waveren, Gustav Eje Henter, SimonAlexandersson, Iolanda Leite, and Hedvig Kjellström. 2020. 
Gesticulator: A frame-work for semantically-aware speech-driven gesture generation. In Proceedings ofthe 2020 International Conference on Multimodal Interaction . 242–250.[13] Taras Kucherenko, Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, Zerrin Yumak,and Gustav Henter. 2021. GENEA Workshop 2021: The 2nd Workshop on Genera-tion and Evaluation of Non-verbal Behaviour for Embodied Agents. In Proceedingsof the 2021 International Conference on Multimodal Interaction . 872–873.[14] Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. InProceedings of the ACM International Conference on Multimodal Interaction (ICMI’23). ACM.[15] Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha S Srinivasa,and Yaser Sheikh. 2019. Talking with hands 16.2 m: A large-scale dataset of syn-chronized body-finger motion and audio for conversational motion analysis andsynthesis. In Proceedings of the IEEE/CVF International Conference on ComputerVision . 763–772.[16] Sergey Levine, Philipp Krähenbühl, Sebastian Thrun, and Vladlen Koltun. 2010.Gesture controllers. In Acm siggraph 2010 papers . 1–11.[17] Buyu Li, Yongchi Zhao, Shi Zhelun, and Lu Sheng. 2022. Danceformer: Music con-ditioned 3d dance generation with parametric motion transformer. In Proceedingsof the AAAI Conference on Artificial Intelligence , Vol. 36. 1272–1279.[18] Jing Li, Di Kang, Wenjie Pei, Xuefei Zhe, Ying Zhang, Zhenyu He, and LinchaoBao. 2021. Audio2gestures: Generating diverse gestures from speech audio withconditional variational autoencoders. In Proceedings of the IEEE/CVF InternationalConference on Computer Vision . 11293–11302.[19] Xian Liu, Qianyi Wu, Hang Zhou, Yinghao Xu, Rui Qian, Xinyi Lin, Xiaowei Zhou,Wayne Wu, Bo Dai, and Bolei Zhou. 2022. Learning Hierarchical Cross-ModalAssociation for Co-Speech Gesture Generation. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition . 10462–10472.[20] Shuhong Lu and Andrew Feng. 2022. The DeepMotion entry to the GENEAChallenge 2022. In Proceedings of the 2022 International Conference on MultimodalInteraction . 790–796.[21] Steven G McCafferty. 2004. Space for cognition: Gesture and second languagelearning. International Journal of Applied Linguistics 14, 1 (2004), 148–165.[22] Shenhan Qian, Zhi Tu, Yihao Zhi, Wen Liu, and Shenghua Gao. 2021. Speechdrives templates: Co-speech gesture synthesis with learned templates. In Proceed-ings of the IEEE/CVF International Conference on Computer Vision . 11077–11086.[23] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark,et al.2021. Learning transferable visual models from natural language supervision.InInternational conference on machine learning . PMLR, 8748–8763.[24] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, andKfir Aberman. 2023. Dreambooth: Fine tuning text-to-image diffusion models forsubject-driven generation. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition . 22500–22510.[25] Najmeh Sadoughi and Carlos Busso. 2018. Novel realizations of speech-drivenhead movements with generative adversarial networks. In 2018 IEEE InternationalConference on Acoustics, Speech and Signal Processing (ICASSP) . 
IEEE, 6169–6173.[26] Guy Tevet, Brian Gordon, Amir Hertz, Amit H Bermano, and Daniel Cohen-Or.2022. Motionclip: Exposing human motion generation to clip space. In EuropeanConference on Computer Vision . Springer, 358–374.[27] Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, andAmit H Bermano. 2022. Human motion diffusion model. arXiv preprintarXiv:2209.14916 (2022).[28] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is allyou need. Advances in neural information processing systems 30 (2017).[29] Petra Wagner, Zofia Malisz, and Stefan Kopp. 2014. Gesture and speech ininteraction: An overview. , 209–232 pages.[30] Bowen Wu, Chaoran Liu, Carlos Toshinori Ishi, and Hiroshi Ishiguro. 2021.Modeling the conditional distribution of co-speech upper body gesture jointlyusing conditional-GAN and unrolled-GAN. Electronics 10, 3 (2021), 228.[31] Jamie Wynn and Daniyar Turmukhambetov. 2023. Diffusionerf: Regularizingneural radiance fields with denoising diffusion models. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition . 4180–4189.[32] Lianying Yin, Yijun Wang, Tianyu He, Jinming Liu, Wei Zhao, Bohan Li, Xin Jin,and Jianxin Lin. 2023. EMoG: Synthesizing Emotive Co-speech 3D Gesture withDiffusion Model. arXiv preprint arXiv:2306.11496 (2023).[33] Youngwoo Yoon, Bok Cha, Joo-Haeng Lee, Minsu Jang, Jaeyeon Lee, JaehongKim, and Geehyuk Lee. 2020. Speech Gesture Generation from the TrimodalContext of Text, Audio, and Speaker Identity. ACM Transactions on Graphics 39,6 (2020).[34] Youngwoo Yoon, Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim, and GeehyukLee. 2019. Robots learn social skills: End-to-end learning of co-speech gesturegeneration for humanoid robots. In 2019 International Conference on Robotics andAutomation (ICRA) . IEEE, 4303–4309.[35] Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Carla Viegas, Teodor Nikolov,Mihail Tsakov, and Gustav Eje Henter. 2022. The GENEA Challenge 2022: Alarge evaluation of data-driven co-speech gesture generation. In Proceedings ofthe 2022 International Conference on Multimodal Interaction . 736–747.[36] Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Shaoli Huang, Yong Zhang,Hongwei Zhao, Hongtao Lu, and Xi Shen. 2023. T2m-gpt: Generating humanmotion from textual descriptions with discrete representations. arXiv preprintarXiv:2301.06052 (2023).[37] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, LeiYang, and Ziwei Liu. 2022. Motiondiffuse: Text-driven human motion generationwith diffusion model. arXiv preprint arXiv:2208.15001 (2022).[38] Chi Zhou, Tengyue Bian, and Kang Chen. 2022. Gesturemaster: Graph-basedspeech-driven gesture generation. In Proceedings of the 2022 International Confer-ence on Multimodal Interaction . 764–770.[39] Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, and Lequan Yu. 2023.Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation. InDiffuGesture: Generating Human Gesture From Two-person Dialogue With Diffusion Models ICMI ’23 Companion, October 9–13, 2023, Paris, FranceProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR) . 
10544–10553.

A RESEARCH METHODS

A.1 Significant differences for Appropriateness

Figure 3: Significant differences between conditions in the two appropriateness studies, shown as pairwise-comparison matrices for (a) appropriateness for agent speech and (b) appropriateness for the interlocutor. White means the condition listed on the y-axis achieved an MAS significantly above the condition on the x-axis, black means the opposite (y scored below x), and grey means no statistically significant difference at level α=0.05 after correction for the false discovery rate.

A.2 Significant differences for Human-likeness

Figure 4: Significance of pairwise differences between conditions in the human-likeness study. White means that the condition listed on the y-axis rated significantly above the condition on the x-axis, black means the opposite (y rated below x), and grey means no statistically significant difference at the level α=0.05 after Holm-Bonferroni correction.
eBLV3i7PG1c | ABLATOR: Robust Horizontal-Scaling of Machine LearningAblation ExperimentsIordanis Fostiropoulos1Laurent Itti11University of Southern California, Los Angeles CaliforniaAbstract Understanding the efficacy of a method requires ablation experiments. Current MachineLearning (ML) workflows emphasize the vertical scaling of large models with paradigms suchas ‘data-parallelism’ or ‘model-parallelism’. As a consequence, there is a lack of methodsfor horizontal scaling of multiple experimental trials. Horizontal scaling is labor intensivewhen different tools are used for different experiment stages, such as for hyper-parameteroptimization, distributed execution, or the consolidation of artifacts. We identify that errorsin earlier stages of experimentation propagate to the analysis. Based on our observations,experimental results, and the current literature, we provide recommendations on best prac-tices to prevent errors. To reduce the effort required to perform an accurate analysis andaddress common errors when scaling the execution of multiple experiments, we introduceABLATOR . Our framework uses a stateful experiment design paradigm that provides experi-ment persistence and is robust to errors. Our actionable analysis artifacts are automaticallyproduced by the experiment state and reduce the time to evaluate a hypothesis. We evaluateABLATOR with ablation studies on a Transformer model, ‘Tablator’, where we study the effectof 6 architectural components, 8 model hyperparameters, 3 training hyperparameters, and4 dataset preprocessing methodologies on 11 tabular datasets. We performed the largestablation experiment for tabular data on Transformer models to date, evaluating 2,337 modelsin total. Finally, we open source ABLATOR ; https://github.com/fostiropoulos/ablator1 IntroductionMachine Learning (ML) research has been criticized for an inability to explain the reasons a methodprovides an improvement on a specific benchmark. It can be unclear whether a novel component isresponsible for the improvement or result of a statistical outlier [35].Ablation is used to understand how the hyperparameters and architectural components con-tribute to the performance of a method. This is in contrast to Hyper-Parameter Optimization (HPO)or Neural Architecture Search (NAS) where the objective is to search for the single best performingconfiguration. As the complexity of ML models increases so does the number of components andparameters that need to be ablated, which increases the search space of possible configurations.Therefore, efficient horizontal-scaling of multiple parallel experimental trials is necessary.There are lack of available frameworks for horizontal scaling of ablation experiments. Currently,ML practitioners manually perform horizontal scaling for experiments, such as for hyperparameterselection, distributed execution, consolidation, and analysis of artifacts [ 10]. Additionally, currentframeworks [ 31] for distributed execution do not provide native support for maintaining thestate of an experiment and resuming the execution of multiple trials, referred to as experimentpersistence . We find that errors in the early stages of experiments can propagate to the analysisand lead to misleading conclusions. Possible errors may be introduced from sampling bias in thehyperparameter selection strategy or the distributed execution fault-intolerance, survival bias .The execution of randomized control trials is necessary to determine causal effects [ 23,20]. 
Weidentify several sources of errors that can influence the results. We categorize them as Analysis,Execution, and Implementation errors. Analysis errors can result from the hyperparameter selectionAutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track ©2023 the authors, released under CC BY 4.0Figure 1: Left is the rapid prototyping process when using ABLATOR where only the method implemen-tation and the configuration is required to RUN() the study and provide ANALAYSIS() .ABLATOR handlesthe horizontal scaling of experimental trials on a cluster of nodes and is fault tolerant, where trials canbe continued on the same or different node due to the Persistence provided by ABLATOR .Right is theprocess without ABLATOR where the user must use different Libraries or manually perform, ‘HPO Selec-tion’, ‘Resource Allocation’, ‘Analysis’. Additional Manual Effort will be required to integrate betweenthe libraries, where errors between different steps propagate to the analysis that will be erroneous.ABLATOR provides automation by removing boiler-plate code and managing errors internally.sampling bias. Nonrandom effects during experiment execution can introduce analysis errors. Forexample, inconclusive trials due to out-of-memory errors caused by a larger model footprint wouldintroduce survival bias to the analysis that will favor smaller models. Implementation errors aremistakes made by users caused by the increased code complexity of ablating multiple methodcomponents while maintaining different code bases. We discuss the details of our analysis inSection 3.2.To aid in error-free horizontal scaling of multiple experiments in ML community, we propose astateful experiment paradigm where we unify all experiment stages under a single framework. Astateful experiment is initialized by the configuration and code implementation of a method. Ourframework maintains the state of each experimental trial and provides experiment persistence , wherethe experiment can continue the execution agnostic to the execution environment. The analysisartifacts are produced automatically by the experiment state for faster prototyping. Our paradigmis implemented in our tool ABLATOR with support for PyTorch [ 33] model development. We presentan analysis of the sources of errors and provide recommendations that can be useful beyond ourframework. We use our framework to study the effect of multiple training and model componentson the performance of a Transformer model for tabular dataset ‘Tablator’ where we perform alarge scale ablation study of 2,337 trials. Our contributions can be summarized: First ; We provide aformalization of a stateful experiment design paradigm that we use to address common errors in theexecution of ML experiments. Second ;ABLATOR , a framework that implements our paradigm andfacilitate the automated execution and analysis of a model implementation given a configuration.Third ; We identify sources of error in ML ablation studies and provide recommendations formitigating them. Fourth ; We perform the largest to date ablation study of Deep Learning model onTabular dataset and provide analysis that can be useful to the research community.We first introduce the features of ABLATOR relevant to horizontal scaling of experiments. Next,we evaluate the main features of our tool in a case study demonstrating the horizontal scalingcapabilities of ABLATOR . 
We present our results using three research questions in Sections 3.1 to 3.3.

2 Methods

To implement ABLATOR and address common issues in horizontal scaling of experiments, it is necessary to introduce the formalism of a 'stateful experiment design' paradigm. In this section, we introduce our paradigm and, in Section 2.4, the implementation of ABLATOR. We identify three stages of an experiment: the design, execution, and analysis (Sections 2.1 to 2.3).

2.1 Experiment Design

During the design phase of an ML ablation study, a hypothesis is defined as an experiment on the improvement that an architectural component, such as Residual Connections, provides to the performance of the model. The search-space of our hypothesis can be defined as Residual = [True, False]. The methodology of our experiment is defined by the implementation of the model.

Multiple experimental trials are required to improve the statistical power of a test [20], which requires randomly sampling from the search-space. An experimental trial can be described as a stochastic process that produces a performance metric. The stochasticity can be observed when performance differs significantly with identical initial conditions, such as re-running the same experiment but obtaining different results.

Thus, to define a trial, we maintain two states to describe the system at any given point: the initial conditions (Sections 2.1.1 and 2.1.2) and the current state (Section 2.2). The initial conditions of a trial are defined by the sampled hyper-parameters and the implementation.

distributed.yaml
total_trials: 2000
optim_metrics: [[val_loss, min]]
tune:
  train_config.optimizer_config.name: ["adam", ...
  train_config.dataset: ["year", "yahoo", "helena", ...
  model_config.mask_type: ["mix", "global", "full", "random"]
  model_config.residual: [True, False]
  model_config.random_mask_alpha: [0.5, 1]

prototyping.yaml
train_config:
  dataset: adult
  optimizer_config:
    name: adam
model_config:
  mask_type: random

@configclass
class TablatorConfig(ModelConfig):
    residual: bool = True
    d_out: Derived[ty.Optional[int]] = None
    mask_type: MaskType = MaskType("random")

@configclass
class RunConfig(ParallelConfig):
    experiment_dir: Stateless[Optional[str]] = None
    model_config: ModelConfig
    train_config: TrainConfig

Figure 2: ABLATOR provides a configuration system specific to ML experiments, where it has to encompass multiple trials in a compact definition and be unambiguous. On the left is an illustration of the configuration for distributed execution (distributed.yaml) and method prototyping (prototyping.yaml). On the right, the configuration is type-checked by the ABLATOR library. The library provides flexible type definitions (red) that are resolved during run-time. The configuration is compact and unambiguous at initialization, supporting our stateful experiment design paradigm in Section 2.1.

2.1.1 Configuration describes the hyperparameter search-space from which the hyperparameters are sampled. Two custom Python annotations are introduced, Stateless and Derived, to define attributes to which the experiment state is agnostic, while unannotated attributes are assumed to be stateful control variables. Stateful attributes require an assignment during the initialization stage unless they are annotated as Optional.

Stateless configuration attributes can be used as a proxy for variables that can take different value assignments between trials or experiments.
For example, the learning rate can be set as anindependent variable and must be annotated as stateless. Additionally, there are variables thattake different values between experiments and trials to which the state is agnostic, for example, arandom seed or a directory path between execution environments canbe annotated as stateless.Derived attributes are un-decided at the start of the experiment and do not require a valueassignment. Instead, the value is determined by internal experiment processes that can dependon other experimental attributes, such as the dataset. However, given the same initial state, theattribute is expected to result in the same value and is therefore deterministic . For example, the3input size used in a model’s architecture that depends on the dataset will be annotated as Derivedduring the experiment design phase.The annotations address common requirements of ML experiments, where a configurationmay have to describe a search-space that encompasses multiple trials, as opposed to taking on aspecific value assignment at initialization. Additionally, an ML experiment can have attributes thatare difficult to model at initialization but can be inferred during execution. For a stateful designparadigm, the configuration should be unambiguous at the initialization state, i.e. Figure 2.2.1.2 Implementation. The implementation describes the methodology of the hypothesis. Invariance ofthe implementation w.r.t. the method evaluated produces a single code artifact that encapsulates allmethods i.e. a single code base for using and not using residual connections. The implementationcomputes one or more evaluation metrics. Lastly, the implementation should have a deterministicvalue assignment to the variables we defined as Derived .Implementation invariance provides a compact representation and is robust to errors. A compactrepresentation provides ease of use that is a consequence of a shared implementation among theablating components where the differences are specified through the configuration and applied byconditional ifstatements. The advantage of this approach is that the performance variance causedby implementation differences is minimized, where even the order of matrix multiplication canhave significant effects on the method performance [46].2.2 Experiment ExecutionExperiment state can be Running orComplete as the aggregate of the state of all experimentaltrials . Each trial can be in three additional states as Pending ,Failed orPruned .Pending trials aredefined by their initial conditions alone, i.e. the sampled hyperparameters. A Running trial extendsthe definition to include a checkpoint .Complete trials extends the definition to include one or moremetrics , such as the validation loss. Pruned andFailed trials are a result of irrecoverable errorsduring initialization or execution. A fault-tolerant strategy reschedules trials with recoverableerrors as Pending and attempts to resume from the checkpoint . A long-running experiment can beinterrupted (i.e. server maintenance) while errored trials do not interfere with the results (i.e. failedtrials due to recoverable errors).Checkpoint describes the optimization state of a trial and contains sufficient information toresume execution. ABLATOR store the model weights, optimizer, scheduler, and training meta-datasuch as current training iteration using a compact representation. The checkpoint mechanism inABLATOR can be extended to support custom use cases, i.e. RL. 
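To make the contents of such a checkpoint concrete, the following is a minimal sketch in plain PyTorch of what storing and restoring a trial could look like; the helper names and the file layout are illustrative assumptions, not ABLATOR's actual checkpointing API.

import torch

def save_checkpoint(path, model, optimizer, scheduler, iteration):
    # Bundle model weights, optimizer and scheduler state, and training
    # meta-data (the current iteration) into one compact file.
    torch.save({
        "model_state": model.state_dict(),
        "optimizer_state": optimizer.state_dict(),
        "scheduler_state": scheduler.state_dict() if scheduler is not None else None,
        "iteration": iteration,
    }, path)

def load_checkpoint(path, model, optimizer, scheduler=None):
    # Restore the trial so that execution can resume from the stored
    # iteration, e.g. after a recoverable error or a node restart.
    state = torch.load(path, map_location="cpu")
    model.load_state_dict(state["model_state"])
    optimizer.load_state_dict(state["optimizer_state"])
    if scheduler is not None and state["scheduler_state"] is not None:
        scheduler.load_state_dict(state["scheduler_state"])
    return state["iteration"]

A custom use case (e.g. RL) would extend the saved dictionary with additional state, which is the kind of extension point the checkpoint mechanism is meant to support.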
Lastly, maintaining the state of theexperiment requires keeping track of the checkpoints and results. Multiple checkpoints are storedlocally on each node and can be synchronized with cloud storage. The experiment is agnostic tothe execution environment; experiment persistence .2.3 Actionable AnalysisAnalysis that is actionable , is a result of the automation to provide sufficient artifacts to supportdecision making. The artifacts should help facilitate a quick and informed decision on the likelihoodof the hypothesis. The experiment state is used to infer the hypothesis, i.e. ‘what are we ablating?’,and conclusiveness of the analysis i.e. ‘is the trial failed?’. The analyses ABLATOR provides infer thesearch-space, such as control and independent variables from the configuration and the variabletype to produce the corresponding artifacts. The artifacts produced address common problems inevaluating ML methods (Section 3.2). For each attribute, the goal is to encapsulate the best, average,variance and distribution of the performance metric under a single figure; i.e. Figures 4 and 5.2.4 ABLATORABLATOR is designed in Python and with support for PyTorch models, while the distributed executionsystem uses Ray Core [ 31]; Figure 1. We describe the features of ABLATOR important in addressing4a stateful experiment paradigm. ABLATOR can be extended or customized specific to the use-casewithout loss of automation where an object-oriented design provide access to function overwriting.The features of ABLATOR provide ease of use where it requires defining an experiment throughimplementation and configuration. Automation is supported by providing an abstraction layer ondistributed execution with fault tolerance, artifact consolidation, and analysis. Our framework isagnostic to the execution environment and can run on a laptop and a cluster of nodes.Configuration use a hierarchical dictionary-like format that is easy to understand and canbe converted to and from yaml files. ABLATOR uses a strict type-checking system with customannotations (Section 2.1.1). A unique signature identifier ("ID") is generated for each experimentthat corresponds to the values of the stateful configuration attributes, while for a trial, the identifieris based on the unique value assignment of all configurable properties. Thus, the configurationsystem allows for a hierarchical representation of trials under a single experiment and facilitateexperiment persistence where multiple experiments are stored in the same directory.Implementation ATrainer class will manage the physical resources of the experiment. Thereare two options according to the use case, ProtoTrainer for prototyping at a local environment,andParallelTrainer for horizontal scaling of a single experiment. ParallelTrainer is unique toABLATOR , where multiple trials are managed and executed in parallel. Prototyping to experimentdeployment requires a single change ProtoTrainer =⇒ParallelTrainer .Artifact Persistence For every resource node, the trials are executed in parallel, and failure in asingle trial does not result in interruption of the experiment. We use the master node to maintainthe experiment state (Section 2.2) and synchronize the artifacts of all nodes with a central database.Cloud compute nodes are often ephemeral, and restarting the experiment requires only for the filesto be synchronized among the centralized storage and all nodes. 
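Experiment persistence relies on the configuration-derived identifiers described above. As a small illustration, the sketch below hashes configuration values into a short deterministic signature; the hashing scheme and the attribute names are assumptions for illustration only and do not reproduce ABLATOR's actual ID generation.

import hashlib
import json

def signature(values: dict) -> str:
    # Deterministic short identifier over a dictionary of configuration values.
    payload = json.dumps(values, sort_keys=True, default=str)
    return hashlib.sha256(payload.encode()).hexdigest()[:10]

# Stateful attributes define the experiment; stateless ones (e.g. the
# experiment directory or a random seed) are excluded, so the identifier
# stays the same when the experiment moves to a different node.
stateful = {"model.residual": True, "model.mask_type": "random", "dataset": "adult"}
experiment_id = signature(stateful)

# A trial identifier additionally fixes the sampled hyperparameter values.
trial_id = signature({**stateful, "optimizer.name": "sgd", "train.lr": 0.1})
print(experiment_id, trial_id)

Because both identifiers are functions of configuration values only, artifacts synchronized to central storage can be mapped back to their experiment and trial regardless of which node produced them.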
Furthermore, the files stored inthe central storage are sufficient to perform an analysis or recover from errors.Analysis Artifacts are specific to numerical attributes and categorical attributes. The attributetype is informed by the configuration. Figure are artifacts that summarize the mean, best, anddistribution of a performance metric. For numerical attributes, we use scatter-plot with optional in-terpolation curves while for categorical attributes we use violin-plots. The analysis can be extendedto support custom use cases, such as additional figures or tables, while still being automaticallygenerated from the experiment state; examples are in Section 3.3 and our supplementary.3 Experiments and ResultsWe first present how ABLATOR can be used for horizontal scaling with an ablation study on the‘Tablator’, a Transformer model we designed for this study; Section 3.1. In Section 3.2 we categorizecommon errors during horizontal scaling of ablation experiments and provide our recommendations.In Section 3.3 we provide the results of an ablation experiment on tabular dataset benchmark. Forreasons of brevity, we discuss only the results most relevant to ABLATOR . We attach the code thatwas used for our experiments and analysis, and additional experiments in the supplementary.3.1 RQ-1: How can ABLATOR improve the horizontal scaling of thousand experimental trials?ABLATOR requires the configuration and implementation. We extend the implementation of FT-Transformers (FT-T)1[17] with minimal changes to the original code. We implement a model wecall ‘Tablator’ and evaluate all the design components of FT-T as well as the effect of ResidualConnections [ 21] and Attention Masks inspired by BigBird [ 45]. We evaluate ‘Full’, ‘Mixed’, ‘Global’,and ‘Random’ attention mechanisms and explain their implementation in the supplementary.We perform an ablation on 14 model hyperparameters and components in total, and evaluatethe effect model-capacity, dropout hyper-parameters , prenormalization, weight initialization,and activation function have on the model performance. Additionally, we evaluate 7 dataset1https://github.com/Yura52/tabular-dl-revisiting-models5preprocessing techniques and training configurations, such as feature encoding methods, missingvalue imputation, feature normalization, training time, optimization.The differences between ‘Tablator’ and FT-T are on an additional module for Attention masksthat requires 9 additional lines of code as well as 2 lines of code insertions for residual connections.The majority of the development effort was directed towards making the original dataset performantand converting it to a PyTorch Dataset as opposed to a Python dataclass . We define the tunableconfigurable hyperparameters as shown in Figure 2.We first verified our implementation with a ProtoTrainer in this section and then we scaleour experiment with a single code change using a ParallelTrainer to thousands of trials for ourresults in Section 3.3. For this experiment, it took significantly more time to write the currentsection of this paper than it took to write the code and start the execution of the experiments.3.2 RQ-2: What are common sources of errors during horizontal scaling of experiments?We identify 3 categories of errors Analysis †, Execution‡and Implemention∗errors that are basedon empirical observations and use previous analysis [ 10,8,9,27,36,1,46,12] to support ourconclusions. 
In this section, we provide examples of each and attach additional analysis in oursupplementary.Figure 3: We evaluate how Budget Allocation ‡can influence the analysis of an ablation study.We vary the number of trials we use for analysis(‘Ntrials’). We compare estimating the perfor-mance of a method to a dataset using the mean(left) (i.e. ANOVA) or the best ( right ) trial (i.e.proof-by-existence). Evaluating the performanceof a component by its mean performance wouldrequire fewer trials for easier dataset (‘Covtype’)when compared to using the best trial. Whilefor more challenging dataset (‘Aloi’) evaluatingby the best trial would be more efficient, as theperformance converges at around 20 trials (rightfigure) compared to >50 for the mean (left figure).We conclude that the ablation budget should betaken into account and relevant to the type ofanalysis.Sampling Strategy †can be incompatible withthe method used to evaluate the performance ofa component and lead to misleading analysis [ 41].For example, performing HPO and comparing themean performance of the sampled trials can biasthe result towards a single component variant. Weperform two identical experiments using Tablatorwith an identical budget for CovType (‘CO’) dataset[7]. When random sampling between 5 optimiz-ers AdaB [ 47], Adam[ 24], AdamW [ 29], RAdam[ 28],SGD[ 39] every optimization algorithm was sampledwith an even probability P(O) ≈ 0.2. Contrary,when performing HPO with Tree-structured ParzenEstimator (TPE) [ 3], SGD was oversampled withP(SGD)=0.76as it was found to perform relativelybetter compared to other methods. Other optimiza-tion methods were undersampled by TPE and theirestimated performance is lower when compared tothe empirical mean performance of the same methodcalculated via Random Sampling. When TPE wasused, all optimizers appeared to underperform onaverage by 4.6% and 3.8% when evaluating the bestand mean trial performance. We conclude that statis-tical tests can be influenced by the bias of the HPOmethod used to sample configurations and their per-formance might not be fully explored.Survival Bias†can be caused by nonrandomexecution errors. We identify the trials for whichthere were memory errors. We perform feature im-portance analysis and use a surrogate random for-est model [ 34] to predict whether a trial will resultin a memory error. We find that the configurationattributes related to the dataset and the hidden di-6Dataset CA↓AD↑HE↑ JA↑ HI↑AL↑EP↑YE↓CO↑ YA↓ MI↓FT-T 0.459 0.859 0.391 0.732 0.729 0.960 0.898 8.855 0.970 0.756 0.746Tablator 0.535 0.856 0.368 0.718 0.723 0.921 0.896 8.778 0.930 0.780 0.749ΔImp.∗ -0.076 0.003 0.023 0.014 0.006 0.039 0.002 0.077 0.04 -0.024 -0.003Table 1: We evaluate the difference between the best performing trials as reported by FT-Transformer(‘FT-T’)[ 17] and as found by our ablation experiments in Section 2.1. FT-T is in the subspace ofconfigurations of Tablator where a greedy HPO strategy is used as opposed to random sampling forTablator. As such, we expect Tablator to perform similarly but notbetter. We use the benchmark asa way to evaluate Implementation Errors ∗from Section 3.2. We conclude that our implementationcontains no errors, as the relative difference ( ΔImp.∗) is within the expected margin of error betweenHPO and random sampling.mension were the most important. A larger dataset has more features, which leads to a modelwith larger hidden dimension. The attributes related to the hidden dimension scored 23% higherthan the average feature importance. 
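The surrogate-model analysis described above can be reproduced along the following lines with scikit-learn [34]; the snippet is a sketch on synthetic data (the column names and the toy failure model are assumptions, not the actual Tablator trial records), included only to make the procedure concrete.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for the trial table: configuration attributes per trial plus a
# flag marking whether the trial ended in an out-of-memory error.
rng = np.random.default_rng(0)
n = 500
trials = pd.DataFrame({
    "hidden_dim": rng.choice([64, 128, 256, 512], size=n),
    "n_layers": rng.integers(1, 6, size=n),
    "dataset_n_features": rng.choice([54, 128, 2000], size=n),
})
# Synthetic labels: larger models and datasets fail more often (illustration only).
p_fail = (trials["hidden_dim"] / 512) * (trials["dataset_n_features"] / 2000)
trials["oom_error"] = rng.random(n) < p_fail

# Fit a surrogate model predicting OOM failures from the configuration and
# inspect which attributes drive the prediction.
surrogate = RandomForestClassifier(n_estimators=200, random_state=0)
surrogate.fit(trials.drop(columns="oom_error"), trials["oom_error"])
for name, score in zip(surrogate.feature_names_in_, surrogate.feature_importances_):
    print(f"{name}: {score:.3f}")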
We conclude that smaller models and dataset will have aSurvival Bias from the fewer out-of-memory execution errors and that such bias could be mitigatedby better resource allocation. For example, one can group experiments by their memory utilizationas to avoid out-of-memory errors from the largest trial.Figure 4: Evaluation of the effect of a largermodel for a regression data set, where(RMSE)↓is normalized for the relative dif-ficulty of each dataset. Larger model per-forms better but with higher variance wherethe uncertainty on the estimated perfor-mance increases. A larger model might be amore risky choice when deploying a modelthat requires to be iteratively trained.Resource Utilization statistics ‡We observe the re-source utilization statistics, the mean usage of a trial is3,075±3,578 (MiB) while the maximum is 32,303 (MiB).The high variance in memory utilization is a consequenceof a search space that correlates with memory utilization.Allocating resources based on the largest trial might beinfeasible. Using a heuristic for resource utilization mightbe necessary.Budget Allocation ‡we vary the number of experi-mental trials for 10 repeated observations and report thebest and mean performance in Figure 3. An increased bud-get reduces the variance of the mean performance. Wereport less variance in the performance of the best trial forrepeated observations. We conclude that, for ‘Tablator’,fewer trials are required to obtain an estimate of the topperformance while the mean performance would requiremore trials.Implementation Errors ∗Our observations on imple-mentation errors extend previous analysis [ 46,27,36,12]on the impact of ML tooling where the sources of errorsare poor development practices and variance introducedby tooling. Packaging has the benefit of incremental de-velopment and modular design, where in the example of‘Tablator’ two methods ([ 45] and [ 17]) can be combined.Additionally, as the method complexity increases, versioncontrol that includes the configuration, and analysis that corresponds to the implementation canprevent misinterpretation of the results.3.3 RQ-3: Can ABLATOR be used to perform a large-scale ablation study on Tabular Dataset?We use ‘Tablator’ presented in Section 3.1 to evaluate possible improvements in data processing,the Transformer model architecture, and the effect of training hyperparameters on 2,337 trials,7Figure 5: Example of Automatically generated analysis artifacts from ABLATOR . On the leftare theartifacts for ‘CO’ [ 7] and on the right for ‘AL’ [ 16]. We compare the effect of an Optimizer on theperformance to a dataset. In agreement with [ 44], there is no single model that generalizes across alldataset; where for example Adam [ 24] under-performs for ‘AL’ but not for ‘CO’. We conclude thatseperate ablation studies will be required for different dataset.where the current largest ablation on tabular dataset is 2,000 trials [ 48]. Our results are summarizedin Figures 4 and 5. On Table 1 we report the Accuracy, where higher is better ↑and root square-mean-error (‘RMSE’) where lower is better ↓on 11 dataset; [ 32,25,18,18,2,16,17,4,7,11,38]identical to the benchmark of FT-T [ 17]. We find Tablator performs similarly in all datasets. 
Thegoal of the benchmark comparison is to verify our implementation, while the goal of our studyis to evaluate general methods that work best among dataset and not a benchmark improvement.Similarly to FT-T [ 17], we conclude that the simplest methods work best in most general cases, i.e.SGD [ 39] with momentum has the best mean performance on 9 of 11 datasets. For more complexmethods, there is a large variance on the performance of the method between datasets.For example, we find that RAdam [ 28] ranks on average 2.71 for classification dataset but 3.75for regression dataset when evaluated by the mean performance. Additionally, more complexmethods may result in the best performing trial but perform worse on average, where RAdam rankson average 2.25 when evaluated on the best-performing trial for regression dataset (compared to3.75). Our results indicate that using a complex method may require a large tuning budget to returngood results. Additionally, we conclude that larger models only perform moderately better Figure 4.The high-performance variance between different components on different datasets leads us toconclude that evaluations should be done with multiple datasets. Additionally, we find that tuningwould be required that is specific to the dataset and the training configuration. Simple designchoices, such as SGD and moderate model capacity, can provide a good starting point, while morecomplex training configurations can provide trade-offs on performance and uncertainty that canbe specific to the use case.From the median and mean performance observed in our results, we did not find that anyof the preprocessing methods to have a consistent, significant effect on the model performance.ABLATOR can help provide actionable results specific to the dataset. We conclude that several ablationexperiments are required to evaluate a method and ABLATOR is the only tool currently available tofacilitate rapid evaluation.4 DiscussionIn our work we present ABLATOR an AutoML framework for ablation experiments. Beyond ourframework, there are several issues w.r.t. automated decision making as there is no universal8statistical test or threshold to accept or reject a hypothesis. Analysis requires domain expertiserelevant to the evaluation setting. Specific to ML research is the lack of methods for evaluation of ahypothesis where the metric can be both non-normally distributed and heteroskedastic i.e. Figure 5.Broader Impact Statement Performing large-scale ablation experiments may require a largenumber of computational resources that can negatively impact the environment through CO2emissions. However, the automation provided by ABLATOR can result in a more effective use ofcomputational resources and reduce CO2 emissions. ABLATOR can help improve research practiceswithout a negative impact on society when used in the context in which it is presented.5 Related WorksWe identify four categories of work that are most similar to ours. Work that focuses on errorsintroduced by tools and incorrect analysis, on horizontal scaling of experiments, works that aid inablation studies, and tools for automated HPO.Previous work [ 10,8,9,27,36,1,46,12] identify the source of erroneous analysis as poorexperiment design practices resulting from improper use of statistical evaluation methods, HPObudget, HPO strategies, and tooling and provide recommendations. We extend their work andinvestigate errors during horizontal scaling of experiments that lead to erroneous analysis. 
Weidentify errors from the sampling strategy, non-random execution errors, and implementationerrors. We provide general recommendations in Section 3.2 and address the errors with ABLATOR .Several tools are proposed [ 13,15,22,43,26] that support distributed experiment execution .However, they require manual effort in integrating with other libraries for resource allocation,scheduling of experiments, resuming faulty trials, result aggregation, configuration sampling, andanalysis. Contrary, ABLATOR combine all of the above in an automated fashion, where only theimplementation and configuration of the method are used to produce the analysis artifacts.Ablation framework introduce methods and tools specific to constructing ablation analysisartifacts. Such methods can have limited use cases [ 19,5,37] or lack automation [ 42]. In contrast,ABLATOR provides analysis artifacts that provide a holistic view of a method’s performance that canbe extended to support automation and specific use-cases addressed by the works above.AutoML methods [ 14,48,6] are designed for HPO and can be extended to ablation experimentsthat provide support for automated analysis. Unlike ABLATOR , such tools are designed for simple usecases, such as statistical models, and require additional effort to scale the experiments horizontally.Such tools and similar, can be used as the implementation provided to ABLATOR and as suchare orthogonal to our work. AutoAblation [ 40] extends Maggy [ 30] to Deep Learning models.However, allocating and managing GPU resources for each trial requires manual effort. WhileAutoAblation does not provide experiment persistence and as such is not fault-tolerant. Additionally,the declarative design paradigm has limited use cases, as opposed to the object-oriented design ofABLATOR .As such, ABLATOR improves automation by managing GPU resources, storing of experimentalartifacts, restarting erroneous trials, removing boiler-plate code where only the method implemen-tation with the configuration is required to provide automated analysis.6 ConclusionIn this work, we identify several sources of error common in horizontal scaling of multiple experi-mental trials. We provide general recommendations and address errors with a stateful experimentdesign paradigm. ABLATOR implement the paradigm to automate the scaling of ablation experimentsacross multiple resources and produce analysis artifacts in an automated fashion and for rapid iter-ative prototyping. We evaluate ABLATOR with a Transformer model for Tabular dataset, ‘Tablator’,where we study the effect of several architectural components and hyperparameters on the largestablation study for tabular dataset to-date. ABLATOR is an effect tool to conduct large-scale ablationstudies with ease and lead to actionable insights that are particular to the experimental setting.9References[1]Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Belle-mare. Deep reinforcement learning at the edge of the statistical precipice. Advances in neuralinformation processing systems , 34:29304–29320, 2021.[2]Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature communications , 5(1):4308, 2014.[3]James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. Advances in neural information processing systems , 24, 2011.[4]Thierry Bertin-Mahieux, Daniel PW Ellis, Brian Whitman, and Paul Lamere. The million songdataset. 
2011.[5]André Biedenkapp, Marius Lindauer, Katharina Eggensperger, Frank Hutter, Chris Fawcett,and Holger Hoos. Efficient parameter importance analysis via ablation with surrogates. InProceedings of the AAAI Conference on Artificial Intelligence , volume 31, 2017.[6]André Biedenkapp, Joshua Marben, Marius Lindauer, and Frank Hutter. Cave: Configurationassessment, visualization and evaluation. In Roberto Battiti, Mauro Brunato, Ilias Kotsireas,and Panos M. Pardalos, editors, Learning and Intelligent Optimization , pages 115–130, Cham,2019. Springer International Publishing.[7]Jock A Blackard and Denis J Dean. Comparative accuracies of artificial neural networks anddiscriminant analysis in predicting forest cover types from cartographic variables. Computersand electronics in agriculture , 24(3):131–151, 1999.[8]Xavier Bouthillier, Pierre Delaunay, Mirko Bronzi, Assya Trofimov, Brennan Nichyporuk,Justin Szeto, Nazanin Mohammadi Sepahvand, Edward Raff, Kanika Madan, Vikram Voleti,et al. Accounting for variance in machine learning benchmarks. Proceedings of MachineLearning and Systems , 3:747–769, 2021.[9]Xavier Bouthillier, César Laurent, and Pascal Vincent. Unreproducible research is reproducible.InInternational Conference on Machine Learning , pages 725–734. PMLR, 2019.[10] Xavier Bouthillier and Gaël Varoquaux. Survey of machine-learning experimental methods atNeurIPS2019 and ICLR2020 . PhD thesis, Inria Saclay Ile de France, 2020.[11] Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Proceedings ofthe learning to rank challenge , pages 1–24. PMLR, 2011.[12] Katharina Eggensperger, Marius Lindauer, and Frank Hutter. Pitfalls and best practices inalgorithm configuration. Journal of Artificial Intelligence Research , 64:861–893, 2019.[13] William Falcon et al. Pytorch lightning. GitHub repository , 3, 2019.[14] Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, and Frank Hutter.Auto-sklearn 2.0: The next generation. CoRR , abs/2007.04074, 2020.[15] V. Fomin, J. Anmol, S. Desroziers, J. Kriss, and A. Tejani. High-level library to help withtraining neural networks in pytorch. https://github.com/pytorch/ignite , 2020.[16] Jan-Mark Geusebroek, Gertjan J Burghouts, and Arnold WM Smeulders. The amsterdamlibrary of object images. International Journal of Computer Vision , 61:103–112, 2005.10[17] Yury Gorishniy, Ivan Rubachev, Valentin Khrulkov, and Artem Babenko. Revisiting deeplearning models for tabular data. CoRR , abs/2106.11959, 2021.[18] Isabelle Guyon, Lisheng Sun-Hosoya, Marc Boullé, Hugo Jair Escalante, Sergio Escalera,Zhengying Liu, Damir Jajetic, Bisakha Ray, Mehreen Saeed, Michèle Sebag, et al. Analysis ofthe automl challenge series. Automated Machine Learning , 177, 2019.[19] Isha Hameed, Samuel Sharpe, Daniel Barcklow, Justin Au-Yeung, Sahil Verma, Jocelyn Huang,Brian Barr, and C Bayan Bruss. Based-xai: Breaking ablation studies down for explainableartificial intelligence. arXiv preprint arXiv:2207.05566 , 2022.[20] Eduardo Hariton and Joseph J Locascio. Randomised controlled trials—the gold standard for ef-fectiveness research. BJOG: an international journal of obstetrics and gynaecology , 125(13):1716,2018.[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for imagerecognition. CoRR , abs/1512.03385, 2015.[22] Jeremy Howard and Sylvain Gugger. fastai: A layered API for deep learning. CoRR ,abs/2002.04688, 2020.[23] Kosuke Imai, Dustin Tingley, and Teppei Yamamoto. 
Experimental Designs for IdentifyingCausal Mechanisms. Journal of the Royal Statistical Society Series A: Statistics in Society ,176(1):5–51, 11 2012.[24] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.[25] Ron Kohavi et al. Scaling up the accuracy of naive-bayes classifiers: A decision-tree hybrid.InKdd, volume 96, pages 202–207, 1996.[26] Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and IonStoica. Tune: A research platform for distributed model selection and training. arXiv preprintarXiv:1807.05118 , 2018.[27] Chao Liu, Cuiyun Gao, Xin Xia, David Lo, John Grundy, and Xiaohu Yang. On the repro-ducibility and replicability of deep learning in software engineering. ACM Transactions onSoftware Engineering and Methodology (TOSEM) , 31(1):1–46, 2021.[28] Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, andJiawei Han. On the variance of the adaptive learning rate and beyond. arXiv preprintarXiv:1908.03265 , 2019.[29] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprintarXiv:1711.05101 , 2017.[30] Moritz Meister, Sina Sheikholeslami, Amir H Payberah, Vladimir Vlassov, and Jim Dowling.Maggy: Scalable asynchronous parallel hyperparameter search. In Proceedings of the 1stWorkshop on Distributed Machine Learning , pages 28–33, 2020.[31] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang,William Paul, Michael I. Jordan, and Ion Stoica. Ray: A distributed framework for emergingAI applications. CoRR , abs/1712.05889, 2017.11[32] R Kelley Pace and Ronald Barry. Sparse spatial autoregressions. Statistics & Probability Letters ,33(3):291–297, 1997.[33] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan,Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, AndreasKöpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy,Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style,High-Performance Deep Learning Library . Curran Associates Inc., Red Hook, NY, USA, 2019.[34] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pret-tenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot,and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine LearningResearch , 12:2825–2830, 2011.[35] David Picard. Torch.manual_seed(3407) is all you need: On the influence of random seeds indeep learning architectures for computer vision, 2021.[36] Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer,Florence d’Alché Buc, Emily Fox, and Hugo Larochelle. Improving reproducibility in machinelearning research (a report from the neurips 2019 reproducibility program). The Journal ofMachine Learning Research , 22(1):7459–7478, 2021.[37] Philipp Probst, Anne-Laure Boulesteix, and Bernd Bischl. Tunability: Importance of hy-perparameters of machine learning algorithms. The Journal of Machine Learning Research ,20(1):1934–1965, 2019.[38] Tao Qin and Tie-Yan Liu. Introducing letor 4.0 datasets. arXiv preprint arXiv:1306.2597 , 2013.[39] Herbert Robbins and Sutton Monro. A stochastic approximation method. The annals ofmathematical statistics , pages 400–407, 1951.[40] Sina Sheikholeslami, Moritz Meister, Tianze Wang, Amir H Payberah, Vladimir Vlassov,and Jim Dowling. 
Autoablation: Automated parallel ablation studies for deep learning. InProceedings of the 1st Workshop on Machine Learning and Systems , pages 55–61, 2021.[41] Ryan Turner, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, andIsabelle Guyon. Bayesian optimization is superior to random search for machine learninghyperparameter tuning: Analysis of the black-box optimization challenge 2020. In Hugo JairEscalante and Katja Hofmann, editors, Proceedings of the NeurIPS 2020 Competition and Demon-stration Track , volume 133 of Proceedings of Machine Learning Research , pages 3–26. PMLR,06–12 Dec 2021.[42] Jan N Van Rijn and Frank Hutter. Hyperparameter importance across datasets. In Proceedingsof the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining ,pages 2367–2376, 2018.[43] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, AnthonyMoi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer,Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, SylvainGugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methodsin Natural Language Processing: System Demonstrations , pages 38–45, Online, October 2020.Association for Computational Linguistics.12[44] David H Wolpert and William G Macready. No free lunch theorems for optimization. IEEEtransactions on evolutionary computation , 1(1):67–82, 1997.[45] Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santi-ago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. Big bird: Transformersfor longer sequences. Advances in neural information processing systems , 33:17283–17297,2020.[46] Donglin Zhuang, Xingyao Zhang, Shuaiwen Song, and Sara Hooker. Randomness in neuralnetwork training: Characterizing the impact of tooling. Proceedings of Machine Learning andSystems , 4:316–336, 2022.[47] Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar C Tatikonda, Nicha Dvornek, XenophonPapademetris, and James Duncan. Adabelief optimizer: Adapting stepsizes by the belief inobserved gradients. Advances in neural information processing systems , 33:18795–18806, 2020.[48] Lucas Zimmer, Marius Lindauer, and Frank Hutter. Auto-pytorch tabular: Multi-fidelitymetalearning for efficient and robust autodl. arXiv preprint arXiv:2006.13799 , 2020.137 Submission Checklist1. For all authors. . .(a)Do the main claims made in the abstract and introduction accurately reflect the paper’scontributions and scope? [Yes] Our results can be found in sections 3.1 to 3.3.(b) Did you describe the limitations of your work? [Yes] See section 4.(c)Did you discuss any potential negative societal impacts of your work? [Yes] See sectionsec-tion 4.(d)Have you read the ethics author’s and review guidelines and ensured that your paperconforms to them? https://automl.cc/ethics-accessibility/ [Yes] They are appliedthroughout the paper.2. If you are including theoretical results. . .(a)Did you state the full set of assumptions of all theoretical results? [N/A] There are notheoretical results in our work(b)Did you include complete proofs of all theoretical results? [N/A] There are no theoreticalresults in our work3. If you ran experiments. . 
.(a)Did you include the code, data, and instructions needed to reproduce the main experimentalresults, including all requirements (e.g., requirements.txt with explicit version), an instruc-tiveREADME with installation, and execution commands (either in the supplemental materialor as a url)? [Yes] We have included the code that was used to run all the experiments,produce the tables and figures as a zip file.(b)Did you include the raw results of running the given instructions on the given code anddata? [Yes] We include the raw results that were used to obtain our analysis.(c)Did you include scripts and commands that can be used to generate the figures and tablesin your paper based on the raw results of the code, data, and instructions given? [Yes] Wehave included them in the supplementary.(d)Did you ensure sufficient code quality such that your code can be safely executed and thecode is properly documented? [Yes] We have followed standard development practices.(e)Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixedhyper-parameter settings, and how they were chosen)? [Yes] We have included them in thesupplementary.(f)Did you ensure that you compared different methods (including your own) exactly onthe same benchmarks, including the same datasets, search space, code for training andhyperparameters for that code? [Yes] We have included them in the supplementary.(g)Did you run ablation studies to assess the impact of different components of your approach?[Yes] See section 3.3(h)Did you use the same evaluation protocol for the methods being compared? [Yes] We useidentical evaluation protocol when comparing between methods for all our experiments insections 3.1 to 3.3(i)Did you compare performance over time? [N/A] Performance over time is not applicablefor our work.14(j)Did you perform multiple runs of your experiments and report random seeds? [Yes] Therandom seeds used are in the code in our supplementary.(k)Did you report error bars (e.g., with respect to the random seed after running experimentsmultiple times)? [Yes] results are in sections 3.2 and 3.3(l)Did you use tabular or surrogate benchmarks for in-depth evaluations? [Yes] We use thesame benchmark as [17](m) Did you include the total amount of compute and the type of resources used (e.g., type ofgpus, internal cluster, or cloud provider)? [Yes] We have included it in the supplementary.(n)Did you report how you tuned hyperparameters, and what time and resources this required(if they were not automatically tuned by your AutoML method, e.g. in a nasapproach; andalso hyperparameters of your own method)? [Yes] They are described in section 3.1 andthe supplementary.4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .(a)If your work uses existing assets, did you cite the creators? [Yes] table 1 and supplementary.(b)Did you mention the license of the assets? [Yes] We provide details of all assets in thesupplementary.(c)Did you include any new assets either in the supplemental material or as a url? [N/A] Wedo not use any new assets.(d)Did you discuss whether and how consent was obtained from people whose data you’reusing/curating? [N/A](e)Did you discuss whether the data you are using/curating contains personally identifiableinformation or offensive content? [N/A]5. If you used crowdsourcing or conducted research with human subjects. . .(a)Did you include the full text of instructions given to participants and screenshots, if appli-cable? 
[N/A](b)Did you describe any potential participant risks, with links to Institutional Review Board(irb) approvals, if applicable? [N/A](c)Did you include the estimated hourly wage paid to participants and the total amount spenton participant compensation? [N/A]15 |
uN70Dum6pC2 | MA-BBOB: Many-Affine Combinations of BBOB Functionsfor Evaluating AutoML Approaches in Noiseless NumericalBlack-Box Optimization ContextsDiederick Vermetten1Furong Ye1Thomas Bäck1Carola Doerr21Leiden Institute for Advanced Computer Science (LIACS), Leiden University, The Netherlands2Sorbonne Université, CNRS, LIP6, Paris, FranceAbstract Extending a recent suggestion to generate new instances for numerical black-box optimiza-tion benchmarking by interpolating pairs of the well-established BBOB functions fromthe COmparing COntinuous Optimizers (COCO) platform, we propose in this work a fur-ther generalization that allows multiple affine combinations of the original instances andarbitrarily chosen locations of the global optima.We demonstrate that the MA-BBOB generator can help fill the instance space, while overallpatterns in algorithm performance are preserved. By combining the landscape features ofthe problems with the performance data, we pose the question of whether these features areas useful for algorithm selection as previous studies have implied.MA-BBOB is built on the publicly available IOHprofiler platform, which facilitates standard-ized experimentation routines, provides access to the interactive IOHanalyzer module forperformance analysis and visualization, and enables comparisons with the rich and growingdata collection available for the (MA-)BBOB functions.1 IntroductionDespite a long tradition of developing automated Machine Learning (AutoML) approaches fornumerical black-box optimization contexts [ 3,12,28], empirical evaluations are heavily centeredaround very few benchmark collections. One of the most popular collections is the BBOB suite [ 10]of the COmparing COntinuous Optimizers (COCO) platform [ 9]. The BBOB suite was originallydesigned to help researchers analyze the behavior of black-numerical black-box algorithms indifferent optimization contexts. Over time, however, BBOB has been used for many other purposes,including evaluating AutoML methods, even though the problems were never designed to besuitable for this task.With the increasing popularity of the BBOB benchmarks, wide availability of shared perfor-mance data enabled the application of, e.g., algorithm selection methods [ 12]. To achieve thesealgorithm selectors, a representation of the problem space is required based on which the perfor-mance of different algorithms can be predicted. In the case of BBOB, the most commonly usedrepresentation makes use of Exploratory Landscape Analysis (ELA), which has been shown to beable to accurately distinguish between BBOB problems [20, 27].A key problem of algorithm selection based on BBOB problems lies in the ability to test howwell the results generalize. One approach is to use a leave-one-function-out method [ 23], wherethe selector is trained on 23 functions and tested on the remaining one. This generally leads topoor performance, as each problem has been specifically designed to have different global functionproperties. As such, another common method is to leave out a set of problem instances for testing.This way, the selector is trained on all types of problems. However, this has a high potential tooverfit the particular biases of the BBOB problems, an often overlooked risk.AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track ©2023 the authors, released under CC BY 4.0To remedy these potential issues, the ability to construct new functions which fill the spacesbetween existing BBOB functions could be critical. 
If the instance space can be filled with newproblems, these could be used to not only test the generalizability of algorithm selection methods,but also more generally to gain insights into e.g., the relation between the ELA representation of aproblem and the behavior of optimization algorithms.Filling the instance space is a topic of rising interest within the optimization community [ 1,19,22,34]. While some work has been conducted to create problem instances that reflect theproperties of real-world applications or obtain similar characteristics of the existing problems, otherwork is trying to generate diverse instances. For example, symbolic regression and simulationof Gaussian processes have been applied to generate benchmarks reflecting real-world problembehaviours in [ 35] and [ 17,29]. On the other hand, research in generating diverse instances ofcombinatorial optimization has been conducted in [ 4,5,16,19]. Regarding black-box numericaloptimization, approaches based on Genetic Programming (GP) have succeeded in generating novelproblem instances with controllable characteristics defined by their ELA features in [ 21], in whichthe authors used ELA features of BBOB instances as a baseline to regenerate similar instances anddesign diverse instances. However, to obtain problems with desired characteristics, the GP needs tobe executed for each dimension. A recent paper proposed a different perspective on generating newproblem instances for numerical optimization. In their paper, Dietrich and Mersmann propose tocreate new problems through weighted combinations of BBOB problems. By creating these affinecombinations of existing problems, it seems that the ELA features can transition smoothly betweenthe two component functions. Moreover, affine combinations of two BBOB problems were appliedto analyze the behavior of optimization algorithms in [ 32]. The paper’s results demonstrated thatthe algorithms’ performance alters along the weights of two combined problems.In this paper, we extend upon the modified version of the affine BBOB combinations [ 32] bygeneralizing to combinations between any number of BBOB functions. Through doing this, weaddress the concerns regarding the scaling of the component functions and the impact of thelocation of the global optimum. We also propose a modified mechanism to sample weights to avoidpotential biases resulting from including too many problems.From the proposed many-affine problem generation method, we sample 1 000 instances, forwhich we perform both an ELA based analysis as well as an analysis of the performance of a setof algorithms. By combining these results in a simple algorithm selection model, we raise thequestion of whether or not the ELA features are sufficiently representative to create a generalizablealgorithm selection model.In summary, our key contributions and findings are:1.We introduce MA-BBOB, a generator of arbitrary affine combinations of the 24 BBOB functions.We explain the rationales behind the various design choices, which include the location of theoptimum, the scaling used for interpolating the different functions and the way of sampling infunctions from this space. 
The resulting generator is build on the IOHprofiler platform, whichenables equivalent benchmarking setups to the original BBOB problems.2.We analyze 1 000 randomly sampled instances in 2dand in 5dvia Exploratory Landscape Analysis(ELA [ 20]) and show that the combined MA-BBOB functions cover the space between the original‘pure’ BBOB functions quite well, with the exception of some of problems like the linear slopeand ellipsoid problem, which are essentially only available in the ‘pure’ BBOB functions, butdisappear in the MA-BBOB instances with non-trivial weights.3.We compare the performance of five black-box optimization algorithms on the original BBOB andthe 1 000 randomly sampled MA-BBOB instances and show that the rank distribution changesslightly in favour of the CMA-ES algorithms and to the disadvantage of RCobyla.24.Finally, we also perform per-instance algorithm performance prediction studies on MA-BBOB.The results confirm that the regression accuracy is better when the training set includes gener-alized BBOB functions. However, we also observe a considerable performance gap between ELAbased regression models and those trained with full knowledge of the weights that are usedto construct the test instances. These results indicate that the current set of ELA features failto capture some instance properties that are crucial for algorithm performance, a shortcomingthat we expect to motivate future research on the design of features for numerical black-boxoptimization.2 BackgroundThe BBOB Problem Suite. The BBOB collection [ 10] is one of the main components of the COCOframework [ 9]. It is heavily used in the black-box optimization community for evaluating derivative-free numerical optimization techniques. On the original BBOB suite of 24 single-objective, noiselessoptimization problems [10], hundreds of different optimization algorithms have been tested [2].One key reason for the popularity of this suite is the ability to create independent instancesof the same problem, which are generated by applying transformations in the domain and theobjective space. These transformations include rotation, scaling of objective value and moving thelocation of the global optimum. They allow researchers to evaluate possible bias in their algorithms,and are hence an important component of algorithm benchmarking.The availability of many instances are also a key enabler for the evaluation of AutoML ap-proaches in black-box optimization contexts. Since not all instances are easily accessible via theoriginal COCO implementation, we have made direct access to the instances available in ourIOHprofiler benchmarking environment [7, 33].Affine Function Combinations. While the availability of numerous instances per each BBOBfunction facilitates AutoML studies, it has been observed that the generalization ability of modelstrained on BBOB and tested on independent problems is disappointing [ 13,15]. This motivated thedesign of new problems to extend the existing BBOB suite. One such approach was proposed in [ 8].It suggests to consider affine combinations of two different problem instances [ 8]. The resultingproblems were analyzed with respect to their fitness landscapes, as seen via exploratory landscapeanalysis (ELA [ 20]). They have been shown to smoothly connect their component functions in areduced-dimensionality ELA space. 
This seems to imply that we can use these problems to connectany pair of existing problems, which would significantly add to the instance space.In our follow-up study [ 32] we recently proposed a modified version of creating these affinefunction combinations, see Sec. 3.1 for details. We used these functions to compare the performanceof five selected black-box optimization algorithms and showed that the behavior differences arenot as smooth as the differences in ELA space. In several cases, combinations of two functions arebest solved by a different algorithm than the one which solved the component problems.3 The MA-BBOB Benchmark Suite3.1 Scaling of Function ValuesWhen combining multiple functions to create a new benchmark problem, one key factor whichimpacts the landscape is the scaling of the combined functions. Since we are interested in takingaffine combinations of existing functions, a difference in scale might lead one function to dominateall others, leading to limited coverage of the feature space.The original affine BBOB functions proposed in [ 8] make use of a tuning procedure for findinguseable weights. While this allows for selecting suitable problems, it makes it more challengingto just randomly sample a set of new problems. We therefore suggested an alternative way togenerate the affine combinations in [ 32]. This change is two-fold: each component problem fisfirst transformed by subtracting the global optimum value min f. This way, we know that each3−4 −2 0 2 4mean−4−2024−4 −2 0 2 4max−4 −2 0 2 4minmax−4 −2 0 2 4min−4 −2 0 2 4equal−1.2−0.60.00.61.21.82.43.03.6Figure 1: Log-scaled fitness values of an example of a single many-affine function with 5 different waysof scaling. The first 4 are taking the mean, max, (max+min)/2and min of 50 000 randomsamples to create the scale factor, while the ’equal’ option does not make use of this scaling.Function ID 1 2 3 4 5 6 7 8 9 10 11 12Scale Factor 11.0 17.5 12.3 12.6 11.5 15.3 12.1 15.3 15.2 17.4 13.4 20.4Function ID 13 14 15 16 17 18 19 20 21 22 23 24Scale Factor 12.9 10.4 12.3 10.3 9.8 10.6 10.0 14.7 10.7 10.8 9.0 12.1Table 1: Final scale factors used to generate MA-BBOB problems.component functions optimum function value is set to 0. Then, instead of arithmetic weighting, alogarithmic combination is used to limit the impact of scale differences. While this simplifies theprocedure of generating random function combinations, BBOB functions can sometimes differ bymultiple orders of magnitude, which still produces some bias in this procedure.To address this shortcoming in MA-BBOB, we have investigated different scaling procedures. Westill scale the global optima and perform a logarithmic transform, but we now add a normalizationstep. This transforms the log-precision values into an approximation of [0,1], and then mapsthis back to the commonly used BBOB domain [10−8,102]. This is achieved by taking the log-transformed precision (capped at −8), adding 8so the minimum is at 0and dividing by a scalefactor . The aim of this procedure is to make sure that the target precision of 102is similarly easy toachieve on all problems.In order to select appropriate scale factors, we need to determine practical limits of the functionvalue for each BBOB function. We do this by considering a set of 50 000 random samples andaggregating the corresponding function values. We consider the following aggregation methods(based on the log-scaled precision): min,mean ,max,(max+min)/2. Fig. 1 illustrates the differencesbetween these methods, for a 2dproblem. 
Note that because we use log-scaled precision, the differences between instances are rather small, so we opted to perform the sampling for only one instance of each BBOB problem. Based on visual interpretation of the contour plots in Fig. 1, we (somewhat subjectively) select the (max+min)/2 scaling as the most promising method.
To avoid having to constantly repeat this random sampling procedure, we also investigate the way in which the sampled scales, and thus the scale factors, differ across dimensions. The results are shown in Fig. 2. With the exception of the smallest dimensions, the values remain quite stable. As such, we decide to implement them as hard-coded values based on the median of the shown values, rounded to the nearest decimal. The resulting factors are shown in Tab. 1.

Figure 2: Evolution of the log-scaled (max+min)/2 scaling factor, relative to the problem dimension. The values are based on 50 000 samples. Each line corresponds to one of the 24 BBOB functions.

3.2 Instance Creation

A second aspect to consider when combining multiple functions is the placement of the global optimum. In the previous two papers [8, 32] on affine BBOB functions, this was done based on the instance of one of the two component functions. However, the original BBOB instance creation process can be considered somewhat biased, as not all functions make use of the same transformations [10, 18]. As such, if we extend the process of using the optimum of one of the component functions, the optima would be distributed as in Fig. 3. To avoid this issue, we decided to generate the optimum location separately, uniformly at random in the full domain [-5, 5]^d. Fig. 4 shows how a 2d function changes when moving the optimum location.

Figure 3: Location of the optima of the 24 2d BBOB functions. The red lines mark the commonly used box constraints of [-5, 5]^d.

Figure 4: Log-scaled fitness values of an example of a single many-affine function with changed location of the optimum.

3.3 Sampling random functions

As a final factor impacting the types of problems generated, we consider the way in which the weights are sampled. While this can indeed be done uniformly at random (with a normalization afterwards), this might not lead to the most useful set of benchmark problems. When the weights for each function are generated this way, the probability of having a weight of 0 for any component is 0. This means that every function will contribute to some extent to the newly generated problem. As such, it would be almost impossible for this procedure to result in a unimodal problem.
One way to address this bias in function generation is to adapt how many functions are part of the newly created problem. Indeed, combinations of two problems already lead to a vast space of interesting landscapes. We opt for a different approach: we make use of a threshold value which determines which functions contribute to the problem.
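Before describing how the weights themselves are sampled, the pieces introduced in Secs. 3.1 and 3.2 can be combined into a rough evaluation sketch. The components are assumed to have their optimum at the origin, and the final back-transform to [1e-8, 1e2] used here (10**(10*v - 8)) is our reading of the normalization described above, not necessarily the exact IOHprofiler implementation.

```python
import numpy as np

def ma_bbob_value(x, components, f_opts, weights, scale_factors, x_opt):
    # components: callables with optimum at the origin; f_opts: their optimal values
    x = np.asarray(x, dtype=float)
    normalized = []
    for fn, f_opt, sf in zip(components, f_opts, scale_factors):
        precision = fn(x - x_opt) - f_opt                        # shifted so the optimum sits at x_opt
        log_prec = max(np.log10(max(precision, 1e-300)), -8.0)   # log-precision, capped at -8
        normalized.append((log_prec + 8.0) / sf)                 # roughly in [0, 1]
    combined = float(np.dot(weights, normalized))                # affine combination of the components
    return 10 ** (10 * combined - 8)                             # mapped back to approximately [1e-8, 1e2]
```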
The threshold-based procedure for generating weights is as follows: (1) generate initial weights uniformly at random, (2) set the threshold to the minimum of the selected value and the third-highest weight, and (3) subtract this threshold from the weights and set all negative values to 0. The second step ensures that at least two problems always contribute to the new problem. Fig. 5 provides an example of a problem generated with different threshold values. We set the default value to T = 0.85, such that on average 3.6 problems have a non-zero weight.

Figure 5: Log-scaled fitness values of an example of a 'single' many-affine function with 5 different sampling thresholds.

4 Experimental Setup

In the remainder of this paper, we make use of 1 000 functions, with weights sampled according to Sec. 3.3 with T = 0.85. Each problem uses instances uniformly selected between 1 and 100 for each of the component functions, and uniformly sampled locations of the global optimum. We use the same set of weights, instances, and optima locations in both 5 and 2 dimensions.
Comparing this set of generated problems with the pure BBOB functions is a key aspect of this work. To remove biases in terms of scaling, we apply the same scale factors to the BBOB functions. Practically, this means we use all-zero weights with a 1 for the selected function to collect the BBOB data (with the location of the optima set as in the original). We use 5 instances of each BBOB function for our comparisons. We refer to these 'pure' BBOB functions as 'BBOB', while we refer to the MA-BBOB instances as 'affine'.
Reproducibility: The code used during this project, as well as all resulting data, is available at [31]. The repository also contains additional versions of the figures which could not be included here because of the page limit. We are actively working towards a data repository for MA-BBOB performance data which will also allow automated annotation via the OPTION ontology [14], for FAIR data sharing [11].

5 Landscape Analysis

To analyze the landscapes of the created affine problems, we make use of the pflacco package [24] to compute ELA features. We use 5 sets of 1 000 · d points from a scrambled Sobol' sequence. We then evaluate these points and, following the advice of [25], apply min-max normalization to the function values. We finally remove all features which are constant across all problems or contain NaN values, resulting in a total of 44 remaining features. For each of these features, we then take the mean value among the 5 samples.
To gain insight into the differences between the BBOB and affine functions, we reduce the original 44-dimensional feature space to 2d. To achieve this, we make use of Uniform Manifold Approximation and Projection (UMAP). To focus on the parts of the instance space covered by the newly generated problems, we create the mapping based only on the BBOB problems. The result of applying this mapping to all 2d problems is visualized in Fig. 6b.

Figure 6: UMAP reduction of the 24 BBOB functions (5 instances each) and 1 000 affine combinations for 5d (a) and 2d (b). The projection is created based on the BBOB functions only. (a) Points are colored according to the weight used for BBOB function F7. (b) Points are colored according to the function type: BBOB or affine combination.
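For completeness, the threshold-based weight sampling of Sec. 3.3, which was used to generate the 1 000 MA-BBOB instances studied here, can be sketched as follows:

```python
import numpy as np

def sample_weights(n_functions=24, threshold=0.85, rng=None):
    rng = rng or np.random.default_rng()
    w = rng.uniform(0, 1, n_functions)       # (1) initial weights, uniform at random
    t = min(threshold, np.sort(w)[-3])       # (2) cap the threshold at the third-highest weight
    w = np.clip(w - t, 0, None)              # (3) subtract the threshold, set negatives to zero
    return w / w.sum()                       # normalize so the weights sum to one

w = sample_weights(rng=np.random.default_rng(42))
print((w > 0).sum())                         # on average about 3.6 non-zero components at T = 0.85
```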
Figure 7: Distribution of (normalized) ELA feature values on the 5d version of the problems. Each column corresponds to one of the 44 retained features (from the ela_meta, ela_distr, ela_level, nbc, disp, and ic feature sets).

From Fig. 6b, we observe that many of the affine problems are clustered together. While some regions between existing BBOB problems are filled, it seems that the function generation process is not able to find solutions close to every BBOB problem. This might be caused by the fact that, by combining an average of 3.6 functions, it is highly unlikely that we find functions similar to, e.g., a linear slope or a function with low global structure.
In addition to the dimensionality reduction, we can also investigate the distributions of individual ELA features. By comparing the distributions on the BBOB functions with those on the affine problems, we can gain some insight into the most common types of problems generated. In Fig. 7, we show these distributions for the min-max normalized ELA features. From this figure, we can see that for many features the affine problems are much more clustered than the BBOB ones, which are distributed more uniformly over the space of feature values.

6 Algorithm Performance

While the ELA-based analysis gives us some insight into the low-level characteristics of the generated problems, it does not directly give insight into the power of these problems to differentiate between algorithms. As such, we also run a set of 5 different algorithms on each problem instance. The algorithms we consider are: (1) Diagonal CMA-ES from the Nevergrad platform [26] (dCMA), (2) RCobyla from the Nevergrad platform [26] (Cobyla), (3) Differential Evolution from the Nevergrad platform [26] (DE), (4) CMA-ES from the modular CMA-ES package [6] (modCMA), and (5) L-SHADE, implemented using the modular DE package [30] (modDE).
For each of these algorithms, we perform 50 independent runs on each of the 1 000 affine functions as well as on the 5 instances of each of the 24 BBOB problems.

Figure 8: Results of ranking the 5 algorithms on the 5d problems, based on AUC after 10 000 evaluations. (a) Distribution of ranks based on per-function AUC after 10 000 evaluations. (b) UMAP reduction of the BBOB functions (5 instances each) and 1 000 affine combinations; the projection is created based on the BBOB functions only, and points are colored by the algorithm with the largest AUC.
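The 2d embeddings in Fig. 6 and Fig. 8b are created in exactly this asymmetric way: the mapping is fitted on the 'pure' BBOB problems only and then applied to all problems. A minimal sketch with the umap-learn package, assuming the ELA feature table has already been computed with pflacco:

```python
import pandas as pd
import umap  # from the umap-learn package

def embed(features: pd.DataFrame, is_bbob, seed=0):
    # drop features that contain NaNs or are constant across all problems
    clean = features.dropna(axis=1)
    clean = clean.loc[:, clean.nunique() > 1]
    reducer = umap.UMAP(n_components=2, random_state=seed)
    reducer.fit(clean[is_bbob])        # the mapping is created from the BBOB problems only ...
    return reducer.transform(clean)    # ... and then applied to BBOB and affine problems alike
```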
It is important to note that the BBOB functions make use of the same scale factors as used to generate the affine functions, in order to further reduce the impact of scale differences. These experiments are performed on both the 2d and 5d versions of these problems.
To analyze the differences in algorithm performance between the two sets of problems, we consider the normalized area under the curve (AUC) of the empirical cumulative distribution function (ECDF) as the performance metric. For the ECDF, we use a set of 51 logarithmically spaced targets from 10^-8 to 10^2. Based on the AUC values, we then rank the set of 5 algorithms on each problem. The distribution of these ranks is shown in Fig. 8a. We observe that the overall patterns between the BBOB and affine problems are preserved. There are some notable differences, particularly with regard to the performance of Cobyla. While this algorithm often performs poorly on BBOB, for the affine problems it is ranked worst in a majority of cases. This suggests that problems on which this algorithm performs well (mostly unimodal problems) are not as well represented among the MA-BBOB functions.
In addition to this ranking, we can also link the ELA features to the algorithm performance. To explore whether the used features correlate with the problem's difficulty from the algorithm's perspective, we link the dimensionality reduction with the best algorithm from the portfolio. This is visualized for the 5d problems in Fig. 8b.

7 Algorithm Selection

As a final experiment, we now use the generated problems in an algorithm selection context. For each of the 5 algorithms, we train a random forest regression model to predict the AUC on each problem. The input variables for this model are either the ELA features, as is commonly done, or the weights used to generate the functions. By contrasting these approaches, we obtain an intuition for how well the ELA features capture the algorithm-relevant properties of the function.
While we can train our models in a common cross-validation manner, we can also use the same setup to test the generalizability of models trained on the original BBOB problems only. The resulting mean absolute errors (MAE) of these models are plotted in Fig. 9a.
We observe that the ELA representation is often worse than the weights-based one. This suggests that the used ELA features might not be sufficient to achieve generalization of an AS model. This is especially clear for the generalizability scenario, where we would have expected ELA to perform better. This poor performance suggests that the ELA features might not fully capture all instance properties that determine the behavior of the algorithms.

Figure 9: Performance of the random forest models predicting algorithm performance (a) or the best algorithm for each problem (b). (a) Mean absolute error obtained when predicting the AUC of each of the 5 algorithms based on either the ELA features or the used weights. Top: model trained on a mixture of BBOB and affine functions using 10-fold cross-validation. Bottom: model trained on BBOB only and predicting performance on affine problems. Left: 2d problems, right: 5d problems. (b) Cumulative distribution of the loss (AUC) of the random forest models predicting the best algorithm (2d and 5d problems combined), based on either the ELA features or the weights representation of the problems.
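The 'generalization' setting of Sec. 7 can be sketched as follows. Here X_ela, X_weights, auc, and is_bbob are assumed to be precomputed arrays (one row per problem), and the random forest uses scikit-learn defaults rather than any tuned configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def generalization_mae(X, auc, is_bbob, seed=0):
    model = RandomForestRegressor(random_state=seed)
    model.fit(X[is_bbob], auc[is_bbob])        # train on the 'pure' BBOB problems only
    pred = model.predict(X[~is_bbob])          # predict the AUC on the affine problems
    return mean_absolute_error(auc[~is_bbob], pred)

# e.g. compare generalization_mae(X_ela, auc, is_bbob) against
#      generalization_mae(X_weights, auc, is_bbob) for each of the five algorithms
```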
When training a very basic AS model (predicting the best algorithm) in the same manner (training on BBOB and evaluating on the affine problems), we observe performance differences similar to those suggested by Fig. 9a: the weighted F1-score based on ELA is 0.67, while the score based on the weights is 0.70. The corresponding loss in terms of AUC values is plotted in Fig. 9b. This figure confirms the previous observation that the ELA features are not sufficiently representative to accurately describe the problems in a way that is relevant for ranking optimization algorithms.

8 Conclusions and Future Work

The proposed procedure for generating new problems as affine combinations of the 24 BBOB problems can serve as a function generator that helps fill the instance space spanned by the BBOB functions. By applying a scaling step before combining the problems, we make sure that the resulting problems all have an equivalent range of objective values, regardless of the used weights. In addition, the uniform placement of the global optima in the full domain avoids some of the bias of the BBOB problems. By analyzing the ELA features of 1 000 of these many-affine MA-BBOB problems, we observed that they do indeed fill a part of the instance space. There are still some inherent limitations arising from the fact that the building blocks are fixed. For example, it is impossible to generate a problem similar to the linear slope. Similarly, it is highly unlikely that new problems have specific properties such as low global structure. Nevertheless, the overall ranking of optimization algorithms on these problems remains similar to the ranking on the BBOB problems, suggesting that the algorithmic challenges might be similar.
The results presented above had as their primary focus a first analysis of the generated MA-BBOB instances and how they compare to the BBOB functions. For this purpose, we have considered randomly sampled instances. The selection of 'representative' instance collections still remains to be done. Another important step for future work is to test the generalization ability of AutoML systems that are trained on MA-BBOB functions and tested on numerical black-box optimization problems that do not originate from the BBOB family. In this context, our basic Random-Forest-based algorithm selector indicates that the ELA features might not be as suitable for this generalization task as expected, motivating further research on feature engineering for black-box optimization.

9 Broader Impact Statement

After careful reflection, the authors have determined that this work presents no notable negative impacts to society or the environment.

10 Submission Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [N/A]
(d) Have you read the ethics author's and review guidelines and ensured that your paper conforms to them?
https://automl.cc/ethics-accessibility/ [Yes]

2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]

3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results, including all requirements (e.g., requirements.txt with explicit versions), an instructive README with installation, and execution commands (either in the supplemental material or as a URL)? [Yes]
(b) Did you include the raw results of running the given instructions on the given code and data? [Yes]
(c) Did you include scripts and commands that can be used to generate the figures and tables in your paper based on the raw results of the code, data, and instructions given? [Yes]
(d) Did you ensure sufficient code quality such that your code can be safely executed and the code is properly documented? [Yes]
(e) Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixed hyperparameter settings, and how they were chosen)? [Yes]
(f) Did you ensure that you compared different methods (including your own) exactly on the same benchmarks, including the same datasets, search space, code for training and hyperparameters for that code? [N/A]
(g) Did you run ablation studies to assess the impact of different components of your approach? [N/A]
(h) Did you use the same evaluation protocol for the methods being compared? [Yes]
(i) Did you compare performance over time? [Yes]
(j) Did you perform multiple runs of your experiments and report random seeds? [Yes]
(k) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No] We aggregate data into AUC instead of reporting error bars on fixed-budget or fixed-target results.
(l) Did you use tabular or surrogate benchmarks for in-depth evaluations? [N/A]
(m) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] We did not record the computation time needed while running the experiments.
(n) Did you report how you tuned hyperparameters, and what time and resources this required (if they were not automatically tuned by your AutoML method, e.g., in a NAS approach; and also hyperparameters of your own method)? [N/A]

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [N/A]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]

5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

Acknowledgements.
Our work is financially supported by ANR-22-ERCS-0003-01 project VARIA-TION, by the CNRS INS2I project IOHprofiler, and by the NWO DACCOMPLI project (628.011.002).References[1]Hossein Alipour, Mario Andrés Muñoz, and Kate Smith-Miles. 2023. Enhanced instancespace analysis for the maximum flow problem. Eur. J. Oper. Res. 304, 2 (2023), 411–428.https://doi.org/10.1016/j.ejor.2022.04.012[2]Anne Auger and Nikolaus Hansen. 2020. A SIGEVO Impact Award for a Paper Arising fromthe COCO Platform: A Summary and Beyond. https://evolution.sigevo.org/issues/HTML/sigevolution-13-4/home.html . Issue 3.[3]Nacim Belkhir, Johann Dréo, Pierre Savéant, and Marc Schoenauer. 2017. Per instance al-gorithm configuration of CMA-ES with limited budget. In Proc. of Genetic and EvolutionaryComputation (GECCO’17) . ACM, 681–688. https://doi.org/10.1145/3071178.3071343[4]Jakob Bossek, Pascal Kerschke, Aneta Neumann, Markus Wagner, Frank Neumann, and HeikeTrautmann. 2019. Evolving diverse TSP instances by means of novel and creative mutationoperators. In Proc. of Conference on Foundations of Genetic Algorithms (FOGA’19) , TobiasFriedrich, Carola Doerr, and Dirk V. Arnold (Eds.). ACM, 58–71. https://doi.org/10.1145/3299904.334030711[5]Jakob Bossek and Markus Wagner. 2021. Generating instances with performance differencesfor more than just two algorithms. In Proc. of Genetic and Evolutionary Computation Conference(GECCO’21, Companion material) , Krzysztof Krawiec (Ed.). ACM, 1423–1432. https://doi.org/10.1145/3449726.3463165[6]Jacob de Nobel, Diederick Vermetten, Hao Wang, Carola Doerr, and Thomas Bäck. 2021.Tuning as a means of assessing the benefits of new ideas in interplay with existing algorithmicmodules. In Proc. of Genetic and Evolutionary Computation Conference (GECCO’21, Companionmaterial) . ACM, 1375–1384. https://doi.org/10.1145/3449726.3463167[7]Jacob de Nobel, Furong Ye, Diederick Vermetten, Hao Wang, Carola Doerr, and Thomas Bäck.2021. IOHexperimenter: Benchmarking Platform for Iterative Optimization Heuristics. CoRRabs/2111.04077 (2021). arXiv:2111.04077 https://arxiv.org/abs/2111.04077[8]Konstantin Dietrich and Olaf Mersmann. 2022. Increasing the Diversity of Benchmark Func-tion Sets Through Affine Recombination. In Proc. of Parallel Problem Solving from Nature(PPSN’22) (LNCS, Vol. 13398) , Günter Rudolph, Anna V. Kononova, Hernán E. Aguirre, PascalKerschke, Gabriela Ochoa, and Tea Tusar (Eds.). Springer, 590–602. https://doi.org/10.1007/978-3-031-14714-2_41[9]Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff.2021. COCO: A platform for comparing continuous optimizers in a black-box setting. Optim.Methods Softw. 36, 1 (2021), 114–144.[10] Nikolaus Hansen, Steffen Finck, Raymond Ros, and Anne Auger. 2009. Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions . Technical Report RR-6829.INRIA. https://hal.inria.fr/inria-00362633/document[11] Annika Jacobsen, Ricardo de Miranda Azevedo, Nick S. Juty, Dominique Batista, Simon J.Coles, Ronald Cornet, Mélanie Courtot, Mercè Crosas, Michel Dumontier, Chris T. A. Evelo,Carole A. Goble, Giancarlo Guizzardi, Karsten Kryger Hansen, Ali Hasnain, Kristina M. Hettne,Jaap Heringa, Rob W. W. Hooft, Melanie Imming, Keith G. Jeffery, Rajaram Kaliyaperumal,Martijn G. Kersloot, Christine R. 
Kirkpatrick, Tobias Kuhn, Ignasi Labastida, Barbara Magagna,Peter McQuilton, Natalie Meyers, Annalisa Montesanti, Mirjam van Reisen, Philippe Rocca-Serra, Robert Pergl, Susanna-Assunta Sansone, Luiz Olavo Bonino da Silva Santos, JulianeSchneider, George O. Strawn, Mark Thompson, Andra Waagmeester, Tobias Weigel, Mark D.Wilkinson, Egon L. Willighagen, Peter Wittenburg, Marco Roos, Barend Mons, and ErikSchultes. 2020. FAIR Principles: Interpretations and Implementation Considerations. DataIntell. 2, 1-2 (2020), 10–29. https://doi.org/10.1162/dint_r_00024[12] Pascal Kerschke, Holger H. Hoos, Frank Neumann, and Heike Trautmann. 2019. AutomatedAlgorithm Selection: Survey and Perspectives. Evol. Comput. 27, 1 (2019), 3–45. https://doi.org/10.1162/evco_a_00242[13] Ana Kostovska, Anja Jankovic, Diederick Vermetten, Jacob de Nobel, Hao Wang, Tome Eftimov,and Carola Doerr. 2022. Per-run Algorithm Selection with Warm-starting using Trajectory-based Features. In Proc. of Parallel Problem Solving from Nature (PPSN’22) (LNCS, Vol. 13398) .Springer, 46–60. https://doi.org/10.1007/978-3-031-14714-2_4 Free version availableathttps://arxiv.org/abs/2204.09483 .[14] Ana Kostovska, Diederick Vermetten, Carola Doerr, Sašo Džeroski, Panče Panov, and TomeEftimov. 2022. OPTION: OPTImization Algorithm Benchmarking ONtology. IEEE Trans. Evol.Comput. (2022). https://doi.org/10.1109/TEVC.2022.3232844 To appear. Free versionavailable at https://arxiv.org/abs/2211.11332 .12[15] Benjamin Lacroix and John McCall. 2019. Limitations of Benchmark Sets and LandscapeFeatures for Algorithm Selection and Performance Prediction. In Proc. of Genetic and Evolution-ary Computation (GECCO’19) (Prague, Czech Republic). ACM, New York, NY, USA, 261–262.https://doi.org/10.1145/3319619.3322051[16] Thibault Lechien, Jorik Jooken, and Patrick De Causmaecker. 2023. Evolving test instancesof the Hamiltonian completion problem. Comput. Oper. Res. 149 (2023), 106019. https://doi.org/10.1016/j.cor.2022.106019[17] Fu Xing Long, Bas van Stein, Moritz Frenzel, Peter Krause, Markus Gitterle, and Thomas Bäck.2022. Learning the characteristics of engineering optimization problems with applications inautomotive crash. In Proc. of Genetic and Evolutionary Computation (GECCO’22) , Jonathan E.Fieldsend and Markus Wagner (Eds.). ACM, 1227–1236. https://doi.org/10.1145/3512290.3528712[18] Fu Xing Long, Diederick Vermetten, Bas van Stein, and Anna V. Kononova. 2022. BBOBInstance Analysis: Landscape Properties and Algorithm Performance across Problem In-stances. CoRR abs/2211.16318 (2022). https://doi.org/10.48550/arXiv.2211.16318arXiv:2211.16318[19] Alejandro Marrero, Eduardo Segredo, Coromoto León, and Emma Hart. 2022. A Novelty-Search Approach to Filling an Instance-Space with Diverse and Discriminatory Instancesfor the Knapsack Problem. In Proc. of Parallel Problem Solving from Nature (PPSN’22) (LNCS,Vol. 13398) . Springer, 223–236. https://doi.org/10.1007/978-3-031-14714-2_16[20] Olaf Mersmann, Bernd Bischl, Heike Trautmann, Mike Preuss, Claus Weihs, and GünterRudolph. 2011. Exploratory landscape analysis. In Proc. of Genetic and Evolutionary Computa-tion (GECCO’11) . ACM, 829–836.[21] Mario A. Muñoz and Kate Smith-Miles. 2020. Generating New Space-Filling Test Instances forContinuous Black-Box Optimization. Evol. Comput. 28, 3 (2020), 379–404. https://doi.org/10.1162/evco_a_00262[22] Mario Andrés Muñoz, Tao Yan, Matheus R. Leal, Kate Smith-Miles, Ana Carolina Lorena,Gisele L. Pappa, and Rômulo Madureira Rodrigues. 2021. 
An Instance Space Analysis ofRegression Problems. ACM Trans. Knowl. Discov. Data 15, 2 (2021), 28:1–28:25. https://doi.org/10.1145/3436893[23] Ana Nikolikj, Carola Doerr, and Tome Eftimov. 2023. RF+ clust for Leave-One-Problem-OutPerformance Prediction. In Proc. of Applications of Evolutionary Computation (Evo Applica-tions’23) . Springer, 285–301.[24] Raphael Patrick Prager. 2022. pFlacco. https://pypi.org/project/pflacco/ .[25] Raphael Patrick Prager and Heike Trautmann. 2023. Nullifying the Inherent Bias of Non-invariant Exploratory Landscape Analysis Features. In Proc. of Applications of EvolutionaryComputation (Evo Applications’23) . Springer, 411–425.[26] Jérémy Rapin and Olivier Teytaud. 2018. Nevergrad - A gradient-free optimization platform.https://GitHub.com/FacebookResearch/Nevergrad .[27] Quentin Renau, Johann Dreo, Carola Doerr, and Benjamin Doerr. 2019. Expressiveness and Ro-bustness of Landscape Features. In Proc. of Genetic and Evolutionary Computation (GECCO’19)(Prague, Czech Republic). ACM, 2048–2051. https://doi.org/10.1145/3319619.332691313[28] Gresa Shala, André Biedenkapp, Noor H. Awad, Steven Adriaensen, Marius Lindauer, andFrank Hutter. 2020. Learning Step-Size Adaptation in CMA-ES. In Proc. of Parallel ProblemSolving from Nature (PPSN’20) (LNCS, Vol. 12269) . Springer, 691–706. https://doi.org/10.1007/978-3-030-58112-1_48[29] Ye Tian, Shichen Peng, Xingyi Zhang, Tobias Rodemann, Kay Chen Tan, and Yaochu Jin.2020. A Recommender System for Metaheuristic Algorithms for Continuous OptimizationBased on Deep Recurrent Neural Networks. IEEE Trans. Artif. Intell. 1, 1 (2020), 5–18. https://doi.org/10.1109/TAI.2020.3022339[30] Diederick Vermetten. 2023. modular Differential Evolution. https://github.com/Dvermetten/ModDE .[31] Diederick Vermetten, Furong Ye, Thomas Bäck, and Carola Doerr. 2023. Reproducibil-ity files and additional figures. Code repository: https://github.com/Dvermetten/Many-affine-BBOB Data and figure repository: https://doi.org/10.5281/zenodo.7826036 .[32] Diederick Vermetten, Furong Ye, and Carola Doerr. 2023. Using Affine Combinations of BBOBProblems for Performance Assessment. CoRR abs/2303.04573 (2023). https://doi.org/10.48550/arXiv.2303.04573 arXiv:2303.04573[33] Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, and Thomas Bäck. 2022. IOH-analyzer: Detailed Performance Analysis for Iterative Optimization Heuristic. ACM Trans.Evol. Learn. Optim. 2, 1 (2022), 3:1–3:29. https://doi.org/10.1145/3510426 IOHanalyzeris available at CRAN, on GitHub, and as web-based GUI, see https://iohprofiler.github.io/IOHanalyzer/ for links.[34] Estefania Yap, Mario Andrés Muñoz, and Kate Smith-Miles. 2022. Informing MultiobjectiveOptimization Benchmark Construction Through Instance Space Analysis. IEEE Trans. Evol.Comput. 26, 6 (2022), 1246–1260. https://doi.org/10.1109/TEVC.2022.3205165[35] Martin Zaefferer and Frederik Rehbach. 2020. Continuous Optimization Benchmarks by Simula-tion. In Proc. of Parallel Problem Solving from Nature (PPSN’20) (LNCS, Vol. 12269) , Thomas Bäck,Mike Preuss, André H. Deutz, Hao Wang, Carola Doerr, Michael T. M. Emmerich, and HeikeTrautmann (Eds.). Springer, 273–286. https://doi.org/10.1007/978-3-030-58112-1_1914 |
71eJdMzCCIi | AlphaD3M: An Open-Source AutoML Libraryfor Multiple ML TasksRoque Lopez1Raoni Lourenço2Remi Rampin1Sonia Castelo1Aécio Santos1Jorge Ono1Claudio Silva1Juliana Freire11New York University2University of LuxembourgAbstract We present AlphaD3M, an open-source Python library that supports a wide range of machinelearning tasks over different data types. We discuss the challenges involved in supportingmultiple tasks and how AlphaD3M addresses them by combining deep reinforcement learningand meta-learning to construct pipelines over a large collection of primitives effectively.To better integrate the use of AutoML within the data science lifecycle, we have builtan ecosystem of tools around AlphaD3M that support user-in-the-loop tasks, includingselecting suitable pipelines and developing custom solutions for complex problems. Wepresent use cases that demonstrate some of these features. We report the results of adetailed experimental evaluation showing that AlphaD3M is effective and derives high-quality pipelines for a diverse set of problems with performance comparable or superior tostate-of-the-art AutoML systems.1 IntroductionAutomated Machine Learning (AutoML) has emerged as an alternative to automatically synthesizemachine learning (ML) pipelines, thereby democratizing ML techniques to non-experts as wellas increasing the productivity of data scientists. Different approaches have been proposed forAutoML systems. Some focus on specific components of an ML pipeline, such as hyperparameteroptimization or model selection, while others, given a dataset and a prediction task, generateend-to-end pipelines that encompass data pre-processing, feature, and model selection (Hutteret al., 2019). Most end-to-end systems are designed to work with tabular data and only supportclassification and regression problems (Feurer et al., 2015; LeDell and Poirier, 2020; Olson and Moore,2016; Kotthoff et al., 2017). Cloud AutoML (Google Cloud AutoML, 2020) and AutoGluon (Ericksonet al., 2020) also create pipelines to classify text and images and perform object detection tasks.However, these systems do not support more complex data types such as graphs, time series, audio,and video, limiting the types of problems they can address. Table 1 shows the set of task typessupported by different AutoML systems.In the context of DARPA’s Data-Driven Discovery of Models (D3M) program (Elliott, 2020),several AutoML systems have been developed to support a wide range of data types and MLtasks using an extensive set of computational primitives as building blocks – we refer to theseasmulti-task AutoML systems (MT-AutoML). MT-AutoML systems face an essential challenge:effectively searching an ample space of primitives required to synthesize pipelines for a broadrange of tasks and data types. To prune the search space, many D3M MT-AutoML systems usemanually-crafted templates and grammars (D3M, 2022) that prescribe combinations of primitivesthat make sense for different problems. 
This, in turn, leads to other challenges: creating thesetemplates or grammars is not only time-consuming but failing to include the necessary rules thatcover the relevant primitives (and their combination) for multiple task types can negatively impactthe ability of an MT-AutoML system to derive performant pipelines.AutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track ©2023 the authors, released under CC BY 4.0Table 1: Tasks supported by different AutoML Systems.SystemsTabularClassificationTextclassificationImageclassificationAudioclassificationVideoclassificationTabularRegressionClusteringTime seriesforecastingTime seriesclassificationObjectdetectionLUPICommunitydetectionLinkpredictionGraphmatchingVertexclassificationCollaborativefilteringSemisupervisedclassificationAutoGluon ✓✓✓ ✓ ✓ ✓AutoWEKA ✓ ✓Auto-Sklearn ✓ ✓Cloud AutoML ✓✓✓ ✓✓ ✓H2O ✓✓ ✓TPOT ✓ ✓AlphaD3M ✓✓✓✓✓✓✓✓✓ ✓✓✓✓✓✓✓✓✓ ✓✓✓✓✓✓ ✓✓✓✓✓✓ ✓✓✓✓✓✓ ✓✓✓ ✓✓✓ ✓✓✓✓✓✓ ✓✓✓We present AlphaD3M, an open-source AutoML library1that supports a wide range of dataand problem types (see Table 1). AlphaD3M introduces new techniques to navigate the large searchspaces MT-AutoML systems must navigate effectively. They include an algorithm that appliesmeta-learning to automatically derive task-based context-free grammars (CFGs) which cover amultitude of problems; and a novel search strategy that, based on previously generated pipelinesand their performance, prioritizes primitives that are correlated with good pipeline performance.AlphaD3M includes components that aim to support usability and integration with other tasksin the data science lifecycle, from data exploration and model summarization to model deployment.It is possible to extend AlphaD3M and combine it with other tools through its flexible API. Forexample, its integration with the PipelineProfile (Ono et al., 2021) allows users to explore andcompare the set of derived pipelines visually. Besides describing the API and these components, wealso present case studies demonstrating how users can improve the ML solutions via interaction inAlphaD3M.We conducted a detailed experimental evaluation to assess the ability of AlphaD3M to handlea rich set of tasks and data types as well as to compare its performance against state-of-the-artAutoML and MT-AutoML systems. We used two benchmarks: (a) a collection of 112 datasetsthat covers seventeen different ML tasks, and (b) the OpenML AutoML Benchmark for tabularclassification problems. Our results show that the search strategies used by AlphaD3M are effective:the system generates pipelines whose performance is superior or on par with those derived byother systems, including systems that focus on a small set of problems and have to navigate a muchsmaller search space.2 Related WorkTask Coverage. Many AutoML systems have been proposed to work with tabular data, for example:Auto-sklearn (Feurer et al., 2015), TPOT (Olson and Moore, 2016), and H2O (LeDell and Poirier,2020). The deep reinforcement learning algorithm proposed by Drori et al. (2019) aimed to supportmultiple learning tasks and data types, however, its implementation was limited to classificationand regression tasks over tabular and text data. AutoML systems developed in industry, such asCloud AutoML by Google and AutoGluon by Amazon, handle text and image data, but still supporta limited number of learning tasks. In contrast, AlphaD3M supports a wide range of data types(tabular, text, images, audio, video, and graph) and a rich set of ML tasks as shown in Table 1.Data and Model Exploration. 
Interactive data analytics systems such as Visus (Santos et al., 2019),TwoRavens (Gil et al., 2019), and Snowcat (Cashman et al., 2018) have been developed to guideusers throughout the model-building process, from exploring the input data to comparing the MLpipelines produced by AutoML systems. They target primarily domain experts who have little or1https://gitlab.com/ViDA-NYU/d3m/alphad3m2no expertise in ML and thus lack support for the customization of pipelines for complex problems.These systems trade off flexibility for ease of use. As such, they are limited to the operationsimplemented in their visual interfaces; extensive and time-consuming changes in their workflowsare required to support new data types and tasks (e.g., graph data). Other approaches mimic theinterface of traditional ML libraries, through which developers often build a single solution for agiven task (Grafberger et al., 2021). AlphaD3M allows ML experts to explore the derived pipelinesand customize them through a user-friendly interface within a Jupyter Notebook environment. Inaddition, instead of retrieving only the best pipeline, AlphaD3M returns all valid pipelines, ranks,and presents them to the user for comparison, refinement, and selection.3 The AlphaD3M LibraryFigure 1: Overview of AlphaD3M.AlphaD3M is a multi-task Au-toML system. It is imple-mented in Python and canbe used via pipinstallationor Docker. Figure 1 showsan overview of this libraryand its components. Tobuild ML pipelines, AlphaD3Muses a rich set of primitivesand a meta-learning databasefrom the D3M ecosystem D3M(2022). The pipeline search is conducted by four modules which: (a) automatically construct oftask-specific grammars; (b) prioritize primitives that are more likely to be effective; (c) synthesizepipelines using Monte Carlo Tree Search and Neural Networks (Drori et al., 2019); and (d) tunehyperparameters. The library implements a Python API through which users can define the problemto be solved, explore the input data, obtain model summaries, analyze and compare the producedpipelines, as well as improve and deploy them.3.1 The D3M EcosystemPrimitives. AlphaD3M uses a comprehensive collection of primitives developed by performersin the D3M program as well as from open-source libraries (e.g., scikit-learn). In total, there are312 primitives available for different steps in ML pipelines, including data pre-processing, featureextraction, feature selection, prediction, and clustering (D3M Primitives, 2022), and implementstate-of-the-art methods, such as ResNet50 (He et al., 2016), ARIMA (Wilson, 2016), among others.The Marvin Meta-Learning Database. Marvin is an open corpus of curated ML pipelines, datasets,and problems (Marvin, 2020). All pipelines in Marvin share the same set of primitives and arespecified using the D3M format. Marvin stores approximately 2.5 million pipelines executed over600 datasets. Since data scientists and AutoML systems that use different search strategies haveproduced these pipelines, the database covers a wide variety of pipeline patterns. As discussedbelow, we leverage the data in Marvin to assist in and improve the AlphaD3M search process. Tothe best of our knowledge, ours is the first work that explores this corpus.3.2 Pipeline SearchThe automatic synthesis of pipelines is a combinatorial problem in which we must find the bestcombinations of primitives and their hyperparameters. With 312 primitives and over 1,500 hy-perparameters in the D3M ecosystem, the search space becomes prohibitively large. 
For instance,considering just the classification task over tabular data, there are 22 data cleaning, 87 data trans-formation, and 44 classifier primitives, leading to 84,216 possible pipelines to test. AlphaD3M usesa multi-pronged approach to manage this search space described below.3APipeline Synthesis Using Monte Carlo Tree Search and Neural Networks. To synthesize theML pipelines, AlphaD3M uses the strategy introduced by Drori et al. (2019), which is based on asingle-player game technique inspired by AlphaZero (Silver et al., 2017). It applies model-basedreinforcement learning with a neural network sequence model, and a Monte Carlo Tree Search(MCTS). The metadata encoding the pipeline, the dataset, and the task are analogous to an entiregame board configuration in AlphaZero. The possible game states consist of all valid pipelinesgenerated from a set of primitives and modified by actions guided by a manually-designed CFG.The model outputs a sequence of primitives. Pipelines are constructed by an LSTM. Given a state scomposed of a vector encoding the whole board configuration (dataset, task, pipeline), the neuralnetwork predicts the probabilities P(s,a)over actions afrom a state s. This process produces aset of action sequences Sthat describe a pipeline, which in turn solves task Ton datasetD. Thenetwork also outputs an estimate of pipeline performance v. The reinforcement learning algorithmtakes the predictions (P(s,a),v(s))produced be the neural network and uses them in the MCTS byrunning multiple simulations to search for the pipeline sequence Rwith the best evaluation. Animportant benefit of this strategy is that it learns to synthesize pipelines.BAutomatic Generation of Task-Based CFG via Meta-Learning. Manually designed CFGs havemany limitations, notably they may not cover all applicable rules and pipeline structures andconsequently prevent the search process from exploring desirable pipelines that do not fit thegrammar. Furthermore, to create the production rules or patterns in the grammar, a user needsto have knowledge of all the available primitives for a specific task and how they work. For largeprimitive collections, this is a difficult task, which is compounded for MT-AutoML systems thatsupport multiple problem types. Instead of relying on manually created CFGs, we propose a newstrategy that uses meta-learning to derive grammars automatically and on the fly. It does so in twosteps: 1) it selects task-specific pipelines and datasets from a meta-learning database (MLDB), and2) uses these to derive a portfolio of pipeline patterns.Selecting Task-Oriented Datasets. Since AlphaD3M supports different tasks, we need to retrievefrom the Marvin MLDB pipelines produced for tasks and datasets similar to the ones we provided asinputs to the AutoML system. For instance, if we want to solve a clustering problem over a datasetD, we retrieve the pipelines used for this problem over datasets similar to D. To select relevantpipelines for a given problem Pover dataset D, we use the “task keywords" tag list provided in theproblem definition as features that describe the task to be solved, and search Marvin for pipelinesthat contain a similar set of keywords. The list is encoded as a bag-of-words (BOW). 
Since the setis small and most of the tags are non-standard words, e.g., collaborativeFiltering, timeSeries , it ispossible to obtain accurate matches with this simple approach.Given the set of relevant pipelines RP, we select a subset RPDcontaining pipelines that wereapplied on datasets similar to D. To determine whether two datasets are similar, we use datasetfeatures including semantic types (e.g., categorical, date-time) and missing values, and encode themusing one-hot encoding. Datasets are compared using cosine similarity.The current implementation uses 16 unique semantic types detected by the data-mart_profiler (Datamart Profiler Library, 2021). In contrast to other approaches like TabSim(Habibi et al., 2020), or StruBERT (Trabelsi et al., 2022), AlphaD3M uses semantic types because, inthe grammar, it defines components to handle the dataset’s features, such as categorical or date-timeencoders, and these components are strongly related to semantic types. Also, these approachesfocus on tabular datasets, AlphaD3M handles other types of datasets, like image and text datasets.Finally, running these approaches is a very time-consuming task.Creating a Portfolio of Patterns. After identifying similar datasets, the next step is to select the bestpipelines to create a portfolio of pipeline patterns. To select these AlphaD3M takes into considerationpipeline performance for different datasets. Some datasets are more challenging than others – theperformance of a pipeline can vary widely for different datasets. To properly compare pipeline4performance, AlphaD3M uses a strategy based on the average distance to minimum (ADTM) (Wistubaet al., 2015), which transforms the performance to the distance to the best-observed performancescaled between 0 and 1. In contrast to ADTM, which uses the misclassification rate, AlphaD3Muses the actual performance (the score) of the pipelines and thus, it applies the average distance tomaximum instead to select the best pipelines. It then transforms the primitives within the pipelinesto their classes. For instance, the primitive imputer.SKlearn belongs to the class IMPUTATION . Ifthere is a pipeline with this structure: [ imputer.SKlearn svm.SKlearn ], it is converted to this pattern:[IMPUTATION CLASSIFICATION ]. Unlike Feurer et al. (2021), which creates a unique portfolioof pipelines in an offline phase, AlphaD3M creates the portfolio online, based on the query taskand dataset. Also, the output is a portfolio of patterns, not of static pipelines, which allows moreflexibility to construct pipelines. These patterns are used as production rules of the grammar.Algorithm 1 in the Appendix describes the process of building the grammar.CPrioritization of Primitives. When a data scientist builds an ML pipeline, they start this processusing primitives that are known to perform well. For example, XGBoost or Random Forests aregood initial candidates for classification tasks. AlphaD3M follows this intuition to identify goodcandidate primitives for a specific task, using the data from Marvin. This prior knowledge aboutpromising primitives can be helpful to find better pipelines faster.Similar to Ono et al. (2021), AlphaD3M uses Pearson Correlation (PC) to estimate how mucha primitive contributes to the score of the pipeline. However, instead of using the raw scores, ituses the ADTMs values because they are scaled across different datasets. 
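Before turning to primitive importance, the retrieval and pattern-abstraction steps described above can be sketched as follows. The names and the small primitive-to-class mapping are illustrative; in AlphaD3M these come from the D3M metadata and the Marvin records, and the task-keyword matching works analogously on bag-of-words vectors.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def similar_datasets(query_profile, corpus_profiles, top_k=20):
    # profiles are one-hot vectors over semantic types and missing-value indicators
    scores = [(name, cosine(query_profile, prof)) for name, prof in corpus_profiles.items()]
    return sorted(scores, key=lambda item: -item[1])[:top_k]

PRIMITIVE_CLASS = {"imputer.SKlearn": "IMPUTATION", "svm.SKlearn": "CLASSIFICATION"}

def to_pattern(pipeline):
    # e.g. ["imputer.SKlearn", "svm.SKlearn"] -> ["IMPUTATION", "CLASSIFICATION"]
    return [PRIMITIVE_CLASS.get(p, p) for p in pipeline]
```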
AlphaD3M estimates the primitive importance using the PC between the primitive indicator vector p (p_i = 1 if pipeline i contains the primitive in question and p_i = 0 otherwise) and the pipeline score vector s, where s_i is the score of pipeline i. Since p and s are dichotomous and quantitative variables, respectively, the Point-Biserial Correlation coefficient (PBC) (Sheskin, 2003) is an appropriate correlation measure: it is mathematically equivalent to the PC but can be calculated with fewer operations. The correlation values are normalized between 0 and 1 (using min-max normalization).
AlphaD3M calculates these correlations for the primitives at two levels: (a) global, when it considers all the pipelines, and (b) local, when it considers only the pipelines for each pattern. The main goal is to estimate how important a primitive is for all the pipelines and for each pattern. Primitives with higher importance values should have priority during the pipeline search. Algorithm 2 describes the process of calculating the primitive importance values in detail (see the Appendix). To prioritize the usage of promising primitives, AlphaD3M includes these importance values in the MCTS formula:

U(s, a) = Q(s, a) + c (α P(s, a) + (1 − α) R(a)) √N(s) / (1 + N(s, a))    (1)

where Q(s, a) is the expected reward for action a (selection of primitive a) from state s, N(s, a) is the number of times action a was taken from state s, and N(s) is the number of times state s was visited. P(s, a) are the probabilities predicted by the neural network over actions a from state s, c is a constant which determines the amount of exploration, R(a) = G(a) · L(a), where G(a) and L(a) are the global and local importance of action a, and α is a coefficient that controls the trade-off between R(a) and P(s, a).

D. Decoupled Hyperparameter Tuning. Hyperparameter tuning is an essential part of fitting machine learning models (Bergstra et al., 2011; Snoek et al., 2015; Dolatnia et al., 2016). This is also the case for end-to-end ML pipelines that target different tasks, where all primitives contain hyperparameters, not just the estimators.
AlphaD3M performs hyperparameter tuning as an independent task, after the pipelines are constructed. It uses Bayesian optimization, which is the state of the art for hyperparameter tuning (Bergstra and Bengio, 2012; Snoek et al., 2015; Dolatnia et al., 2016) and was shown to outperform manual setting of parameters, grid search, and random search (Bergstra and Bengio, 2012; Turner et al., 2021).

Figure 2: (a) A code snippet to solve a semi-supervised classification task. (b) AlphaD3M allows users to inspect the contents of the input dataset, including column statistics and data types. (c) Analyzing ML pipelines through the integration with PipelineProfiler.

Tuning Top-k Pipelines. AlphaD3M synthesizes and evaluates the pipelines using primitives with default values for their hyperparameters. The pipelines are then ranked by performance, and the top-k pipelines are selected for tuning. AlphaD3M uses Sequential Model-Based Algorithm Configuration (SMAC) (Lindauer et al., 2022), a Python library for Bayesian optimization. It approximates a probability model of the performance outcome given a parameter configuration, which is updated from a history of executions. AlphaD3M selects the Gaussian Process models from SMAC to minimize an arbitrary acquisition function, using the Expected Improvement criterion to choose the parameter values for each iteration until a condition (number of iterations) is met.
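Eq. (1) can be transcribed directly as a selection score; the values of c and α below are placeholders, since the paper introduces them only as tunable constants.

```python
import math

def selection_score(q, p, r, n_state, n_action, c=1.0, alpha=0.5):
    """q = Q(s,a); p = P(s,a) from the neural network; r = R(a) = G(a) * L(a);
    n_state = N(s); n_action = N(s,a)."""
    prior = alpha * p + (1 - alpha) * r        # blend the learned prior with primitive importance
    return q + c * prior * math.sqrt(n_state) / (1 + n_action)

# during the MCTS simulations, the action maximizing this score is expanded next
```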
The acquisition function is designed to normalize the performance metric used to synthesize the pipelines between zero and one; as the number of pipeline execution evaluations increases, the acquisition function gets closer to zero. SMAC requires a set of unique parameters to assign values during its tuning procedure. Since AlphaD3M considers multiple primitives with identical names, it constructs an internal hierarchical nomenclature of parameters and defines their dependencies using ConfigSpace.

3.3 The API

We have developed a Python-based API that supports the process of building and exploring ML pipelines within a Jupyter Notebook environment. The API is integrated with the D3M AutoML systems and supports various dataset formats such as raw CSV, D3M, and OpenML. Model synthesis can be done with a few lines of code, as shown in Figure 2(a). The API allows users to (a) define a problem, (b) explore summaries of their input dataset, (c) summarize the produced pipelines, and (d) analyze and compare pipelines with respect to their performance scores and prediction outputs. We describe the main components of the API below.

Problem Definition. To build a predictive model, AlphaD3M needs a problem specification that describes a prediction problem, specifically: (a) the training dataset; (b) a target variable, i.e., what should be predicted by the predictive model; (c) the maximum running time that controls how long the search can take (to control the use of computational resources); (d) the desired performance metric; and (e) a list of task keywords that specify the kind of prediction task and, therefore, the techniques that should be used to solve the prediction problem. Figure 2(a) shows an example of how to define a problem in AlphaD3M.

Table 2: Comparison of MT-AutoML systems with respect to the number of supported task types, winner pipelines, and average rank by each system.
                             AlphaD3M  AutonML  Ensemble  Aika   Distil  Autoflow  Axolotl  Drori et al. (2019)
Unique ML tasks supported       17        16       15      17      15       16        14           2
Winner pipelines                49        39       30      21      20       11        10           7
Average rank                    2.85      2.89     2.90    3.99    4.68     5.32      5.73         6.85

Data Exploration. To build good predictive models, it is important to identify data attributes that lead to accurate predictions. The API provides multiple tools for data exploration. For example, it shows different visualizations (compact, detail, and column views) that summarize the content of tabular datasets (see Figure 2(b)).

Pipeline Summary. After the pipeline search is complete, users can display a leaderboard, train individual pipelines with the complete data, perform predictions, and evaluate them against a held-out dataset.

Pipeline Exploration. Users can analyze the produced pipelines using PipelineProfiler (Ono et al., 2021), which is fully integrated into AlphaD3M, as shown in Figure 2(c). PipelineProfiler is a visual analytics tool that enables users to compare and explore the pipelines generated by AutoML systems.

Pipeline Refinement and Deployment. AlphaD3M allows users to save and load pipelines, enabling them to reload pipelines later and perform analyses without having to re-run the AutoML search. They can load the saved pipelines at any time for training or testing purposes. In addition, users can export pipelines to Python code. This gives them more control and the ability to modify (and customize) the automatically generated pipelines (e.g., change hyperparameters or replace a classifier primitive).
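To illustrate the internal hierarchical nomenclature used in the decoupled tuning step of Sec. 3.2, the sketch below flattens a pipeline's hyperparameters into uniquely named entries and searches the resulting space. Random search stands in for SMAC's Bayesian optimization here, and all names, bounds, and helpers are illustrative rather than part of the AlphaD3M API.

```python
import random

def flat_space(pipeline_steps):
    """pipeline_steps: list of (primitive_name, {param: (low, high)}) tuples."""
    space = {}
    for i, (name, params) in enumerate(pipeline_steps):
        for p, bounds in params.items():
            space[f"step{i}.{name}.{p}"] = bounds   # e.g. 'step2.svm.SKlearn.C'
    return space

def random_search(space, evaluate, n_iter=50, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_iter):
        cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        score = evaluate(cfg)                       # e.g. cross-validation score of the pipeline
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```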
More information about the API can be found on the documentation web page: https://alphad3m.readthedocs.io/en/latest/api.html.

4 Evaluation

To demonstrate the effectiveness of AlphaD3M and its ability to handle a rich set of ML tasks, we compared AlphaD3M with state-of-the-art AutoML systems using two dataset collections. We also present use cases that show how useful, flexible, and easy to use AlphaD3M is.

4.1 Comparing AutoML Systems

D3M Datasets. This collection contains challenging datasets and covers a wide variety of tasks (a total of 17 task types) and data types (see Table 3). We evaluated all the systems using train and test splits. In most cases, the sizes are 0.8 and 0.2 for the train and test splits, respectively (see the dataset repository at https://datasets.datadrivendiscovery.org/d3m/datasets for details). For each dataset, we ran the systems over the train split for one hour, a time bound used by other works (Erickson et al., 2020; Feurer et al., 2021). After that, we evaluated the best pipeline produced by each system on the test split. For this experiment, we used 1 GPU (GeForce GTX 1080 Ti), 14 CPU cores (Intel Xeon E5-2695 v4, 2.10 GHz), and 56 GB of memory.
Table 2 shows the number of supported task types (ML tasks), winner pipelines (i.e., pipelines with the best performance for a given dataset), and the average rank of each AutoML system (the rank of each system among the 8 AutoML systems applied to each dataset). If two or more systems produce pipelines that tie for the best score, all of them are considered winner pipelines. As we can see, AlphaD3M and Aika were able to solve 17 out of 17 unique tasks, obtaining the best coverage. We also evaluated the effectiveness of AlphaD3M. It had the best overall performance, producing the best pipeline for 49 datasets with the best average rank (2.85).

Table 3: Number of datasets by task type and number of solved datasets by each AutoML system for all task types covered by the D3M datasets.
ML Task                            AlphaD3M  AutonML  Ensemble  Aika  Distil  Autoflow  Axolotl  Drori et al. (2019)
Tabular Classification (20)           20        19       18      20     18       17       13          20
Tabular Regression (11)               11        11       11       8      9        6        5           9
Image Classification (9)               9         8        9       9      7        7        2           0
Image Regression (1)                   1         1        1       1      1        1        1           0
Text Classification (9)                9         9        9       9      8        8        9           0
Audio Classification (2)               2         2        2       2      1        2        2           0
Graph Matching (3)                     3         3        3       3      2        2        2           0
Time series Forecasting (13)          13        13       13      13      2       12       10           0
Link Prediction (3)                    3         3        3       3      2        2        2           0
Collaborative Filtering (1)            1         0        1       1      0        1        0           0
Time series Classification (19)       19        19       19      17     19       15       19           0
Community Detection (3)                3         3        0       2      2        1        0           0
Video Classification (2)               2         2        2       2      0        2        2           0
Vertex Classification (4)              4         4        4       4      4        4        4           0
Object Detection (2)                   2         2        0       1      1        0        0           0
Semisupervised Classification (6)      6         6        6       3      6        4        3           0
LUPI (4)                               4         4        4       4      4        4        4           0

Analyzing the support for each task type individually in Table 3, we can see that AlphaD3M was able to produce valid pipelines for all the datasets and solved more datasets than the other systems. Even though AlphaD3M is inspired by Drori et al. (2019), Tables 2 and 3 clearly show the difference between them: AlphaD3M handles a larger number of tasks and produces many more winning pipelines. This shows that the different components of AlphaD3M are effective at handling the larger search spaces required by MT-AutoML systems.
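The summary statistics in Table 2 can be reproduced from a per-dataset score table along the following lines (one row per dataset, one column per system, higher scores are better); how ties and failed runs are ranked here is a choice on our part rather than a detail stated in the text.

```python
import pandas as pd

def summarize(scores: pd.DataFrame) -> pd.DataFrame:
    ranks = scores.rank(axis=1, ascending=False, method="min")   # rank the systems per dataset
    winners = scores.eq(scores.max(axis=1), axis=0).sum()        # ties all count as winners
    return pd.DataFrame({"winner pipelines": winners, "average rank": ranks.mean()})
```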
The detailed scores obtained by each system in all theD3M datasets and the average rank by tasks can be found in Table 4 and Table 5 (Appendix).Additionally, we calculated the number of winner pipelines for the top-3 systems only in thedatasets where all of them produced pipelines. AlphaD3M, Ensemble, and AutonML systems got 48,42, and 38, respectively. These results confirm that the superior performance of AlphaD3M is notsolely due to its support for a broader range of ML tasks.Figure 3: Ablation study for the different components of AlphaD3M.We performed an ablationstudy to analyze the contribu-tion of each component of Al-phaD3M on a random sample offive D3M datasets for classifica-tion tasks2(datasets for whichAlphaD3M obtained the best, av-erage and worst performances).Figure 3 shows the best scoresfor each dataset reached by thefull AlphaD3M and the versionswith some components removed(or replaced). As we can see, us-ing all components leads to thebest results.To evaluate the importance of the automatic grammar, we replaced it with the manually-designed grammar used in Drori et al. (2019). For POKER ,SPECTRO ,WORDS , and SICK datasets,when the manual grammar was used, AlphaD3M was not able to produce valid pipelines, whichhighlights the importance of automatically generating the grammar. These datasets contain multi-ple types of features like text, DateTime, etc., which were not covered by the manually-constructed8Figure 4: Performance of AutoML systems in OpenML Benchmark. X-axis shows the accuracy values(normalized by the best score), and Y-axis shows the IDs of the OpenML tasks.grammar. The prioritization of primitives also plays an important role in AlphaD3M. When thisfeature was not used, the performance decreased, e.g. in POKER ,SPECTRO , and LIBRAS datasets. Aswe can see in Figure 3, in most of the datasets, when we removed the hyperparameter tuning com-ponent, AlphaD3M obtained the same results. This suggests that the heuristic used by AlphaD3M(tuning only the top- kpipelines) may miss good pipelines that would attain better performanceafter tuning. In future work, we plan to investigate alternative strategies for hyperparameter tuningthat attain a better balance of computational cost and pipeline performance.OpenML Benchmark. Similar to Erickson et al. (2020), we compared our system with AutoWEKA,TPOT, H2O, AutoGluon, and Auto-Sklearn 2.0 (hereinafter referred to as Auto-Sklearn) on the 39OpenML datasets (Gijsbers et al., 2019). This corpus contains a variety of datasets intended torepresent real-world data science problems and covers binary and multiclass classification tasks.We used AMLB (Gijsbers et al., 2022) to compare the systems, running them locally for one hourusing 1 fold split and accuracy as the optimization metric. For this experiment, we used 4 CPUcores (Intel Xeon Platinum 8268 Processor, 2.9 GHz) and 32 GB memory.Figure 4 shows the scores (normalized by the best score) of all the systems (the detailed scorescan be found in Tables 6 and 7 in the Appendix). As we can see, AlphaD3M produced pipelineswhose performance is on par with the other AutoML systems. We also calculated the averagerank for all the systems for the 39 datasets. AlphaD3M got 3.64 of average rank, while Auto-Sklearn, AutoGluon, H2O, TPOT, and AutoWEKA got 2.08, 2.33, 3.08, 3.72, and 5.10, respectively.To understand better these numbers, we also estimated the performance gain of the pipelines foundby AlphaD3M against pipelines generated by other systems. 
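Both quantities are straightforward to compute from the per-task scores. The sketch below, in R and with toy values, shows the best-score normalization used in Figure 4 and one plausible reading of the per-task gain reported in Table 6 (Appendix) — the difference between AlphaD3M's score and the mean of the other systems' scores; the exact definition is not spelled out in the text, so this is an assumption:

# Toy accuracy table for three OpenML tasks (illustrative values, not the real results).
acc <- data.frame(
  alphad3m    = c(0.79, 0.96, 0.81),
  autosklearn = c(0.76, 0.98, 0.86),
  autogluon   = c(0.76, 0.98, 0.88)
)
best <- apply(acc, 1, max)                    # best score obtained on each task
normalized <- as.matrix(acc) / best           # per-task normalization, as in Figure 4
# Assumed gain definition: AlphaD3M minus the mean of the remaining systems, per task.
gain <- acc$alphad3m - rowMeans(acc[, c("autosklearn", "autogluon")])
mean(gain)                                    # benchmark-level average gain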
The average gain of AlphaD3M forthe OpenML datasets was +0.001, which shows that, in general, AlphaD3M attained good resultsfor this collection. We analyzed the 3 datasets ( task_146195 ,task_167119 andtask_168331 ) forwhich AlphaD3M generated pipelines with performance lower than other systems. This happenedbecause these datasets are imbalanced with multiple classes. The performance of AlphaD3M forthese could be improved with the inclusion of primitives to handle imbalanced datasets. Thisunderscores the importance of being able to add primitives to AutoML systems.Concerning the coverage, it is important to highlight that AlphaD3M succeeded for 38 datasets.Auto-Sklearn, AutoGluon, H2O, TPOT, and AutoWEKA solved 39, 39, 34, 29, and 28 datasets,respectively. As pointed out by Gijsbers et al. (2022), the results of Auto-Sklearn on the OpenMLdatasets must be considered very carefully, since there could be an overlap between the datasetsused in its meta-learning process and the ones used in the evaluation. It’s important to highlightthat none of the OpenML datasets are included in the version of Marvin that was used by AlphaD3Min these experiments.94.2 Use CasesPivoting across ML tasks. Predicting hostile actions against ships and mariners worldwide isimportant to prevent piracy and prosecute the aggressors. Consider that an analyst from the U.S.National Geospatial-Intelligence Agency (NGA) is building a model using the Anti-Shipping ActivityMessages dataset (ASAM, 2021). She wants to identify which records mention guns and whichrecords do not. This is a non-trivial problem since a variety of terms (e.g., pistol, rifle, etc.) indicatewhether a gun is present. This dataset contains 8,000 documents, of which 1,400 were annotated.She started by using AlphaD3M to create models using the 1,400 labeled documents setting themodel search to 1 hour. AlphaD3M derived high-quality pipelines – the best pipeline had 0.90 ofF1. However, she wondered whether these pipelines could be further improved, in particular, byleveraging the 6,600 unlabeled documents through semi-supervised learning. AlphaD3M supportsa wide range of tasks, including semi-supervised learning – users just need to add the keyword“semiSupervised” as a parameter. The user then ran a new experiment using the 1,400 labeled and6,000 unlabeled instances as a training dataset. The results improved from 0.90 to 0.95 of F1. Theseexperiments show that by using AlphaD3M, data scientists can improve the results, pivoting fromone task (classification) to another (semi-supervised classification) very quickly.Reducing pipeline execution time through models exploration. Using content analysis andpredictive modeling for conflict assessment is a common approach for conflict analysts to guidepolicy-making decisions D’Orazio (2020). Consider a conflict analyst trying to categorize explosionevents that involve terrorist activities. She uses the explosion events dataset (Raleigh et al., 2010)that contains 20,000 articles describing events that involve terrorist activities. An article is relevantif it describes attacks involving explosions. To create classification models, she ran AlphaD3M for 1hour. The system synthesized high-quality pipelines, with F1 values around 0.9. To identify themost suitable pipeline, she used the PipelineProfiler to explore the derived models. She observedthat the top-10 pipelines had similar scores but their execution times were above 800 seconds. 
Toaddress this problem, she tried a different strategy: combining progressive sampling and activelearning to reduce the number of training data from 20,000 to 3,200 documents. Then, she re-ranAlphaD3M using the smaller set as the training dataset, while keeping the rest of the workflowunchanged. The top F1 score improved from 0.91 to 0.96 and the time from 800 to 125 seconds.5 ConclusionsWe introduced AlphaD3M, an MT-AutoML library that automatically synthesizes end-to-endpipelines for 17 ML tasks and 6 different data types. AlphaD3M introduces new methods to auto-matically derive grammars and prioritize primitives, which are essential for effectively managingthe large space MT-AutoML systems must search. In addition, AlphaD3M embraces a user-in-the-loop approach, through an API that allows the users to explore the input data and the derived MLpipelines, as well as customized the pipelines. We presented a detailed experimental evaluationthat compares our approach to several state-of-the-art AutoML systems over different problemsand datasets. The results suggest that AlphaD3M is effective: not only does it solve a larger numberof problem types, but it also derives pipelines with performance that is superior or on par withthose derived by other systems.Although AlphaD3M’s approach is primitive-agnostic, so far, it only relies on the D3M primitivesto build ML pipelines. We plan to extend AlphaD3M by including additional state-of-the-artand more-recent primitives, e.g., models published in HuggingFace or PyTorch Hub repositories.Moreover, we would like to improve the system interoperability with existing open-source primitivesthat use standard APIs such as the well-known scikit-learn’s fit-predict API.Acknowledgements. This work was partially supported by the DARPA D3M program. Anyopinions, findings, conclusions, or recommendations expressed in this material are those of theauthors and do not necessarily reflect the views of DARPA.10ReferencesASAM (2021). ASAM: Anti-Shipping Activity Messages. https://msi.nga.mil/Piracy .Bergstra, J., Bardenet, R., Bengio, Y., and Kégl, B. (2011). Algorithms for Hyper-Parameter Opti-mization. In Proceedings of NIPS , pages 2546–2554.Bergstra, J. and Bengio, Y. (2012). Random Search for Hyper-parameter Optimization. JMLR , pages281–305.Cashman, D., Humayoun, S. R., Heimerl, F., Park, K., Das, S., Thompson, J., Saket, B., Mosca, A.,Stasko, J. T., Endert, A., Gleicher, M., and Chang, R. (2018). Visual Analytics for AutomatedModel Discovery. CoRR .D3M (2022). D3M Website. https://datadrivendiscovery.org .D3M Primitives (2022). D3M Primitives Website. https://gitlab.com/datadrivendiscovery/primitives/-/tree/master/primitives .Datamart Profiler Library (2021). Datamart Profiler Website. https://pypi.org/project/datamart-profiler/ .Dolatnia, N., Fern, A., and Fern, X. (2016). Bayesian Optimization with Resource Constraints andProduction. In Proceedings of ICAPS , pages 115–123.D’Orazio, V. (2020). Conflict Forecasting and Prediction. In Oxford Research Encyclopedia ofInternational Studies . Oxford University Press.Drori, I., Krishnamurthy, Y., Lourenco, R., Rampin, R., Cho, K., Silva, C., and Freire, J. (2019).Automatic Machine Learning by Pipeline Synthesis using Model-based Reinforcement Learningand a Grammar. In 6th ICML Workshop on Automated Machine Learning .Elliott, J. (2020). DARPA Data-Driven Discovery of Models (D3M) Program. 
https://www.darpa.mil/program/data-driven-discovery-of-models .Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., and Smola, A. (2020). AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data. arXiv preprint arXiv:2003.06505 .Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. (2021). Auto-Sklearn 2.0:Hands-free AutoML via Meta-Learning.Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficientand Robust Automated Machine Learning. In Cortes, C., Lawrence, N., Lee, D., Sugiyama, M.,and Garnett, R., editors, Advances in Neural Information Processing Systems , volume 28. CurranAssociates, Inc.Gijsbers, P., Bueno, M. L. P., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren,J. (2022). Amlb: an automl benchmark.Gijsbers, P., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J. (2019). An Open SourceAutoML Benchmark. In 6th ICML Workshop on Automated Machine Learning .Gil, Y., Honaker, J., Gupta, S., Ma, Y., D’Orazio, V., Garijo, D., Gadewar, S., Yang, Q., and Jahanshad, N.(2019). Towards Human-guided Machine Learning. In Proceedings of the Conference on IntelligentUser Interfaces (IUI) , pages 614–624. ACM.11Google Cloud AutoML (2020). Google Cloud AutoML Website. https://cloud.google.com/automl .Grafberger, S., Guha, S., Stoyanovich, J., and Schelter, S. (2021). MLINSPECT: a Data DistributionDebugger for Machine Learning Pipelines. age, 20:123.Habibi, M., Starlinger, J., and Leser, U. (2020). Tabsim: A Siamese Neural Network for AccurateEstimation of Table Similarity. In 2020 IEEE International Conference on Big Data (Big Data) ,pages 930–937. IEEE.He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep Residual Learning for Image Recognition. In2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , pages 770–778.Hutter, F., Kotthoff, L., and Vanschoren, J. (2019). Automated Machine Learning: Methods, Systems,Challenges . Springer.Kotthoff, L., Thornton, C., Hoos, H. H., Hutter, F., and Leyton-Brown, K. (2017). Auto-WEKA 2.0:Automatic Model Selection and Hyperparameter Optimization in WEKA. The Journal of MachineLearning Research , 18(1).LeDell, E. and Poirier, S. (2020). H2O AutoML: Scalable Automatic Machine Learning. 7th ICMLWorkshop on Automated Machine Learning (AutoML) .Lindauer, M., Eggensperger, K., Feurer, M., Biedenkapp, A., Deng, D., Benjamins, C., Ruhkopf, T.,Sass, R., and Hutter, F. (2022). Smac3: A versatile bayesian optimization package for hyperpa-rameter optimization. Journal of Machine Learning Research , 23(54):1–9.Marvin (2020). Marvin Website. https://datadrivendiscovery.org/marvin .Olson, R. S. and Moore, J. H. (2016). TPOT: A Tree-based Pipeline Optimization Tool for AutomatingMachine Learning. In ICML AutoML Workshop , pages 66–74.Ono, J. P., Castelo, S., López, R., Bertini, E., Freire, J., and Silva, C. T. (2021). PipelineProfiler: AVisual Analytics Tool for the Exploration of AutoML Pipelines. IEEE Transactions on Visualizationand Computer Graphics , 27:390–400.Raleigh, C., Linke, A., Hegre, H., and Karlsen, J. (2010). Introducing ACLED: An Armed ConflictLocation and Event Dataset: Special Data Feature. Journal of peace research , 47(5):651–660.Santos, A., Castelo, S., Felix, C., Ono, J. P., Yu, B., Hong, S. R., Silva, C. T., Bertini, E., and Freire,J. (2019). Visus: An Interactive System for Automatic Machine Learning Model Building andCuration. 
In Proceedings of the Workshop on Human-In-the-Loop Data Analytics (HILDA) , pages1–7. Association for Computing Machinery.Sheskin, D. J. (2003). Handbook of Parametric and Nonparametric Statistical Procedures . crc Press.Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L.,Kumaran, D., Graepel, T., et al. (2017). Mastering Chess and Shogi by Self-Play with a GeneralReinforcement Learning Algorithm. Conference on Neural Information Processing Systems .Snoek, J., Rippel, O., Swersky, K., Kiros, R., Satish, N., Sundaram, N., Patwary, M. M. A., Prabhat,P., and Adams, R. P. (2015). Scalable Bayesian Optimization Using Deep Neural Networks. InProceedings of the ICML , pages 2171–2180.Trabelsi, M., Chen, Z., Zhang, S., Davison, B. D., and Heflin, J. (2022). StruBERT: Structure-awareBERT for Table Search and Matching. arXiv preprint arXiv:2203.14278 .12Turner, R., Eriksson, D., McCourt, M., Kiili, J., Laaksonen, E., Xu, Z., and Guyon, I. (2021). BayesianOptimization is Superior to Random Search for Machine Learning Hyperparameter Tuning:Analysis of the Black-Box Optimization Challenge 2020. CoRR , abs/2104.10201.Wilson, G. T. (2016). Time Series Analysis: Forecasting and Control, 5th Edition. Journal of TimeSeries Analysis , 37(5):709–711.Wistuba, M., Schilling, N., and Schmidt-Thieme, L. (2015). Learning Hyperparameter OptimizationInitializations. In 2015 IEEE international conference on data science and advanced analytics(DSAA) , pages 1–10. IEEE.13A Broader Impact StatementAlphaD3M can potentially strengthen the efforts in democratizing data science by broadening theapplication of automated predictive pipelines. Subject experts can create their own pipelines andexplore them in the context of an ethical framework. Its interoperable software infrastructureenables external auditing and improves the trust and interpretability of synthesized pipelines.The search space management mechanism also allows efficient resource allocation and helps toprototype pipelines before performing high energy-consuming model training.B Submission Checklist1. For all authors. . .(a)Do the main claims made in the abstract and introduction accurately reflect the paper’scontributions and scope? [Yes] See it mainly in Section 3 and 4.(b)Did you describe the limitations of your work? [Yes] See Section 5. We also discuss theinfeasibility of AutoML system in general, and our efforts to mitigate limitations.(c)Did you discuss any potential negative societal impacts of your work? [No] However, weadvocate for the necessity of human-in-the-loop to build trust in the generated pipelines.(d)Have you read the ethics review guidelines and ensured that your paper conforms to them?https://automl.cc/ethics-accessibility/ [Yes] Our paper follows these guidelines.2. If you are including theoretical results. . .(a)Did you state the full set of assumptions of all theoretical results? [N/A] We are not includingtheoretical results.(b)Did you include complete proofs of all theoretical results? [N/A] We are not includingtheoretical results.3. If you ran experiments. . .(a)Did you include the code, data, and instructions needed to reproduce the main experimentalresults, including all requirements (e.g., requirements.txt with explicit version), an instruc-tiveREADME with installation, and execution commands (either in the supplemental materialor as a url)? 
[Yes] We provide a link to our public GitLab repository and documentationwebpage, where users can find information about the installation and instructions to runour system. The reported evaluation was conducted by a third (independent) party in acompetition among AutoML systems, so we can not release that code.(b)Did you include the raw results of running the given instructions on the given code anddata? [Yes] See the scripts/paper_automlconference folder in our repository.(c)Did you include scripts and commands that can be used to generate the figures and tablesin your paper based on the raw results of the code, data, and instructions given? [Yes] Seethescripts/paper_automlconference folder in our repository.(d)Did you ensure sufficient code quality such that your code can be safely executed andthe code is properly documented? [Yes] Our code is well documented and follows codingstandards and best practices. We provide different Jupyter notebook examples and an APIto show how to use AlphaD3M.(e)Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixedhyperparameter settings, and how they were chosen)? [No] We do not specify allthe details.14However, some details, like the data split and search spaces are publicly available in thereferences.(f)Did you ensure that you compared different methods (including your own) exactly onthe same benchmarks, including the same datasets, search space, code for training andhyperparameters for that code? [Yes] See Section 4.1.(g)Did you run ablation studies to assess the impact of different components of your approach?[Yes] See Section 4.1.(h)Did you use the same evaluation protocol for the methods being compared? [Yes] Wepresented two comparisons (see Section 4). For the first comparison, we used the sameprotocol. For the second one, we used an existing asset and we evaluated our system usingthe same time protocol.(i)Did you compare performance over time? [No] We ran the systems during one hour, atime-bound used by others works (Erickson et al., 2020; Feurer et al., 2021), and reportedthe best score during this time.(j)Did you perform multiple runs of your experiments and report random seeds? [N/A] Wedo not perform multiple runs of our experiments.(k)Did you report error bars (e.g., with respect to the random seed after running experimentsmultiple times)? [N/A] We do not report error bars.(l)Did you use tabular or surrogate benchmarks for in-depth evaluations? [N/A] We did notuse surrogate benchmarks.(m)Did you include the total amount of compute and the type of resources used (e.g., typeofgpus, internal cluster, or cloud provider)? [No] Some of the reported evaluations wereconducted by a third party.(n)Did you report how you tuned hyperparameters, and what time and resources this required(if they were not automatically tuned by your AutoML method, e.g. in a nasapproach; andalso hyperparameters of your own method)? [N/A] The hyperparameters were automaticallytuned by our AutoML engine.4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .(a) If your work uses existing assets, did you cite the creators? [Yes] See Section 4.1.(b)Did you mention the license of the assets? [No] However, all assets are publicly availableand the licenses can be retrieved from the references.(c)Did you include any new assets either in the supplemental material or as a url? 
[Yes] Weincluded a urlto the data used in the experiments.(d)Did you discuss whether and how consent was obtained from people whose data you’reusing/curating? [N/A] The assets used in this paper are publicly available.(e)Did you discuss whether the data you are using/curating contains personally identifiableinformation or offensive content? [N/A] The data used do not contain personally identifiableinformation neither offensive content.5. If you used crowdsourcing or conducted research with human subjects. . .(a)Did you include the full text of instructions given to participants and screenshots, if appli-cable? [N/A] We did not carry out a user study.15(b)Did you describe any potential participant risks, with links to Institutional Review Board(irb) approvals, if applicable? [N/A] We did not carry out a user study.(c)Did you include the estimated hourly wage paid to participants and the total amount spenton participant compensation? [N/A] We did not carry out a user study.C Additional DetailsC.1 AlgorithmsAlgorithm 1 describes the process of building the grammar. getVectorTK andgetVectorST repre-sent the BOW and one-hot encoding functions, respectively. The best values empirically calculatedfor the thresholds tsimandtperfare 0.8 and 0.5, respectively.Algorithm 1 Grammar BuilderInput: Marvin datasets D, query dataset q, thresholdtInitializeS=[]// Similar datasetsfordiinDdosimTK =cosineSimilarity(getVectorTK(di),getVectorTK(q))ifsimTK >tsimthensimST =cosineSimilarity(getVectorST(di),getVectorST(q))ifsimST >tsimthenAddditoSInitializeP=calculateADTM(S)InitializeR=[]// Production RulesforpiinPdoifperformance(pi)>tperfthenri=convertToPattern(pi))AddritoRreturnRAlgorithm 2 describes the process of calculating the primitive importance values in detail. Forinstance, the primitive importance values calculated for XGBoost and Random Forrest are 0.62 and0.56, whereas for Nearest Centroid and K-Nearest Neighbors the values are 0.46 and 0.44. It showsthat the importance values can be used as an indicator to prioritize the usage of primitives.Algorithm 2 Primitives ImportanceInput: PipelinesP, PatternsTInitializeR=getPrimitives(P)InitializeG,L=[]// Global and Local correlationsforriinRdopc=PearsonCorrelation (ri,P)npc=normalize(pc)AddnpctoGfortiinTdopi=getPipelines(ti,P)R=getPrimitives(ti,pi)forriinRdopc=PearsonCorrelation (ri,R)npc=normalize(pc)AddnpctoLreturn(G,L)16C.2 GrammarsDifferent tasks require different grammars. For instance, the algorithms needed to solve time-series and semi-supervised classification problems have a different structure and use a differentset of primitives. Consequently, specialized grammars and production rules are needed for eachtask. Manually creating these grammars is time-consuming and error-prone, and relying on thesegrammars can limit the effectiveness of the AutoML systems with respect to problem coverage andquality of the derived pipelines.Figure 5 shows an excerpt of a grammar automatically generated in AlphaD3M to solve classi-fication problems. The start symbol ( S) is the starting point from which all the production rulescan be derived. 
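To make the derivation concrete, the sketch below encodes two of these production rules as plain R lists and expands the start symbol into one pipeline skeleton. The list encoding, the random choice among productions, and the placeholder terminal names (imputer_primitive, classifier_primitive, and so on) are illustrative only; in AlphaD3M the grammar is represented internally and the search procedure, not random sampling, decides which rule to apply.

# Two production rules from Figure 5, in a simplified list-based encoding.
grammar <- list(
  S = list(c("IMPUTATION", "FEATURE_SCALING", "CLASSIFICATION"),
           c("IMPUTATION", "FEATURE_SELECTION", "CLASSIFICATION")),
  IMPUTATION        = list("imputer_primitive", "E"),
  FEATURE_SCALING   = list("scaler_primitive", "E"),
  FEATURE_SELECTION = list("selector_primitive", "E"),
  CLASSIFICATION    = list("classifier_primitive")
)
expand <- function(symbol) {
  if (!symbol %in% names(grammar)) return(symbol)    # terminal: a primitive or 'E'
  production <- sample(grammar[[symbol]], 1)[[1]]    # pick one production (here: at random)
  unlist(lapply(production, expand))
}
set.seed(7)
pipeline <- setdiff(expand("S"), "E")  # drop empty symbols; a vector of primitive placeholders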
In the grammar, the terminal ‘primitive’ can be any of the available algorithms inAlphaD3M, and ‘E’represents the empty symbol.S ::= CATEGORICAL_ENCODER TEXT_FEATURIZER DATA_CONVERSION IMPUTATION CLASSIFICATIONS ::= TEXT_FEATURIZER CATEGORICAL_ENCODER FEATURE_SCALING IMPUTATION FEATURE_SELECTION CLASSIFICATIONS ::= IMPUTATION TEXT_FEATURIZER CATEGORICAL_ENCODER FEATURE_SCALING FEATURE_SELECTION CLASSIFICATIONS ::= IMPUTATION TEXT_FEATURIZER CATEGORICAL_ENCODER DIMENSIONALITY_REDUCTION CLASSIFICATIONS ::= DATA_STRUCTURE_ALIGNMENT IMPUTATION CLASSIFICATIONS ::= IMPUTATION FEATURE_SCALING CLASSIFICATIONS ::= IMPUTATION FEATURE_SELECTION CLASSIFICATIONS ::= IMPUTATION DIMENSIONALITY_REDUCTION CLASSIFICATIONIMPUTATION ::= 'primitive '|'E'CATEGORICAL_ENCODER ::= 'primitive '|'E'FEATURE_SCALING ::= 'primitive '|'E'FEATURE_SELECTION ::= 'primitive '|'E'DIMENSIONALITY_REDUCTION ::= 'primitive '|'E'DATA_CONVERSION ::= 'primitive 'TEXT_FEATURIZER ::= 'primitive 'DATA_STRUCTURE_ALIGNMENT ::= 'primitive 'CLASSIFICATION ::= 'primitive 'Figure 5: Excerpt of a grammar automatically generated by AlphaD3M for classification tasksIn Figure 6, you can see the manual grammar used in the experiments. This grammar wasproposed by Drori et al. (2019). To generate this grammar for classification and regression tabulartasks, a developer was asked to review manually the primitives to group them into categories. Forinstance, the primitives decision _tree.SKlearn andrandom _forest.SKlearn were grouped into thecategory ‘CLASSIFICATION’. Then, using his knowledge in ML, he created the production rules ofthe grammar using these categories.S ::= CLASSIFICATION_TASK | REGRESSION_TASKCLASSIFICATION_TASK ::= CLASSIFICATION | DATA_CLEANING CLASSIFICATION | DATA_TRANSFORMATION CLASSIFICATION |DATA_CLEANING DATA_TRANSFORMATION CLASSIFICATIONREGRESSION_TASK ::= REGRESSION | DATA_CLEANING REGRESSION | DATA_TRANSFORMATION REGRESSION |DATA_CLEANING DATA_TRANSFORMATION REGRESSIONCLASSIFICATION ::= 'primitive 'REGRESSION ::= 'primitive 'DATA_CLEANING ::= 'primitive 'DATA_CLEANING | 'E'DATA_TRANSFORMATION ::= 'primitive 'DATA_TRANSFORMATION | 'E'Figure 6: Manual GrammarC.3 ExperimentsIn Table 4, we can see the scores obtained by all AutoML systems developed in the D3M program,including a majority voting ensemble system, on a collection of 112 datasets2. 
This collection17contains challenging datasets that go beyond the simple tabular data and cover a wide variety oftasks and data types.Table 4: Scores obtained by AlphaD3M and the other AutoML systems developed in the D3M program.Dataset AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl Drori124_120_mnist_8747 0.98 0.94 0.46 0.18 0.94 0.11 - -124_138_cifar100_1858 0.67 0.48 0.42 0.12 0.48 0.01 - -124_16_fashion_mnist 0.90 0.83 0.84 0.12 0.85 0.10 - -124_174_cifar10_MIN 0.88 0.82 0.84 0.27 0.80 0.10 - -124_188_usps_MIN 0.96 0.95 0.94 0.26 0.92 0.18 0.11 -124_214_coil20_MIN 0.99 0.99 0.99 0.85 0.97 - - -124_95_uc_merced_land_use_MIN 0.90 - 0.72 0.52 - 0.05 0.33 -1491_one_hundred_plants_margin_MIN 0.80 0.79 0.88 0.92 0.75 0.83 0.81 0.831567_poker_hand_MIN 0.90 0.84 0.28 0.48 0.12 0.13 - 0.27185_baseball_MIN 0.66 0.70 0.65 0.68 0.68 0.67 0.66 0.64196_autoMpg_MIN 6.57 9.12 5.74 11.95 7.49 6.01 15.36 7.0322_handgeometry_MIN 0.24 0.23 0.23 0.14 0.80 0.36 0.36 -26_radon_seed_MIN 0.02 0.02 0.24 0.03 0.02 0.06 1.40 0.0227_wordLevels_MIN 0.32 0.28 0.28 0.32 0.29 0.27 0.26 0.27299_libras_move_MIN 0.98 - - 0.48 - - 0.98 0.9730_personae_MIN 0.62 0.65 0.65 0.62 0.61 0.55 0.61 -313_spectrometer_MIN 0.43 0.37 0.37 0.30 0.32 0.33 0.23 0.4031_urbansound_MIN 0.93 0.93 0.91 0.75 0.92 0.77 0.49 -32_fma_MIN 0.55 0.57 0.34 0.28 - 0.11 0.11 -32_wikiqa_MIN 0.00 0.02 0.14 0.13 0.50 - 0.13 -38_sick_MIN 1.00 1.00 - 1.00 - - 0.49 1.004550_MiceProtein_MIN 1.00 1.00 1.00 0.99 1.00 1.00 1.00 1.0049_facebook_MIN 0.88 0.87 0.87 0.87 0.87 0.88 0.44 -534_cps_85_wages_MIN 20.11 20.35 22.07 23.15 24.86 21.44 - 20.7056_sunspots_MIN 34.55 11.82 8.64 8.45 58.30 9.40 90.60 -56_sunspots_monthly_MIN 64.61 41.18 46.86 41.04 - 62.20 27.74 -57_hypothyroid_MIN 0.96 0.98 0.99 0.98 0.74 0.99 0.97 0.9859_LP_karate_MIN 0.93 0.45 0.83 0.83 0.45 0.45 0.93 -59_umls_MIN 0.92 0.94 0.94 0.94 0.94 0.70 0.73 -60_jester_MIN 4.25 - 4.24 4.15 - 4.51 - -66_chlorineConcentration_MIN 0.82 0.86 0.81 0.52 0.78 0.68 0.23 -6_70_com_amazon_MIN 0.85 0.85 - 0.85 0.85 - - -6_86_com_DBLP_MIN 0.72 0.72 - 0.72 0.72 - - -JIDO_SOHR_Articles_1061 0.98 0.94 0.94 0.81 0.56 0.60 0.64 -JIDO_SOHR_Tab_Articles_8569 1.00 0.99 1.00 1.00 0.56 1.00 1.00 -LL0_1100_popularkids_MIN 0.42 0.45 0.38 0.38 0.40 0.44 - 0.47LL0_186_braziltourism_MIN 0.14 0.35 0.36 0.17 0.24 0.20 0.34 0.16LL0_207_autoPrice_MIN 4.89·1065.76·1066.04·1063.76·1075.36·1065.43·1061.56·1085.81·106LL0_acled_reduced_MIN 0.83 0.88 0.89 0.84 0.91 0.85 0.74 0.91LL0_jido_reduced_MIN 0.90 0.89 0.91 0.90 0.90 0.90 - 0.90LL1_2734_CLIR 0.88 0.50 0.52 0.88 - - 0.50 -LL1_336_MS_Geolife_transport_MIN 0.60 1.00 0.99 - 0.85 - 0.98 -LL1_336_MS_Geolife_transport_separate 0.67 1.00 0.99 - 0.86 - 0.99 -LL1_3476_HMDB_actio_recognition_MIN 0.11 1.00 0.90 0.11 - 0.48 0.08 -LL1_50words_MIN 0.35 0.55 0.56 0.41 0.51 0.45 0.35 -LL1_726_TIDY_GPS_carpool 0.54 0.58 0.58 0.46 0.59 - 0.63 -LL1_736_population_spawn_MIN 1636.12 1806.40 1804.76 1644.26 - 2845.89 - -LL1_736_population_spawn_simpler_MIN 1346.10 1490.15 3669.54 1347.65 1323.72 1550.40 19887.20 -LL1_736_stock_market_MIN 7.64 1.49 8.69 1.75 - 30.66 - -LL1_ACLED_TOR_online_behavior_MIN 0.40 0.05 0.44 0.64 0.43 0.66 0.08 0.40LL1_Adiac_MIN 0.75 0.70 0.73 0.54 0.67 0.70 0.49 -LL1_ArrowHead_MIN 0.75 0.82 0.78 0.72 0.65 0.55 0.72 -LL1_CONFLICT_3457_atrocity 9.53 6.75 11.43 12.84 - 17.21 13.91 -LL1_Cricket_Y_MIN 0.52 0.54 0.59 0.52 0.62 0.53 0.45 -LL1_DIC28_net_MIN 0.84 0.80 0.80 0.80 0.80 0.84 - -LL1_ECG200_MIN 0.90 0.87 0.87 0.86 0.91 0.85 0.86 -LL1_EDGELIST_net_nomination_MIN 0.99 
0.66 0.85 0.94 0.66 0.35 0.84 -LL1_ElectricDevices_MIN 0.54 0.42 0.46 0.06 0.44 0.27 0.31 -LL1_FISH_MIN 0.80 0.87 0.89 0.73 0.84 0.86 0.78 -LL1_FaceFour_MIN 0.84 0.83 0.71 0.55 0.65 0.40 0.66 -18(Table 4: Continued from the previous page)Dataset AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl DroriLL1_GS_process_classification_tab_MIN 0.80 0.80 0.80 0.80 0.80 0.73 - 0.81LL1_GS_process_classification_text_MIN 0.65 0.80 0.65 0.80 0.80 0.76 0.80 -LL1_GT_actor_group_association_MIN 0.25 0.13 0.17 0.13 - - - -LL1_HandOutlines_MIN 0.89 0.91 0.90 0.88 0.88 0.88 0.88 -LL1_Haptics_MIN 0.43 0.42 0.44 0.42 0.41 0.45 0.42 -LL1_ItalyPowerDemand_MIN 0.93 0.95 0.95 0.95 0.95 0.91 0.90 -LL1_MIL_MUSK 0.68 0.77 0.83 0.67 0.80 0.80 - 0.72LL1_MIL_Mutagenesis 0.80 0.73 0.72 0.71 0.70 0.63 - 0.79LL1_MITLL_synthetic_vora_E_2538 0.29 0.53 0.52 0.50 0.31 0.44 - 0.38LL1_Meat_MIN 0.95 0.94 0.88 0.92 0.88 0.17 0.95 -LL1_OSULeaf_MIN 0.53 0.44 0.52 0.77 0.45 0.47 0.32 -LL1_PHEM_Monthly_Malnutrition_MIN 10.63 9.56 9.39 9.73 - 12.18 - -LL1_PHEM_weekly_malnutrition_MIN 3.34 4.32 3.45 2.94 - 4.23 4.18 -LL1_TXT_CLS_3746_newsgroup_MIN 0.60 0.46 0.55 0.48 0.60 0.45 0.23 -LL1_TXT_CLS_SST_Binary 0.73 0.82 0.82 0.55 - 0.51 0.53 -LL1_TXT_CLS_airline_opinion_MIN 0.81 0.80 0.81 0.80 0.81 0.72 0.72 -LL1_TXT_CLS_apple_products_sent_MIN 0.73 0.71 0.72 0.72 0.73 0.66 0.69 -LL1_VID_UCF11_MIN 0.99 0.99 0.25 0.27 - 0.02 0.08 -LL1_VTXC_1343_cora_MIN 0.61 0.04 0.22 0.17 0.04 0.13 0.52 -LL1_VTXC_1369_synthetic_MIN 0.95 0.22 0.33 0.21 0.22 0.19 0.48 -LL1_ViEWS_CM_S1 0.69 1.20 0.90 0.72 0.75 2.52 - 0.82LL1_ViEWS_PGM_S1 0.02 0.04 0.02 - 0.02 0.02 0.30 0.02LL1_bigearth_landuse_detection 0.90 0.96 0.76 0.65 0.21 - - -LL1_bn_fly_drosophila_medulla_net_MIN 0.24 0.24 - - - 0.19 - -LL1_h1b_visa_apps_7480 0.44 0.47 0.43 0.44 0.41 0.41 0.47 0.42LL1_net_nomination_seed_MIN 0.99 0.99 0.96 0.94 0.99 0.34 0.46 -LL1_penn_fudan_pedestrian_MIN 0.94 0.94 - 0.94 0.94 - - -LL1_retail_sales_total_MIN 1989.19 1921.54 1941.06 1966.30 1992.17 - 1971.76 2022.41LL1_terra_canopy_height_s4_100_MIN 113.04 68.44 39.02 52.21 - 79.86 343.27 -LL1_terra_canopy_height_s4_70_MIN 104.92 547.94 126.06 136.32 - 169.63 136.98 -LL1_terra_canopy_height_s4_80_MIN 112.95 92.95 32.57 74.59 - 111.49 74.54 -LL1_terra_canopy_height_s4_90_MIN 117.13 85.73 35.12 60.44 - 104.49 60.45 -LL1_terra_leaf_angle_mean_s4_MIN 0.04 0.09 0.05 0.04 - - 0.05 -LL1_tidy_terra_panicle_detection_MIN 0.01 0.03 - - - - - -SEMI_1040_sylva_prior_MIN 0.93 0.90 0.93 - 0.92 - - -SEMI_1044_eye_movements_MIN 0.52 0.57 0.61 0.55 0.60 0.53 0.54 -SEMI_1053_jm1_MIN 0.26 1.00 0.16 - 0.16 0.41 - -SEMI_1217_click_prediction_small_MIN 0.04 0.03 0.04 - 0.17 - - -SEMI_1459_artificial_characters_MIN 0.68 0.99 0.83 0.99 0.67 0.61 0.52 -SEMI_155_pokerhand_MIN 0.58 0.66 0.60 0.05 0.64 0.50 0.51 -kaggle_music_hackathon_MIN 21.88 17.56 19.64 24.24 21.79 - - 21.85loan_status_MIN 0.40 0.50 0.51 0.44 0.33 - 0.48 0.46political_instability_MIN 0.81 0.89 0.89 0.89 0.89 - 0.88 -uu1_datasmash_MIN 1.00 1.00 1.00 1.00 0.61 1.00 1.00 -uu2_gp_hyperparameter_estimation_MIN 0.89 0.88 0.57 0.89 - - - 0.89uu3_world_development_indicators_MIN 2.39·10105.54·10124.12·1012-4.40·1012- - -uu3_world_development_indicators_raw 7.83·10131.04·10125.22·1011- - - - -uu4_SPECT_MIN 0.00 0.92 0.92 0.90 0.89 0.90 0.78 -uu5_heartstatlog_MIN 0.70 0.69 0.72 0.62 0.61 0.72 0.67 -uu6_hepatitis_MIN 0.00 0.47 0.89 0.40 0.27 0.31 0.44 -uu7_pima_diabetes_MIN 0.59 0.57 0.60 0.57 0.60 0.63 0.57 -uu_101_object_categories_MIN 0.95 0.89 0.84 0.34 - 0.10 - -19The average 
rank values obtained by different AutoML systems for each task type in the D3Mdatasets can be seen in Table 5. These datasets contain a total of 17 unique ML tasks.Table 5: Average rank values by task obtained by different AutoML systems.Task AlphaD3M AutonML Ensemble Aika Distil Autoflow Axolotl DroriImage Classification 1.11 2.78 2.78 4.56 4.33 6.22 7.44 8.00Tabular Classification 3.75 3.30 3.35 3.85 4.85 4.65 5.85 3.55Tabular Regression 2.27 3.18 3.00 5.73 4.27 5.73 7.54 4.36Image Regression 4.00 2.00 2.00 1.00 7.00 5.00 5.00 8.00Text Classification 2.56 3.33 2.22 3.00 3.56 5.78 4.33 8.00Audio Classification 1.50 1.00 3.50 5.00 5.50 5.00 6.00 8.00Graph Matching 1.00 3.33 3.00 2.33 4.67 3.33 6.33 8.00Time series Forecasting 3.38 3.62 2.62 2.23 7.31 5.08 5.08 8.00Link Prediction 3.33 2.33 2.33 1.67 4.67 6.67 5.00 8.00Collaborative Filtering 3.00 8.00 2.00 1.00 8.00 4.00 8.00 8.00Time series Classification 3.26 2.26 2.16 4.68 3.79 5.32 4.53 8.00Community Detection 1.00 1.00 8.00 3.33 3.33 6.33 8.00 8.00Video Classification 2.50 1.00 3.00 3.50 8.00 4.50 5.50 8.00Vertex Classification 1.00 4.00 3.25 4.25 4.00 6.50 3.50 8.00Object Detection 1.50 1.00 8.00 4.50 4.50 8.00 8.00 8.00Semisupervised Classification 3.50 2.33 2.33 6.00 2.83 6.00 6.83 8.00LUPI 5.25 3.00 1.25 4.50 5.00 2.50 4.75 8.0020Table 6 and Table 7 show the raw and normalized scores (normalized by the best score) obtainedby each system on the 39 datasets of the OpenML AutoML Benchmark (Gijsbers et al., 2019).This benchmark represents real-world data science problems and covers binary and multiclassclassification tasks. Additionally, Table 6 shows the gain of AlphaD3M regarding the other systems.Table 6: Raw scores obtained by AlphaD3M and the other AutoML systems.Dataset AutoGluon AutoWEKA Auto-Sklearn H2O TPOT AlphaD3M Gaintask_10101 0.76 0.76 0.76 0.76 0.76 0.79 0.03task_12 0.98 0.98 0.98 0.98 - 0.96 -0.01task_146195 0.88 0.71 0.86 0.88 0.85 0.81 -0.03task_146212 1.00 1.00 1.00 1.00 1.00 1.00 0.00task_146606 0.74 0.60 0.73 0.72 - 0.73 0.03task_146818 0.91 0.86 0.84 0.90 0.87 0.87 -0.01task_146821 0.99 1.00 1.00 1.00 1.00 0.97 -0.03task_146822 0.97 0.97 0.97 0.97 0.98 0.97 0.00task_146825 0.91 - 0.91 0.90 - 0.86 -0.05task_14965 0.91 0.88 0.91 0.91 0.91 0.91 0.00task_167119 0.92 0.80 0.94 0.96 0.90 0.83 -0.08task_167120 0.51 0.51 0.51 0.51 - 0.51 -0.00task_168329 0.40 0.27 0.38 0.35 0.35 0.37 0.02task_168330 0.73 0.65 0.73 0.73 0.70 0.72 0.01task_168331 0.73 0.62 0.73 0.69 0.66 0.66 -0.02task_168332 0.56 - 0.54 0.51 0.44 0.41 -0.10task_168335 0.94 - 0.94 - 0.93 0.94 -0.00task_168337 0.84 - 0.86 0.83 0.77 0.61 -0.21task_168338 1.00 - 1.00 1.00 0.99 0.97 -0.03task_168868 0.99 0.99 0.99 1.00 0.99 0.99 0.00task_168908 0.74 0.73 0.76 0.72 - 0.77 0.03task_168909 0.99 0.96 0.99 0.98 - 0.99 0.01task_168910 0.72 0.60 0.72 0.72 0.71 0.65 -0.04task_168911 0.81 0.82 0.82 0.82 0.81 0.81 -0.01task_168912 0.93 0.92 0.95 0.95 0.95 0.94 -0.00task_189354 0.67 - 0.67 0.61 0.67 0.65 -0.01task_189355 0.94 - 0.00 - - 0.88 0.41task_189356 0.71 - 0.69 - - - -task_3 0.99 0.93 0.99 1.00 0.99 0.99 0.01task_31 0.77 0.66 0.82 - 0.82 0.77 0.00task_34539 0.95 - 0.95 0.95 0.95 0.95 -0.01task_3917 0.87 - 0.86 - 0.88 0.86 -0.01task_3945 0.98 - 0.98 0.98 0.98 0.98 0.00task_53 0.86 0.67 0.85 0.88 - 0.82 0.01task_7592 0.87 0.87 0.87 0.86 0.87 0.87 0.00task_7593 0.97 0.66 0.96 0.80 - 0.95 0.10task_9952 0.88 0.91 0.90 0.90 0.91 0.91 0.01task_9977 0.98 0.95 0.97 0.98 0.97 0.96 -0.00task_9981 0.94 0.86 0.96 0.94 0.96 0.94 0.0121Table 7: Normalized scores obtained by 
AlphaD3M and the other AutoML systems.Dataset AutoGluon AutoWEKA Auto-Sklearn H2O TPOT AlphaD3Mtask_10101 0.97 0.97 0.97 0.97 0.97 1.00task_12 0.99 1.00 0.99 0.99 - 0.98task_146195 1.00 0.81 0.98 1.00 0.97 0.92task_146212 1.00 1.00 1.00 1.00 1.00 1.00task_146606 1.00 0.82 1.00 0.98 - 0.99task_146818 1.00 0.94 0.92 0.98 0.95 0.95task_146821 0.99 1.00 1.00 1.00 1.00 0.97task_146822 1.00 0.99 1.00 1.00 1.00 1.00task_146825 1.00 - 0.99 0.99 - 0.94task_14965 1.00 0.96 1.00 1.00 1.00 1.00task_167119 0.96 0.83 0.98 1.00 0.94 0.86task_167120 1.00 1.00 1.00 0.99 - 0.99task_168329 1.00 0.69 0.96 0.88 0.89 0.94task_168330 1.00 0.89 1.00 1.00 0.97 0.98task_168331 1.00 0.84 1.00 0.95 0.90 0.91task_168332 1.00 - 0.98 0.93 0.80 0.75task_168335 1.00 - 1.00 - 0.99 0.99task_168337 0.98 - 1.00 0.97 0.89 0.71task_168338 1.00 - 1.00 1.00 0.99 0.97task_168868 1.00 0.99 1.00 1.00 1.00 1.00task_168908 0.97 0.96 0.99 0.94 - 1.00task_168909 1.00 0.97 1.00 0.99 - 1.00task_168910 1.00 0.83 1.00 1.00 0.98 0.90task_168911 0.99 1.00 1.00 1.00 0.99 0.98task_168912 0.98 0.97 0.99 1.00 1.00 0.98task_189354 1.00 - 1.00 0.91 1.00 0.96task_189355 1.00 - 0.00 - - 0.94task_189356 1.00 - 0.97 - - -task_3 1.00 0.94 1.00 1.00 1.00 1.00task_31 0.94 0.80 1.00 - 1.00 0.94task_34539 1.00 - 1.00 1.00 0.99 0.99task_3917 0.99 - 0.98 - 1.00 0.98task_3945 1.00 - 1.00 0.99 1.00 1.00task_53 0.97 0.76 0.96 1.00 - 0.93task_7592 1.00 0.99 1.00 0.99 1.00 1.00task_7593 1.00 0.68 0.99 0.82 - 0.97task_9952 0.96 0.99 0.98 0.98 1.00 0.99task_9977 1.00 0.97 1.00 1.00 1.00 0.99task_9981 0.98 0.89 1.00 0.98 1.00 0.9822 |
forester: A Novel Approach to Accessible and Interpretable AutoML for Tree-Based Modeling

Anna Kozak, Hubert Ruczyński
Warsaw University of Technology

Abstract The majority of AutoML solutions are developed in Python. However, a large percentage of data scientists work with the R language. Unfortunately, the available R solutions are limited and have a high entry level, which means they are not accessible to everyone. To fill this gap, we present the forester package, which offers ease of use regardless of the user's proficiency in the area of machine learning. The forester package is an open-source AutoML package implemented in R, designed for training high-quality tree-based models on tabular data. It supports regression and binary classification tasks. A single line of code allows the use of unprocessed datasets, informs about potential issues concerning them, and handles feature engineering automatically. Moreover, hyperparameter tuning is performed by Bayesian optimization, which provides high-quality outcomes. The results are later served as a ranked list of models. Finally, the forester package offers a vast training report, including the ranked list, a comparison of trained models, and explanations for the best one.

1 Introduction

Machine learning is being used more and more in the world around us. Every day, models are created to assist doctors (Shimizu and Nakayama, 2020), financiers (Jorge et al., 2022), or tourists (Fararni et al., 2021). With the increasing demand for model building, research is being conducted on automatically developing tools to build artificial intelligence-based solutions.

Many types of models are used in machine learning, ranging from decision rules (e.g., scorecard models) to complex neural network structures modeling natural language (large language models, for example, ChatGPT (Bavarian et al., 2022)). Viewing machine learning in terms of tabular data, we have a wide range of models available, from decision trees and linear or logistic regression to random forests, SVMs, or neural networks. However, tree-based models are the most widely used; the main reason behind this is their high predictive efficiency. A simple decision tree model gives relatively satisfactory results, but using multiple trees to create a random forest allows significantly higher predictive power (Caruana et al., 2008; Grinsztajn et al., 2022).

Automating the process of building machine learning models can include many different components. For example, the CRoss Industry Standard Process for Data Mining (CRISP-DM) (Wirth and Hipp, 2000) is the most common methodology for data mining, analytics, and data science projects. But the basic framework of an automatic machine learning system is the preparation of models based on data entered by the user. This process can be extended in various directions; for example, a preliminary analysis of the given data can be performed to look for potential data errors or outlier observations, i.e., exploratory data analysis. Another essential element may be the search space of the model's hyperparameters. Optimization of hyperparameters can be based on simple methods such as a predefined parameter grid or random search. Another way to select hyperparameters is to use Bayesian optimization (Snoek et al., 2012) or meta-learning (Vilalta et al., 2004; Vanschoren, 2019; Woźnica and Biecek, 2022).
After tuning the models with hyperparameter optimization, the next step we can add is to analyze the results in the form of a leaderboard or visualization. By extending the process with explanatory methods (Biecek and Burzykowski, 2021) and reporting, the entire machine learning pipeline can be finalized.

Automating the process of machine learning allows access to data science tools for people who are starting out in data analysis and modeling. At the same time, it improves and speeds up the work of experienced data scientists, who can build at least baseline models using a single line of code.

In this paper, we present an AutoML package written for R (R Core Team, 2022) that creates models for regression and binary classification tasks on tabular data. The main goals of the package are: making the package easy to use, fully automating all the necessary steps inside the ML pipeline, and providing results that are easy to create, understand, and use for model diagnostics. The availability of responsible machine learning methods in the solution allows the results of complex models to be interpreted. Changing the focus from obtaining the best possible outcomes to the interpretability of the results is a novelty for AutoML tools. The implementation of the forester package can be found in our GitHub repository (https://github.com/ModelOriented/forester). The software is open source and contains comprehensive documentation with examples of use.

2 Related works

Packages for AutoML are prevalent in Python. The first AutoML solution, Auto-WEKA (Thornton et al., 2013), was followed by Auto-Sklearn (Feurer et al., 2015, 2022) and TPOT (Tree-Based Pipeline Optimization Tool) (Olson et al., 2016), one of the very first AutoML methods and open-source software packages developed for the data science community in Python. But in R, there are few approaches. One of them is the H2O package (LeDell et al., 2022). It is an open-source library that is an in-memory, distributed, fast, and scalable machine learning and predictive analytics platform that creates a ranked list of models easily exported for use in a production environment. The authors have created an easy-to-use interface that automates the training of multiple candidate models. H2O's AutoML is also designed for more advanced users by providing a simple wrapper function that performs many modeling tasks. H2O's AutoML process automatically trains and tunes models within a user-specified time budget. To better understand the quality of models in H2O, we can rely on metrics such as R2 and mean square error (MSE). For comparison, in the forester package, we can compare models using the most commonly used metrics or even define a new custom metric. What particularly distinguishes the forester package from H2O is the preprocessing. In the latter's case, it only includes target encoding and is in the experimental stage. In the forester package, we have more accurate and extensive preprocessing. In addition, H2O always requires Java to work, so the user must also install it.

The second widely used framework is the mlr3 package (Lang et al., 2019), which provides a framework for classification, regression, survival analysis, and other ML tasks such as cluster analysis. It provides the ability to perform hyperparameter tuning and feature selection. The package is well-documented, contains many functions and models, and provides many capabilities.
However, it is different from a typical AutoML package, as creating models requires knowledge of how to do it and some time to assemble such a model. It also has its drawbacks, such as the need for additional preprocessing that would make it easier to use; for example, the XGBoost model accepts only numerical data without factors. There is also no way to divide the dataset into training, testing, and validation subsets. The mlr3 package provides functionality that builds on the basic components of machine learning. It can be extended to include preprocessing, pipelining, visualization, additional learners, additional task types, and more. To obtain these capabilities, however, we need to install many other libraries. In the forester package, we provide these components at once, and with a single function, we can perform preprocessing, prepare visualizations of the results, and generate a report. A more detailed comparison of the forester package with H2O and mlr3 is presented in Appendix F.

Figure 1: A diagram presenting the forester pipeline. The forester analyses poor-quality data with the in-built data check (1), which points to possible issues, and later data preparation (2) handles them during the preprocessing. In the next step, the models are trained with default and random-searched parameters and tuned with a Bayesian optimization algorithm (3). In the end, trained models are evaluated (4) and presented as a ranked list. In addition, the package offers the user additional features.

3 forester AutoML

The forester is an AutoML package automating the machine learning pipeline, starting from the data preparation, through model training, to the interpretability of the results. This way, we minimize the time the user spends on basic and often repetitive activities related to the machine learning process. Despite the high automation of the pipeline shown in Figure 1, we expose multiple parameters which advanced data scientists can use to customize the model creation. The whole package relies on the four pillars described in this section.

1. Data check
The first pillar, called data check, concerns the data preparation phase. Data preparation is a crucial part of the modeling process (Rutkowski et al., 2010), so we cannot blindly assume a single way of transforming the data for all cases. Appropriate data preprocessing is crucial to building a model with a small error rate. To face that issue, we introduce a data check report summarizing the dataset with some basic information and pointing out possible problems. Data problems can affect the following modeling stages and be relevant to any model. The data check report points out id-like, duplicated, static, or highly correlated columns. Moreover, it points out the outliers, missing values, and the imbalance of the target. This way we can propose some simple heuristic data preprocessing methods, yet more advanced users are able to fight the issues mentioned by studying the data check report on their own.

2. Data preparation
Preparing the data for modeling is another crucial aspect after checking the data.
It can bedone using a dedicated tool, but the forester package offers two general-purpose preprocessingmethods, basic and advanced. The main purpose of this function is to remove the need toprepare data manually differently for different types of models. The basic preparation consistsof the actions that are necessary for the package to work that is: the removal of static columns,binarization of the target variable, and imputation of the missing data using the MICE algorithm(Buuren and Groothuis-Oudshoorn, 2011). The advanced method additionally includes theremoval of id-like columns (features suspected of being id), removal of highly correlated columns(Spearman’s rank for the numerical features, and Crammer’s V rank for categorical features) aswell as feature selection with the BORUTA algorithm (Kursa and Rudnicki, 2010). Additionally,every model in the forester package requires a different data format which is also prepared insidethe main function.3.Model training and tuningTheforester package’s third and most important pillar is model training and tuning. Our solutionfocuses on the tree-based model family because of their high-quality performance for varioustabular data tasks. We’ve limited ourselves to 5 well-known engines with different strong andweak points, so they complement each other.We have included the basic decision tree from partykit package (Hothorn and Zeileis, 2015)as an extremely light engine, but mostly, we have focused on the ensemble models. The onlybagging representative is the random forest from the ranger package (Wright and Ziegler, 2017),which is reluctant to overfit.We have also considered three different boosting algorithms. The XGBoost model (Chen andGuestrin, 2016) is highly effective, but due to the need for one hot encoding, it suffers from theabundance of categorical features. However, the LightGBM model (Ke et al., 2017), which worksbest for medium and large datasets, has problems with the small ones. The last engine is theCatBoost (Prokhorenkova et al., 2018) which can achieve superior performance but requires theJava environment installed, which is a minor inconvenience.The models are trained with three approaches: using the default parameters, performing therandom search algorithm within the predefined parameter space, and running an advancedBayesian Optimization algorithm for fine-grained tuning. The first method is the baselinefor other models. With the second one, we can cheaply create multiple models and explorevarious parameter combinations. The best and most time-consuming method is the BayesianOptimization from the ParBayesianOptimization package. However, it is extremely useful forcomplex tasks.4.Model evaluationThe last pillar is the automatic evaluation of the trained models. The forester package assessesevery trained model by various metrics, such as accuracy, area under the receiver operatingcharacteristic curve (AUC), and F1 for the binary classification tasks, and Root Mean SquaredError (RMSE), Mean Absolute Error (MAE), or R2for the regression tasks. The results are laterpresented as a ranked list sorted by the outcomes (for example, ascending order for RMSE, anddescending for AUC). Moreover, the user can define their metrics and provide them for theevaluation phase.4forester featuresOne of the most important goals for the forester package is the convenience of use and helping theusers to focus more on analyzing the results instead of writing the code. 
To obtain such a user-friendly environment, the forester offers plenty of additional features useful for data scientists.44.1 Model explanationsIn recent years, interpretable machine learning has become a significant trend in machine learning.The tools providing interpretability such as DALEX (Biecek, 2018) or iml(Molnar et al., 2020)allow data scientists to explain how the models they create work, making it easier to detecttheir misbehavior. Models’ explainability also enhances trust in such tools, even in demandingenvironments like medical researchers. To support using explainable methods for the modelstrained by the forester , we have created a wrapper for the DALEX explainer compatible with ourpackage. This way, the user can easily create various explanations for the trained models.4.2 Saving the outcomesAnother crucial feature is the save function, which lets the user save the training output. Returnedforester object contains lots of information, such as preprocessed dataset, split datasets, split indexes,ranked lists for training, testing, and validation datasets, the predictions of the model, and muchmore. The abundance of objects makes it incredibly important to save the outcomes after thetime-consuming training process.4.3 Automated reportLast but not least, our solution offers an automatically generated report that helps users quicklyand easily analyze the training results. The main goal of this feature is to ensure that every useris able to easily assess the quality of the trained models. The report consists of basic informationabout the dataset, a data check report, a ranked list of the best ten models, and visualizationsconcerning model quality. An example report for the blood-transfusion-service-center dataset (fromthe OpenML-CC18 benchmark (Bischl et al., 2021)) is provided in Appendix G.The plots are divided into two groups; the first one compares the outcomes of different models,which helps to decide which model is the best. For example, guided by the radar chart comparisonplot, we can choose the model with slightly worse accuracy, but better AUC and F1 values.The second type of plots concentrates on the model with the best performance, and its mostprominent feature is providing a feature importance plot. This visualization lets us understandwhich variables are the most important for the model; thus, we can evaluate its correctness.It is worth noticing that the reports, mostly visualizations, are different for binary classificationand regression tasks as we measure their performance differently.5 User interface5.1 Training functionThe forester ’s main train() function runs the entire AutoML pipeline, including the data prepa-ration, model training, and evaluation. To keep the package as simple as possible, the functionrequires only the dataset and target column name (Listing 1); however, to keep the tool versatile,there are lots of custom parameters for more advanced users (Listing 2). 
With the latter option, the user can specify the number of Bayesian optimization iterations, the number of random search evaluations, and the proportions of the train, test, and validation subsets, change the preprocessing methods, or even add their own evaluation metric.

train_output <- train(data = lisbon, y = 'Price')
Listing 1: Training models with the forester package and default parameters.

train_output <- train(data = lisbon,
                      y = 'Price',
                      verbose = TRUE,
                      engine = c('ranger', 'xgboost', 'decision_tree', 'lightgbm', 'catboost'),
                      train_test_split = c(0.6, 0.2, 0.2),
                      bayes_iter = 10,
                      random_evals = 3,
                      advanced_preprocessing = FALSE,
                      metrics = 'auto',
                      sort_by = 'auto',
                      metric_function = NULL,
                      metric_function_name = NULL,
                      metric_function_decreasing = TRUE,
                      best_model_number = 5)
Listing 2: Training models with the forester package and custom parameters.

5.2 Extensive features

Apart from the train() function, the user can utilize additional functions, which is helpful during the modeling process. The check_data() function (Listing 3) enables printing a data check report outside of the train() function. The save() function (Listing 4) lets us save the outcome of the training process, whereas the report() function (Listing 5) creates a training report. The last extension is the explain() function (Listing 6), which creates a DALEX explainer that can be used to generate multiple visualizations concerning model interpretability with the DALEX package.

check_data(data = `blood-transfusion-service-center`, y = 'Class')
Listing 3: Generating a data check report.

save(train_output, name = 'train_output.RData')
Listing 4: Saving the train output.

report(train_output, 'report.pdf')
Listing 5: Generating a report from the train output.

exp <- explain(models = train_output$best_models[[1]],
               test_data = train_output$data,
               y = train_output$y,
               verbose = FALSE)
Listing 6: Creating a model explainer that lets us use functions from the DALEX package.

6 Performance

To evaluate the performance of the package, we've decided to compare it to the H2O framework on the binary classification tasks from the OpenML-CC18 benchmark (Bischl et al., 2021) and regression tasks from OpenML (Vanschoren et al., 2013). Due to the limited computational resources, we have chosen a subset of 8 datasets for classification and 7 for regression, described in Table 1 and Table 2, respectively. The binary classification datasets consisted mainly of categorical variables and contained many missing values, a significant obstacle for both solutions, whereas the regression tasks had no missing values and mostly numeric or binary features.

During the experiment, we trained the forester package three times for each dataset with random seeds provided for the data splitting function inside the forester. The same splits were later used for the H2O framework. A single training iteration was executed for the decision tree, random forest, LightGBM, and CatBoost engines with ten iterations of the Bayesian optimization and ten random search evaluations. For the regression task we additionally added an XGBoost engine. To ensure that both frameworks had the same amount of time, we measured it for every forester training iteration and provided it to the respective H2O AutoML runs. This H2O functionality did not work as expected, and in the end this framework had, on average, twice as long a training time. This factor definitely improved H2O's results, and we have to bear that in mind when comparing the outcomes.
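For concreteness, one classification repetition of this protocol corresponds roughly to the call sketched below, assembled from the arguments shown in Listings 1–5. The blood-transfusion-service-center object is the same example dataset as in Listing 3; how the split seed is passed to the splitting function is not shown in the listings, so set.seed() appears here only as a stand-in.

# One illustrative training run of the benchmark protocol (classification case).
# Argument names follow Listing 2; the seed handling is a stand-in, not the exact mechanism.
library(forester)
set.seed(123)  # stand-in for the per-repetition split seed
run <- train(data = `blood-transfusion-service-center`,
             y = 'Class',
             engine = c('decision_tree', 'ranger', 'lightgbm', 'catboost'),
             bayes_iter = 10,
             random_evals = 10)
save(run, name = 'btsc_run_seed123.RData')
report(run, 'btsc_run_seed123.pdf')

The regression runs differ only in the dataset and target column and in adding 'xgboost' to the engine vector.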
For further details see Appendix E. Additionally, to ensure the same data split, we have used the indexes saved during the forester training. The source codes are included in Appendix A.

The comparison of performance for both frameworks is presented in Figure 2 and Figure 3. For the raw results, as well as aggregated tabular ones, see Appendix C. As one can see, for the binary classification task, the forester outperformed the H2O framework on five datasets: banknote-authentication, blood-transfusion-service-center, credit-approval, credit-g, and diabetes. The outcomes for the very simple datasets kr-vs-kp and breast-w were similar, and H2O obtained better performance for the phoneme data. For the regression tasks, the results were comparable to H2O's for most tasks or slightly worse, as for the pol dataset. The results show that the forester creates high-quality models that are competitive with the existing solutions.

However, our conclusions cannot be too far-fetched, since we tested the package on only a few datasets for the binary classification and regression tasks. We cannot say that the forester package's predictive power is better than H2O's, but the two are clearly competitive.

Table 1: A subset of OpenML-CC18 benchmark datasets used during the evaluation process of the forester package, which are tabular data objects presenting binary classification tasks. The features are mostly categorical, and they contain lots of missing values.

Name                              Number of columns  Number of rows
kr-vs-kp                          37                 3196
breast-w                          10                 699
credit-approval                   16                 690
credit-g                          21                 1000
diabetes                          9                  768
phoneme                           6                  5404
banknote-authentication           5                  1372
blood-transfusion-service-center  5                  748

Table 2: A subset of OpenML datasets used during the evaluation process of the forester package, which are tabular data objects presenting regression tasks. In this case there were no missing values, and the features were mostly numerical or binary.

Name                                 Number of columns  Number of rows
bank32nh                             33                 8192
wine_quality                         12                 6497
Mercedes_Benz_Greener_Manufacturing  378                4209
kin8nm                               9                  8192
pol                                  49                 15000
2dplanes                             11                 40768
elevators                            19                 16599

[Figure 2 plot: accuracy of forester and H2O per dataset, split into train, valid, and test subsets, for the binary classification task.]

Figure 2: Performance comparison for forester and H2O frameworks for the datasets described in Table 1. Every experiment is conducted 3 times, which results in three observations visible on the plot for each dataset. Note that in some cases the dots might overlap. This plot clearly shows us that the forester performs better than the H2O package on the provided tasks, which confirms that it is a highly competitive framework.

[Figure 3 plot: RMSE of forester and H2O per dataset, split into train, valid, and test subsets, for the regression task.]

Figure 3: Performance comparison for forester and H2O frameworks for the datasets described in Table 2. Every experiment is conducted 3 times, which results in three observations visible on the plot for each dataset. Note that in some cases the dots might overlap.
This plot showsus that the forester performs comparably to the H2O package on the provided tasks, whichconfirms that it is a highly competitive framework.7 Limitations and Broader Impact StatementThe forester package has limitations in the availability of models. The library contains only tree-based models, but this family proves to be extremely versatile. Only binary classification andregression are available in the current version of the package. Preparing models for multi-criteriaclassification, cluster analysis, or survival analysis is currently impossible. However, these featurescan be easily implemented in the future. The package currently performs better with smallerdatasets; a large allocation of memory and time is needed for large and complex data.8One of the strongest points of the forester package is being incredibly easy to use, even if we donot have broad machine learning expertise. This approach, however, raises the risk that the modelstrained with the package will be of poor quality, for example, due to the training on a low-qualitydataset, or that the outcomes will be misunderstood or incorrectly interpreted by the inexperienceduser. The reporting module addresses all of these responsible machine learning concerns, whichinforms about possible issues with the data, measures the quality of the models, and provides theirexplanations.8 ConclusionsThis paper presents an R package for AutoML, creating models for regression and binary classifica-tion tasks conducted on tabular data. Our solution addresses the needs we have observed in AutoMLtools in various programming languages. The main goals of the package are to keep the packagestable and easy to use, to automate all the necessary steps inside the ML pipeline, and to provideresults that are easy to create, understand and allow for diagnostics of the models. To achieve theseresults, we have focused only on the best representatives from the family of tree-based modelsthat show superiority over other methods on tabular data. Furthermore, we provide additionalfunctions that allow the user to save the models, create explanations and create a report describingthe learning process and explaining the developed models. Experiments carried out tentativelyindicate that more predictive power is obtained using our solution than currently existing solutionsin R.9 Submission Checklist1. For all authors. . .(a)Do the main claims made in the abstract and introduction accurately reflect the paper’scontributions and scope? [Yes] We introduced the forester package and described itspotential. The Section 3 and Section 4 describe the various features.(b) Did you describe the limitations of your work? [Yes] See Section 7.(c)Did you discuss any potential negative societal impacts of your work? [Yes] See Section 7.(d)Have you read the ethics author’s and review guidelines and ensured that your paperconforms to them? https://automl.cc/ethics-accessibility/ [Yes] We believe thatour paper conforms to the guidelines.2. If you are including theoretical results. . .(a)Did you state the full set of assumptions of all theoretical results? [N/A] We have notheoretical results.(b)Did you include complete proofs of all theoretical results? [N/A] We have no theoreticalresults.3. If you ran experiments. . 
.(a)Did you include the code, data, and instructions needed to reproduce the main experimentalresults, including all requirements (e.g., requirements.txt with explicit version), an in-structive README with installation, and execution commands (either in the supplementalmaterial or as a url)? [Yes] See Appendix A.(b)Did you include the raw results of running the given instructions on the given code anddata? [Yes] The most important results analyzed in this paper are presented or mentioned(via a link) in the Appendix C.9(c)Did you include scripts and commands that can be used to generate the figures and tables inyour paper based on the raw results of the code, data, and instructions given? [Yes] The codeis available on the package’s GitHub repository in the form of R Markdown notebook, seeAppendix A.(d)Did you ensure sufficient code quality such that your code can be safely executed andthe code is properly documented? [Yes] The code is available on the package’s GitHubrepository in the form of R Markdown notebook, see Appendix A.(e)Did you specify all the training details (e.g., data splits, pre-processing, search spaces,fixed hyperparameter settings, and how they were chosen)? [Yes] The training details arementioned in the main paper Section 6, as well as in the source code described in AppendixA.(f)Did you ensure that you compared different methods (including your own) exactly onthe same benchmarks, including the same datasets, search space, code for training andhyperparameters for that code? [Yes] The methods were compared on the same train, test,and validation subsets, and the hyperparameter search space was the default one for eachAutoML framework.(g)Did you run ablation studies to assess the impact of different components of your approach?[No] The package at this point is pretty straightforward and doesn’t contain many com-ponents that could alter the outcomes. A possible ablation study could be applied to theadvanced preprocessing method, however, we did not have enough computational powerfor running the benchmark again.(h)Did you use the same evaluation protocol for the methods being compared? [Yes] The modelswere compared by the same metrics for classification: accuracy, AUC and F1 and forregression: RMSE, MSE, R2i MAE.(i)Did you compare performance over time? [No] We did not have enough resources formultiple experiments executions.(j)Did you perform multiple runs of your experiments and report random seeds? [Yes]As described in the Section 6, we’ve performed three runs of the forester and H2O trainingwith the random seeds set for the train, test, and validation splits as the values 123, 2137,and 21.(k)Did you report error bars (e.g., with respect to the random seed after running experimentsmultiple times)? [N/A] We do not have error bars on the visualizations, but we provideexact values without any statistical aggregations.(l)Did you use tabular or surrogate benchmarks for in-depth evaluations? [Yes] We useda tabular benchmark consisting of 8 datasets describing the binary classification tasks fromthe OpenML-CC18 benchmark, as described in Section 6.(m) Did you include the total amount of compute and the type of resources used (e.g., type ofgpus, internal cluster, or cloud provider)? [Yes] See Appendix B.(n)Did you report how you tuned hyperparameters, and what time and resources this required(if they were not automatically tuned by your AutoML method, e.g. in a nasapproach; andalso hyperparameters of your own method)? 
[N/A] During the experiments, all computa-tions were conducted by the AutoML frameworks, and no additional tuning was included.4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .(a)If your work uses existing assets, did you cite the creators? [Yes] A full list of the citedpapers/tools is described in the references.10(b)Did you mention the license of the assets? [Yes] Used assets, mostly R packages, aredescribes in the Appendix D.(c)Did you include any new assets either in the supplemental material or as a url? [Yes]The forester package is a new asset https://github.com/ModelOriented/forester .(d)Did you discuss whether and how consent was obtained from people whose data you’reusing/curating? [Yes] See Section 6, we are using OpenML-CC18 and its data. We cited alldata sources according to the guidelines of datasets on OpenML (and in OpenML-CC18).(e)Did you discuss whether the data you are using/curating contains personally identifiableinformation or offensive content? [N/A] Our data does not contain personally identifiableinformation or offensive content.5. If you used crowdsourcing or conducted research with human subjects. . .(a)Did you include the full text of instructions given to participants and screenshots, if appli-cable? [N/A] We did not do research with human subjects.(b)Did you describe any potential participant risks, with links to Institutional Review Board(irb) approvals, if applicable? [N/A] We did not do research with human subjects.(c)Did you include the estimated hourly wage paid to participants and the total amount spenton participant compensation? [N/A] We did not do research with human subjects.Acknowledgements. We would like to thank Adrianna Grudzień and Patryk Słowakiewicz for theirdevelopment work on the forester package. We also thank Katarzyna Woźnica, Hubert Baniecki,Mikołaj Spytek, and Mateusz Krzyziński for their valuable comments about the study.ReferencesBavarian, M., Jun, H., Tezak, N., Schulman, J., McLeavey, C., Tworek, J., and Chen, M. (2022).Efficient training of language models to fill in the middle. arXiv preprint arXiv:2207.14255 .Biecek, P. (2018). DALEX: Explainers for Complex Predictive Models in R. Journal of MachineLearning Research , 19(84):1–5.Biecek, P. and Burzykowski, T. (2021). Explanatory Model Analysis . Chapman and Hall/CRC, NewYork.Bischl, B., Casalicchio, G., Feurer, M., Gijsbers, P., Hutter, F., Lang, M., Mantovani, R. G., van Rijn,J. N., and Vanschoren, J. (2021). OpenML benchmarking suites. In Thirty-fifth Conference onNeural Information Processing Systems Datasets and Benchmarks Track (Round 2) .Buuren, S. and Groothuis-Oudshoorn, C. (2011). MICE: Multivariate Imputation by ChainedEquations in R. Journal of Statistical Software , 45.Caruana, R., Karampatziakis, N., and Yessenalina, A. (2008). An empirical evaluation of supervisedlearning in high dimensions. Proceedings of the 25th International Conference on Machine Learning ,pages 96–103.Chen, T. and Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. In Proceedings of the22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD ’16,page 785–794.11Fararni, K. A., Nafis, F., Aghoutane, B., Yahyaouy, A., Riffi, J., and Sabri, A. (2021). Hybrid recom-mender system for tourism based on big data and AI: A conceptual framework. Big Data Miningand Analytics , 4(1):47–55.Feurer, M., Eggensperger, K., Falkner, S., Lindauer, M., and Hutter, F. (2022). 
Auto-Sklearn 2.0:Hands-free AutoML via Meta-Learning. Journal of Machine Learning Research , 23(261):1–61.Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficientand robust automated machine learning. In Advances in Neural Information Processing Systems ,volume 28.Grinsztajn, L., Oyallon, E., and Varoquaux, G. (2022). Why do tree-based models still outperformdeep learning on typical tabular data? In Thirty-sixth Conference on Neural Information ProcessingSystems Datasets and Benchmarks Track .Hothorn, T. and Zeileis, A. (2015). partykit: A Modular Toolkit for Recursive Partytioning in R.Journal of Machine Learning Research , 16(118):3905–3909.Jorge, C. C., Antonio, O. A. J., Hugo, G. M. V., and Hugo, O. P. D. (2022). Machine Learning forPersonal Credit Evaluation: A Systematic Review. WSEAS TRANSACTIONS ON COMPUTERRESEARCH , 10:62–73.Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: AHighly Efficient Gradient Boosting Decision Tree. In Advances in Neural Information ProcessingSystems , volume 30.Kursa, M. B. and Rudnicki, W. R. (2010). Feature Selection with the Boruta Package. Journal ofStatistical Software , 36(11):1–13.Lang, M., Binder, M., Richter, J., Schratz, P., Pfisterer, F., Coors, S., Au, Q., Casalicchio, G., Kotthoff,L., and Bischl, B. (2019). mlr3: A modern object-oriented machine learning framework in R.Journal of Open Source Software , 4(44):1903.LeDell, E., Gill, N., Aiello, S., Fu, A., Candel, A., Click, C., Kraljevic, T., Nykodym, T., Aboyoun, P.,Kurka, M., and Malohlava, M. (2022). h2o: R Interface for the ’H2O’ Scalable Machine LearningPlatform . R package version 3.38.0.1.Molnar, C., Casalicchio, G., and Bischl, B. (2020). Interpretable machine learning – a brief history,state-of-the-art and challenges. In ECML PKDD 2020 Workshops , pages 417–431.Olson, R. S., Bartley, N., Urbanowicz, R. J., and Moore, J. H. (2016). Evaluation of a Tree-basedPipeline Optimization Tool for Automating Data Science. In Proceedings of the Genetic andEvolutionary Computation Conference 2016 , GECCO ’16, pages 485–492.Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., and Gulin, A. (2018). CatBoost:unbiased boosting with categorical features. In Advances in Neural Information ProcessingSystems , volume 31.R Core Team (2022). R: A Language and Environment for Statistical Computing . R Foundation forStatistical Computing, Vienna, Austria.Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L., and Zurada, J. (2010). Artificial Intelligenceand Soft Computing, Part II: 10th International Conference, ICAISC 2010 .12Shimizu, H. and Nakayama, K. I. (2020). Artificial intelligence in oncology. Cancer Science ,111(5):1452–1460.Snoek, J., Larochelle, H., and Adams, R. P. (2012). Practical bayesian optimization of machinelearning algorithms. In Advances in Neural Information Processing Systems , volume 25.Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). Auto-WEKA: Combinedselection and hyperparameter optimization of classification algorithms. In Proceedings of the 19thACM SIGKDD international conference on Knowledge discovery and data mining , pages 847–855.Vanschoren, J. (2019). Meta-Learning , pages 35–61. Springer International Publishing, Cham.Vanschoren, J., van Rijn, J. N., Bischl, B., and Torgo, L. (2013). Openml: networked science inmachine learning. SIGKDD Explorations , 15(2):49–60.Vilalta, R., Giraud-Carrier, C., Brazdil, P., and Soares, C. (2004). 
Using meta-learning to supportdata mining. International Journal of Computer Science Applications , 1.Wirth, R. and Hipp, J. (2000). CRISP-DM: Towards a standard process model for data mining.Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discoveryand Data Mining .Woźnica, K. and Biecek, P. (2022). Towards explainable meta-learning. In Machine Learning andPrinciples and Practice of Knowledge Discovery in Databases: International Workshops of ECMLPKDD 2021, Virtual Event, September 13-17, 2021, Proceedings, Part I , pages 505–520.Wright, M. N. and Ziegler, A. (2017). ranger: A Fast Implementation of Random Forests for HighDimensional Data in C++ and R. Journal of Statistical Software , 77(1):1–17.A Source CodeThe source code of the experiments, prepared visualizations, and tables from Appendix C isavailable in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments as the forester_benchmark.Rmd file. The markdown notebook file describesthe installation process, and it can be safely executed with the guidance of our remarks betweenthe code chunks.B ResourcesAs mentioned in the Section 6, our team was limited in computational power. The experiment wasconducted on our private PC with 32GB of RAM, CPU: 11th Gen Intel(R) Core(TM) i7-11700KF @3.60GHz (16 cores), and the GPU: NVIDIA GeForce RTX 3070 Ti, however as the forester is not yetimplemented to work on the GPU, only the CPU was used.C Raw resultsIn this section we provide information about the raw results mentioned in the Section 6 which wereused in the Figure 2. Raw results for train, test, and validation datasets are available in the GitHubrepository https://github.com/ModelOriented/forester/tree/main/misc/experiments/raw_training_results . In this section we offer the results aggregated as the mean values of the metricswhich are presented in the Table 3, Table 4, and Table 5 for the binary classification tasks. Thesetables also broaden our perspective by providing AUC and F1 values. The results for the regressiontasks are presented in the Table 6, Table 7, and Table 8. 
These tables also broaden our perspectiveby providing MSE, R2, and MAE values.13Table 3: This table provides mean accuracy, AUC, and F1 values for the forester andH2O frameworkfor all binary classification training datasets used in the benchmark.task_name framework accuracy auc f1banknote-authentication forester 1 1 1banknote-authentication H2O 0.929 0.923 0.905blood-transfusion-service-center forester 0.77 0.752 1blood-transfusion-service-center H2O 0.7 0.682 0.519breast-w forester 1 1 1breast-w H2O 0.998 0.998 0.997credit-approval forester 0.999 1 1credit-approval H2O 0.961 0.959 0.955credit-g forester 0.967 0.998 1credit-g H2O 0.906 0.855 0.938diabetes forester 0.991 0.999 1diabetes H2O 0.874 0.871 0.826kr-vs-kp forester 1 1 1kr-vs-kp H2O 0.999 0.999 0.965phoneme forester 1 1 1phoneme H2O 1 1 1Table 4: This table provides mean accuracy, AUC, and F1 values for the forester andH2O frameworkfor all binary classification testing datasets used in the benchmark.task_name framework accuracy auc f1banknote-authentication forester 0.995 0.995 1banknote-authentication H2O 0.933 0.927 0.915blood-transfusion-service-center forester 0.796 0.772 0.976blood-transfusion-service-center H2O 0.713 0.707 0.54breast-w forester 0.976 0.984 0.986breast-w H2O 0.971 0.97 0.959credit-approval forester 0.885 0.931 0.942credit-approval H2O 0.882 0.882 0.87credit-g forester 0.733 0.79 0.865credit-g H2O 0.743 0.64 0.829diabetes forester 0.768 0.823 0.799diabetes H2O 0.753 0.727 0.643kr-vs-kp forester 0.994 0.999 0.991kr-vs-kp H2O 0.991 0.991 0.991phoneme forester 0.909 0.96 0.867phoneme H2O 0.904 0.895 0.84214Table 5: This table provides mean accuracy, AUC, and F1 values for the forester andH2O frameworkfor all binary classification validation datasets used in the benchmark.task_name framework accuracy auc f1banknote-authentication forester 1 1 1banknote-authentication H2O 0.916 0.908 0.887blood-transfusion-service-center forester 0.775 0.773 0.833blood-transfusion-service-center H2O 0.675 0.68 0.509breast-w forester 0.938 0.968 0.956breast-w H2O 0.967 0.97 0.953credit-approval forester 0.855 0.908 0.939credit-approval H2O 0.867 0.862 0.842credit-g forester 0.705 0.788 1credit-g H2O 0.758 0.635 0.846diabetes forester 0.747 0.803 0.866diabetes H2O 0.755 0.735 0.656kr-vs-kp forester 0.99 0.999 0.99kr-vs-kp H2O 0.99 0.99 0.99phoneme forester 0.901 0.954 0.851phoneme H2O 0.9 0.896 0.839Table 6: This table provides mean RMSE, MSE, R2, and MAE values for the forester andH2O frameworkfor all regression training datasets used in the benchmark.task_name framework rmse mse r2 mae2dplanes forester 0.697 0.5 0.974 0.4232dplanes H2O 0.984 0.969 0.95 0.785bank32nh forester 0.001 0 1 0.001bank32nh H2O 0.054 0.003 0.806 0.037elevators forester 0.001 0 0.978 0.001elevators H2O 0.002 0 0.942 0.001kin8nm forester 0.012 0 0.997 0.009kin8nm H2O 0.066 0.004 0.937 0.051Mercedes_Benz_Greener_Manufacturing forester 2.456 6.13 0.963 0.775Mercedes_Benz_Greener_Manufacturing H2O 7.806 61.115 0.625 4.935pol forester 1.139 1.483 0.999 0.699pol H2O 1.803 3.251 0.998 0.829wine_quality forester 0.071 0.005 0.993 0.031wine_quality H2O 0.161 0.027 0.965 0.12415Table 7: This table provides mean RMSE, MSE, R2, and MAE values for the forester andH2O frameworkfor all regression testing datasets used in the benchmark.task_name framework rmse mse r2 mae2dplanes forester 1.003 1.007 0.948 0.8022dplanes H2O 1.004 1.008 0.948 0.802bank32nh forester 0.08 0.006 0.548 0.053bank32nh H2O 0.076 0.006 0.599 0.05elevators forester 0.002 0 0.884 0.002elevators H2O 
0.002 0 0.911 0.001kin8nm forester 0.113 0.013 0.816 0.087kin8nm H2O 0.084 0.007 0.899 0.065Mercedes_Benz_Greener_Manufacturing forester 7.554 57.195 0.626 5.039Mercedes_Benz_Greener_Manufacturing H2O 7.583 57.598 0.623 5.222pol forester 4.739 22.508 0.987 2.242pol H2O 3.198 10.278 0.994 1.3wine_quality forester 0.614 0.377 0.505 0.451wine_quality H2O 0.604 0.365 0.521 0.43Table 8: This table provides mean RMSE, MSE, R2, and MAE values for the forester andH2O frameworkfor all regression validation datasets used in the benchmark.task_name framework rmse mse r2 mae2dplanes forester 0.999 0.997 0.948 0.7992dplanes H2O 1 0.999 0.948 0.8bank32nh forester 0.082 0.007 0.544 0.053bank32nh H2O 0.078 0.006 0.591 0.052elevators forester 0.002 0 0.875 0.002elevators H2O 0.002 0 0.907 0.001kin8nm forester 0.111 0.012 0.822 0.085kin8nm H2O 0.083 0.007 0.899 0.065Mercedes_Benz_Greener_Manufacturing forester 8.464 73.039 0.559 5.261Mercedes_Benz_Greener_Manufacturing H2O 8.458 72.911 0.56 5.373pol forester 4.379 19.256 0.989 1.885pol H2O 3.01 9.087 0.995 1.213wine_quality forester 0.632 0.399 0.478 0.466wine_quality H2O 0.624 0.389 0.492 0.447D Used assetsIn this section we describe the packages used for both forester , and the experiments. The packagesoutside of the forester required for the experiments are listed in the Table 9. Additional requirementfor the catboost andH2O packages is installed Java. The packages required by the forester as wellas their versions used during the experiment are presented in the Table 10.16Table 9: The packages and their versions under which the experiments were executed and supplementalmaterials were created.package version licensexlsx 0.6.5 GPL-3stringr 1.5.0 MITggbeeswarm 0.6.0 GPL (>= 2)dplyr 1.0.10 MITggplot2 3.4.0 MITtictoc 1.1 Apache License (== 2.0)H2O 3.38.0.1 Apache License (== 2.0)forester 1.2.1 GPL-3OpenML 1.12 BSD_3_clauseTable 10: The forester package’s dependencies and their versions used during the experiments.package version licenceBoruta 7.0.0 GPL (>= 2)catboost 1.1.1 Apache License (== 2.0)crayon 1.5.2 MITDALEX 2.4.2 GPLdata.table 1.14.2 MPL-2.0ggplot2 3.4.0 MITggradar 0.2 GPLggrepel 0.9.3 GPL-3knitr 1.40 GPLlightgbm 3.3.2 MITmice 3.14.0 GPL-2 | GPL-3mltools 0.3.5 MITParBayesianOptimization 1.2.4 GPL-2partykit 1.2-16 GPL-2 | GPL-3pROC 1.18.0 GPL (>= 3)ranger 0.14.1 GPL-3rcompanion 2.4.18 GPL-3rmarkdown 2.16 GPL-3splitTools 0.3.2 GPL (>= 2)testthat 3.1.6 MITtibble 3.1.8 MITtinytex 0.43 MITvarhandle 2.0.5 GPL (>= 2)xgboost 1.6.0.1 Apache License (== 2.0)stats 4.1.2 Part of R 4.1.217E Execution times comparisonIn this section we briefly explore the times needed for every experiment execution for both frame-works. The results presented in Table 11, and Table 12 show that final execution times differ, despitesetting exactly the same times for H2O experiment as the forester had. Our empirical results showthat the H2O runs lasted two times longer on average than the forester , which puts a differentlight on the comparison of the frameworks performance. 
Raw results needed for these tables are available in the GitHub repository https://github.com/ModelOriented/forester/tree/main/misc/experiments/execution_times .

Table 11: The comparison of mean execution times in seconds for the forester and H2O for binary classification experiments.

task_name                         forester  H2O      difference  relative difference
banknote-authentication           818.33    2521.33  -1703       0.28
blood-transfusion-service-center  155.67    555.67   -400        0.26
breast-w                          451.33    797.33   -346        0.57
credit-approval                   805       1513     -708        0.53
credit-g                          2453      4234     -1781       0.58
diabetes                          1645.67   2643.67  -998        0.62
kr-vs-kp                          451.33    806.67   -355.33     0.57
phoneme                           2748.33   3695.33  -947        0.67

Table 12: The comparison of mean execution times in seconds for the forester and H2O for regression experiments.

task_name                            forester  H2O      difference  relative difference
2dplanes                             401       1050.67  -649.67     0.38
bank32nh                             708.67    1214.67  -506        0.58
elevators                            720.33    1435.33  -715        0.5
kin8nm                               544.67    1564     -1019.33    0.35
Mercedes_Benz_Greener_Manufacturing  848       1371.67  -523.67     0.61
pol                                  756       1548.33  -792.33     0.49
wine_quality                         1317.33   2130     -812.67     0.63

F Package comparison
We have prepared a notebook showing the differences between the packages described in the related work section. The document includes a comparison of package installation, a description of available preprocessing, variable selection options, and model tuning. In addition, visualizations, methods of explainable machine learning, report preparation, and references to available package documentation are described. We do not give a final assessment of the best package because it could be subjective, but we leave the critical comparison to the reader. The notebook is available in the GitHub repository https://github.com/ModelOriented/forester/blob/main/misc/experiments/framework_comparison.Rmd .

G Report example

Forester report (version 1.2.1, generated 2023-05-20 01:36:36)

This report contains details about the best trained model, a table with metrics for every trained model, a scatter plot for the chosen metric, and info about the used data.

The best models
This is the binary_clf task. The best model is: xgboost_RS_5. The names of the models were created by the pattern Engine_TuningMethod_Id, where:
• Engine describes the engine used for the training (random_forest, xgboost, decision_tree, lightgbm, catboost),
• TuningMethod describes how the model was tuned (basic for basic parameters, RS for random search, bayes for Bayesian optimization),
• Id for separating the random search parameter sets.
More details about the best model are present at the end of the report.

no.  name            accuracy  auc     f1
13   xgboost_RS_5    0.7919    0.8088  0.2791
7    ranger_RS_4     0.7785    0.6965  0.1538
18   lightgbm_RS_5   0.7785    0.7361  0.4211
2    xgboost_model   0.7718    0.7090  0.4138
14   lightgbm_RS_1   0.7718    0.7578  0.3704
4    ranger_RS_1     0.7651    0.7930  NaN
6    ranger_RS_3     0.7651    0.7228  NaN
10   xgboost_RS_2    0.7651    0.7801  NaN
11   xgboost_RS_3    0.7651    0.7367  NaN
16   lightgbm_RS_3   0.7651    0.7690  NaN
21   lightgbm_bayes  0.7651    0.7340  0.3636
8    ranger_RS_5     0.7584    0.7579  0.0526
12   xgboost_RS_4    0.7517    0.6609  0.3729
19   ranger_bayes    0.7517    0.7333  0.2449
20   xgboost_bayes   0.7517    0.7409  0.2449
1    ranger_model    0.7450    0.7063  0.3214
3    lightgbm_model  0.7450    0.6842  0.3871
9    xgboost_RS_1    0.7450    0.6619  0.3667
15   lightgbm_RS_2   0.7181    0.6058  0.3824
17   lightgbm_RS_4   0.7181    0.6058  0.3824
5    ranger_RS_2     0.7114    0.6929  0.2712

Plots for all models
[Model comparison plot: accuracy, auc, and f1 for all trained models (xgboost_RS_5, ranger_RS_4, lightgbm_RS_5, xgboost_model, lightgbm_RS_1, and the remaining models).]

Plots for the best model - xgboost_RS_5
[ROC curve for xgboost_RS_5 (AUC = 0.8088), plotted as sensitivity versus specificity.]
[Confusion matrix for xgboost_RS_5: predicted versus target class counts.]

Feature Importance for the best model - xgboost_RS_5
[Feature importance plot for the xgb.Booster model: root mean square error (RMSE) loss after permutations for the variables V4, V3, V2, and V1.]

Details about data
——————– CHECK DATA REPORT ——————–
The dataset has 748 observations and 5 columns, whose names are: V1; V2; V3; V4; Class; with the target value described by the column: Class.
No static columns.
No duplicate columns.
No target values are missing.
No predictor values are missing.
No issues with dimensionality.
Strongly correlated, by Spearman rank, pairs of numerical values are: V2 - V3: 1;
These observations might be outliers due to their numerical column values: 1 10 116 342 496 497 498 499 5 500 501 503 504 505 506 518 529 747 748;
Dataset is unbalanced with: 3.202247 proportion with 1 being a dominating class.
Column names suggest that none of them are IDs.
Column data suggest that none of them are IDs.
——————– CHECK DATA REPORT END ——————–

The best model details
------------ Xgboost model ------------
Parameters: niter: 20
[Evaluation log: train_auc per iteration for iterations 1-20; the values were not rendered in the extracted report.]
XHIY3cQ8Tew | AutoGluon–TimeSeries:AutoML for Probabilistic Time Series ForecastingOleksandr Shchur1Caner Turkmen1Nick Erickson1Huibin Shen2Alexander Shirkov1Tony Hu1Yuyang Wang21Amazon Web Services2AWS AI LabsAbstract We introduce AutoGluon–TimeSeries—an open-source AutoML library for probabilistic timeseries forecasting.1Focused on ease of use and robustness, AutoGluon–TimeSeries enablesusers to generate accurate point and quantile forecasts with just 3 lines of Python code. Builton the design philosophy of AutoGluon, AutoGluon–TimeSeries leverages ensembles ofdiverse forecasting models to deliver high accuracy within a short training time. AutoGluon–TimeSeries combines both conventional statistical models, machine-learning basedforecasting approaches, and ensembling techniques. In our evaluation on 29 benchmarkdatasets, AutoGluon–TimeSeries demonstrates strong empirical performance, outperforminga range of forecasting methods in terms of both point and quantile forecast accuracy, andoften even improving upon the best-in-hindsight combination of prior methods.1 IntroductionTime series (TS) forecasting is a fundamental statistical problem with applications in diversedomains such as inventory planning (Syntetos et al., 2009), smart grids (Hong et al., 2020), andepidemiology (Nikolopoulos et al., 2021). Decades of research led to development of variousforecasting approaches, from simple statistical models (Hyndman and Athanasopoulos, 2018) toexpressive deep-learning-based architectures (Benidis et al., 2022). Despite the availability of variousforecasting approaches, practitioners often struggle with selecting the most appropriate methodand adhering to best practices when implementing and evaluating forecasting pipelines.AutoML aims to mitigate these challenges by providing tools that enable practitioners to developaccurate and efficient predictive models without extensive domain knowledge. While traditionalAutoML methods have focused primarily on classification and regression tasks for tabular data(Thornton et al., 2013; Feurer et al., 2015; Olson and Moore, 2016; Erickson et al., 2020; LeDell andPoirier, 2020; Zimmer et al., 2021), automated time series forecasting has received comparativelyless attention, with only a few open-source AutoML forecasting frameworks having been proposed(Deng et al., 2022; Catlin, 2022). Furthermore, existing automated forecasting frameworks tend togenerate point forecasts without considering uncertainty, which is a crucial factor in many practicalapplications (Gneiting and Katzfuss, 2014).To close this gap, we introduce AutoGluon–TimeSeries (AG–TS), an open-source AutoML frame-work for probabilistic time series forecasting written in Python. AG–TS can generate both pointand probabilistic forecasts for collections of univariate time series. Together with support for staticand time-varying covariates, this makes AG–TS applicable to most real-world forecasting tasks.As part of the AutoGluon framework (Erickson et al., 2020; Shi et al., 2021), AG–TS adheres tothe principles of ease of use and robustness, empowering users with limited expertise in the targetdomain to generate highly accurate predictions with minimal coding effort. 
The architecture is1https://github.com/autogluon/autogluonAutoML 2023 Apps, Benchmarks, Challenges, and Datasets Track ©2023 the authors, released under CC BY 4.0Figure 1: Point forecast (left) and quantile forecast (right) for a univariate time series.capable of handling failures of individual models when necessary, producing a valid result as longas any single model was trained successfully.We evaluate the performance of AG–TS against other established forecasting methods andAutoML systems using 29 publicly available benchmark datasets. The results demonstrate AG–TS’s strong performance, outperforming various competing approaches in terms of both pointand probabilistic forecast accuracy. This highlights the potential of AG–TS as a valuable tool forpractitioners and researchers seeking an automated and versatile solution for time series forecasting.2 Probabilistic Time Series ForecastingThe probabilistic time series forecasting problem can be formally stated as follows. The dataD={yi,1:Ti}Ni=1is a collection of Nunivariate time series, where yi,1:Ti=(yi,1,...,yi,T i),yi,tis thevalue of the i-th time series at time t, andTiis the length of the i-th time series.2For example,yi,tmay correspond to the number of units of product isold on day t. The goal of time seriesforecasting is to predict the future Hvalues for each time series in D. The parameter His knownasprediction length orforecast horizon .Each time series yi,1:Tmay additionally be associated with covariates Xi,1:T+H. These includeboth static covariates (e.g., location of the store, product ID) and time-varying covariates . Thetime-varying covariates may, in turn, be known in the future (e.g., day of the week, promotions) oronly known in the past (e.g., weather, sales of other products).In the most general form, the goal of probabilistic forecasting is to model the conditionaldistribution of the future time series values yi,T+1:T+Hgiven the past values yi,1:Tand the relatedcovariates Xi,1:T+Hp(yi,T+1:T+H|yi,1:T,Xi,1:T+H).In practice, we are rarely interested in the full predictive distribution and rather represent therange of possible outcomes with quantile forecasts ˆyqi,T+1:T+Hfor chosen quantile levels q∈(0,1).The quantile forecast implies that the future time series value yi,T+his predicted to exceed ˆyqi,T+hwith probability q(Wen et al., 2017; Lim et al., 2021).If the uncertainty is of no interest, we can instead report a point forecast of the future timeseries values. For example, we can summarize the prediction using the conditional meanˆyi,T+1:T+H=Ep[yi,T+1:T+H|yi,1:T,Xi,1:T+H].Figure 1 demonstrates the difference between a point forecast and a quantile forecast. Finally, notethat here we consider the problem of forecasting multiple univariate time series, also known aspanel data, which is different from multivariate forecasting (Benidis et al., 2022).2To reduce clutter in notation, we assume that all time series have the same length T(even though AG–TS supportsthe case when time series have different lengths).23 AutoGluon–TimeSeriesAutoGluon–TimeSeries enables users to generate probabilistic time series forecasts in a few linesof code, as shown by the following minimal example.1from autogluon . timeseries import TimeSeriesDataFrame , TimeSeriesPredictor23train_data = TimeSeriesDataFrame . from_path (" train . csv ")4predictor = TimeSeriesPredictor ( prediction_length =30) . fit ( train_data )5predictions = predictor . predict ( train_data ) # forecast next 30 time stepsLoading the data. 
ATimeSeriesDataFrame object stores a collection of univariate time series andprovides utilities such as loading data from disk and train-test splitting. Internally, time series datais represented as a pandas.DataFrame (pandas development team, 2020) in long format (Table 1),but loaders are also available for other formats. Besides the target time series that need to beforecast, TimeSeriesDataFrame can also store the static and time-varying covariates.Table 1: Collection of univariate time series stored as a TimeSeriesDataFrame . Each row containsunique ID of the time series, timestamp, and the value of the target time series.item_id timestamp targetT1 2020-03-02 23T1 2020-03-03 43·········T999 2020-08-29 15T999 2020-08-31 27Defining the task. Users can specify the forecasting task by creating a TimeSeriesPredictorobject. Task definition includes information such as prediction length , list of quantile levels tobe predicted, and the evaluation metric . The evaluation metric should be chosen based on thedownstream application. For example, mean weighted quantile loss (wQL) measures the accuracy ofquantile forecasts, and mean absolute scaled error (MASE) reports the accuracy of the point forecastrelative to a naive baseline. When creating the predictor, users can also specify what time-varyingcovariates are known in the future—the remainder will be treated as past-only covariates.Fitting the predictor. Inside the fit() method, the predictor preprocesses the data, fits andevaluates various models using cross-validation, optionally performs hyperparameter optimization(HPO) on selected models, and trains an ensemble of the individual forecasting models. By default,AG–TS provides user-friendly presets users can choose from to manage the training time–accuracytradeoff. Advanced users can also explicitly specify the models to use and their hyperparameters,or specify search spaces in which optimal hyperparameters will be searched.Making predictions. After the predictor has been fit, the predict() method can be used to generatepredictions on new data—including time series that haven’t been seen during training. Like theinput data, the predictions are stored in a long-format data frame, where the columns contain themean (expected value) and quantile forecasts at the desired quantile levels (Table 2).Documentation. We provide various additional resources on the official website auto.gluon.ai.These include installation instructions, tutorials, and a cheatsheet summarizing the main features.3.1 Design ConsiderationsAG–TS was launched as a part of the AutoGluon suite (Erickson et al., 2020) in v0.5, building onthe foundation of AutoGluon and borrowing some design elements from other forecasting librarieslike GluonTS (Alexandrov et al., 2020). Since then, AG–TS has evolved into a full solution for timeseries forecasting. Below, we highlight some of AG–TS’s key design principles.3Table 2: Mean and quantile forecasts generated by a TimeSeriesPredictor . The forecasts include thenext prediction_length many time steps of each time series in the dataset.item_id timestamp mean 0.1 0.5 0.9T1 2020-09-01 17 10 16 23T1 2020-09-02 25 15 23 31··················T999 2020-09-29 33 21 33 36T999 2020-09-30 30 24 28 34Ensembles over HPO. AG–TS follows the AutoGluon philosophy, relying on ensembling techniquesinstead of HPO or neural architecture search. 
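Before turning to the individual design principles, the task definition, fitting, and prediction steps described above can be summarized in a short sketch. This is an illustrative example rather than a prescription from the paper: the file name, forecast horizon, quantile levels, metric, preset, and time limit are placeholder choices.

import pandas as pd
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor

# Long-format data as in Table 1: one row per (item_id, timestamp) pair.
df = pd.read_csv("train.csv")  # assumed columns: item_id, timestamp, target
train_data = TimeSeriesDataFrame.from_data_frame(
    df, id_column="item_id", timestamp_column="timestamp"
)

# Task definition: horizon, quantile levels to predict, and evaluation metric.
predictor = TimeSeriesPredictor(
    prediction_length=30,
    quantile_levels=[0.1, 0.5, 0.9],
    eval_metric="MASE",
)

# Fitting trains and cross-validates the individual models and builds the ensemble;
# a preset trades off training time against accuracy without manual HPO.
predictor.fit(train_data, presets="medium_quality", time_limit=600)

# Predictions come back in long format with mean and quantile columns (Table 2).
predictions = predictor.predict(train_data)

Time-varying covariates that are known in the future can additionally be declared when the predictor is created (in recent versions of the library via the known_covariates_names argument) and supplied at prediction time, as described in the "Defining the task" paragraph above.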
The library features a broad selection of modelswhose probabilistic forecasts are combined in an ensemble selection step (Caruana et al., 2004).AG–TS favors broadening the portfolio of forecasters over exploring the hyperparameter space ofany particular model. While AG–TS does support HPO techniques, HPO is excluded from mostpreset configurations to reduce training time and minimize overfitting on the validation data.Presets and default hyperparameters. In order to provide defaults that work well out of the box forusers that are not familiar with forecasting, AG–TS includes various presets —high-level configura-tion options that allow users to trade off between fast training and higher accuracy. AG–TS followsthe convention-over-configuration principle: all models feature default configurations of hyperpa-rameters that are expected to work well given the selected preset. At the same time, advanced usershave an option to manually configure individual models and use the TimeSeriesPredictor as aunified API for training, evaluating and combining various forecasting models (see documentationfor details).Model selection. Time series forecasting introduces unique challenges in model validation andselection. Importantly, as the main aim of the model is to generalize into the future , special carehas to be taken to define validation sets that are held out across time . The AG–TS API is designedwith this consideration. If the user does not explicitly specify a validation set, the library holds thewindow with last prediction_length time steps of each time series as a validation set. Optionally,multiple windows can be used to perform so-called backtesting .3.2 Forecasting ModelsThere are two families of approaches to forecasting in large panels of time series. The first approachis to fit local classical parametric statistical models to each individual time series. A second approachis built on expressive machine-learning-based approaches that are fit globally on all time series atonce. AG–TS features both approaches, incorporating forecasting models from both families andcombining them in an ensemble.Local models. This category contains conventional methods that capture simple patterns liketrend and seasonality. Examples include ARIMA (Box et al., 1970), Theta (Assimakopoulos andNikolopoulos, 2000) and ETS(Hyndman et al., 2008), as well as simple baselines like Seasonal Naive(Hyndman and Athanasopoulos, 2018). AG–TS relies on implementations of these provided byStatsForecast (Garza et al., 2022).The defining characteristic of local models is that a separate model is fit to each individualtime series in the dataset (Januschowski et al., 2020). This means that local models need to be re-fitwhen making predictions for new time series not seen during training. To mitigate this limitation,AG–TS caches the model predictions and parallelizes their fitting across CPU cores using Joblib(Joblib Development Team, 2020).4Global models. Unlike local models, a single global model is fitted to the entire dataset and usedto make predictions for all time series. Global models used by AG–TS can be subdivided intotwo categories: deep learning and tabular models. Deep-learning models such as DeepAR (Salinaset al., 2020), PatchTST (Nie et al., 2023), and Temporal Fusion Transformer (Lim et al., 2021) useneural networks to generate probabilistic forecasts for future data. AG–TS uses PyTorch-baseddeep learning models from GluonTS (Alexandrov et al., 2020). 
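As a brief aside before discussing tabular global models, the window-based validation scheme described under "Model selection" above can be illustrated with a simplified sketch. It only illustrates the idea, under the assumption of a long-format data frame with item_id, timestamp, and target columns; it is not the library's internal implementation.

import pandas as pd

def split_last_window(df: pd.DataFrame, prediction_length: int):
    # Hold out the last `prediction_length` time steps of every series as validation data.
    df = df.sort_values(["item_id", "timestamp"])
    val = df.groupby("item_id", group_keys=False).tail(prediction_length)
    train = df.drop(index=val.index)
    return train, val

def backtest_splits(df: pd.DataFrame, prediction_length: int, num_windows: int):
    # Several consecutive hold-out windows ("backtesting"): window 0 uses the most
    # recent data, later windows slide further into the past.
    for i in range(num_windows):
        trimmed = df.groupby("item_id", group_keys=False).apply(
            lambda g: g.iloc[: max(len(g) - i * prediction_length, 0)]
        )
        yield split_last_window(trimmed, prediction_length)

In AG–TS itself such splits are produced automatically during fitting unless the user supplies an explicit validation set.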
Tabular models like LightGBM (Ke et al., 2017) operate by first converting the time series forecasting task into a tabular regression problem. This can be done either recursively—by predicting future time series values one at a time—or by directly forecasting all future values simultaneously (Januschowski et al., 2022). AG–TS relies on regression models provided by AutoGluon–Tabular and uses MLForecast (Nixtla, 2023) for converting them into tabular forecasters.

Global models typically provide faster inference compared to local models, since there is no need for re-training at prediction time. This, however, comes at the cost of longer training times, since more parameters need to be estimated. Global models also naturally handle various types of covariates and utilize information present across different time series, which is known as cross-learning (Semenoglou et al., 2021).

Ensembling. After AG–TS finishes sequentially fitting the individual models, they are combined using 100 steps of the forward selection algorithm (Caruana et al., 2004). The output of the ensemble is a convex combination of the model predictions:
$$\hat{y}^{\mathrm{ensemble}}_{i,T+1:T+H} = \sum_{m=1}^{M} w_m \cdot \hat{y}^{(m)}_{i,T+1:T+H} \quad \text{subject to} \quad w_m \geq 0, \quad \sum_{m=1}^{M} w_m = 1,$$
where $\hat{y}^{(m)}_{i,T+1:T+H}$ are either point or quantile forecasts generated by each of the $M$ trained models. Note that in the case of probabilistic forecasting, the ensemble computes a weighted average of the quantile forecasts of the individual models—a method known as Vincentization (Ratcliff, 1979).

The ensemble weights $w_m$ are tuned to optimize the chosen evaluation metric (e.g., wQL, MASE) on the out-of-fold predictions generated using time series cross-validation (Hyndman and Athanasopoulos, 2018). The main advantages of the forward selection algorithm are its simplicity, compatibility with arbitrary evaluation metrics, and the sparsity of the final ensemble.

4 Related work
Time series forecasting is a challenging task, and the idea of automated forecasting has long intrigued statistics and ML researchers. An early influential work on automated forecasting was the R package forecast (Hyndman and Khandakar, 2008), which introduced the AutoETS and AutoARIMA models. These models automatically tune their parameters (e.g., trend, seasonality) for each individual time series using an in-sample information criterion.

The following decade saw a growing focus on deep learning models for time series (Benidis et al., 2022; Wen et al., 2017; Salinas et al., 2020; Lim et al., 2021; Oreshkin et al., 2020). Several works have explored how such neural-network-based models can be combined with AutoML techniques to generate automated forecasting solutions (Van Kuppevelt et al., 2020; Shah et al., 2021; Javeri et al., 2021). Another line of research focused on optimizing the entire forecasting pipeline—including data preprocessing and feature engineering—not just hyperparameter tuning for individual models (Dahl, 2020; Kurian et al., 2021; da Silva et al., 2022). A recent survey by Meisenbacher et al. (2022) provides an overview of such automated pipelines.

Even though AutoML for forecasting is becoming an active research topic, few of the recent developments have found their way from academic papers to software packages. Available open-source AutoML forecasting libraries include AutoPyTorch–Forecasting (Deng et al., 2022), AutoTS (Catlin, 2022) and PyCaret (Ali, 2020).
In contrast to these frameworks, AG–TS supports probabilisticforecasting and focuses on ease of use, allowing users to generate forecasts in a few lines of code.55 Experiments5.1 SetupThe goal of our experiments is to evaluate the point and probabilistic forecast accuracy of AG–TS.As baselines, we use various statistical and ML-based forecasting methods.Baseline methods. AutoARIMA ,AutoETS , and AutoTheta are established statistical forecastingmodels that automatically tune model parameters for each time series individually based on aninformation criterion (Hyndman et al., 2008). This means, such models do not require a validationset and use in-sample statistics for model tuning. StatEnsemble is defined by taking the median ofthe predictions of the three statistical models. Such statistical ensembles, despite their simplicity,have been shown to achieve competitive results in forecasting competitions (Makridakis et al.,2018). We use Python implementations of all these methods provided by the StatsForecast library(Garza et al., 2022). We additionally use Seasonal Naive as a sanity-check baseline that all othermethods are compared against (Hyndman and Athanasopoulos, 2018).For ML-based methods, we include two established deep learning forecasting models, DeepAR(Salinas et al., 2020) and Temporal Fusion Transformer (TFT) (Lim et al., 2021). We use the PyTorchimplementations of these models provided by GluonTS (Alexandrov et al., 2020). Finally, we includethe AutoML forecasting framework AutoPyTorch–Forecasting (Deng et al., 2022) to our comparison.AutoPyTorch builds deep learning forecasting models by combining neural architecture search (e.g.,by trying various encoder modules) and hyperparameter optimization (e.g., by tuning the learningrate). The search process is powered by a combination of Bayesian and multi-fidelity optimization.Similar to AutoGluon, the models are combined using ensemble selection (Caruana et al., 2004).Datasets. In our evaluation we use 29 publicly available forecasting benchmark datasets providedvia GluonTS. These include datasets from the Monash Forecasting Repository (Godahewa et al.,2021), such as the M1, M3 and M4 competition data (Makridakis and Hibon, 2000; Makridakis et al.,2018). We selected the datasets from the Monash Repository that contain more than a single timeseries and fewer than 15M total time steps. Our selection of datasets covers various scenarios thatcan be encountered in practice—from small datasets (M1 and M3), to datasets with a few long timeseries (Electricity, Pedestrian Counts) and large collections of medium-sized time series (M4). Acomprehensive list of dataset statistics are provided in Table 8 in the appendix.Configuration. We train the TimeSeriesPredictor from AG–TS with best_quality presets, asthese are designed to produce the most accurate forecasts, and set the time_limit to 4 hours. Notethat the presets were fixed a priori and not optimized using the benchmark datasets. DeepAR andTFT are also trained for up to 4 hours with early stopping on validation loss with patience set to200. For these models, the model checkpoint achieving the best validation loss is used to generatethe test predictions. The time limit for AutoPyTorch is similarly set to 4 hours. We set no time limitfor the remaining statistical models, as they do not support such functionality. 
In case the runtimeof a single experiment exceeds 6 hours, the job is interrupted and the result is marked as failure.More details about the configuration are available in Appendix A.3.All models are trained using AWS m6i.4xlarge cloud instances (16 vCPU cores, 64 GB RAM). Weuse CPU instances to fairly evaluate the CPU-only baselines, though AG–TS additionally supportsGPU training. Each run is repeated 5 times using different random seeds for non-deterministicmodels. We run all experiments using AutoMLBenchmark (Gijsbers et al., 2022). In the supplement,we provide full configuration details and the scripts for reproducing all experiments.5.2 Forecasting AccuracyWe measure the accuracy of the point forecasts by reporting the mean absolute scaled error(MASE) of all forecasting methods on all benchmark datasets. AG–TS and AutoPyTorch are trained6Table 3: Point forecast accuracy comparison of baseline methods with AutoGluon (based on the MASEmetric) on 29 datasets. Listed are the number datasets where each method produced: lowererror than AutoGluon (Wins), higher error (Losses), error within 0.001 (Ties), error duringprediction (Failures), or the lowest error among all methods (Champion). Average rank andaverage error are computed using the datasets where no method failed. We rescale the errorsfor each dataset between [0,1]to ensure that averaging is meaningful. The final columnreports the win rate versus the Seasonal Naive baseline. Individual results are given in Table 9.Framework Wins Losses Ties Failures ChampionAveragerankAveragerescaled errorWin rate vs.baselineAutoGluon (MASE) - - - 0 19 2.08 0.073 100.0%StatEnsemble 6 20 0 3 3 3.12 0.238 82.8 %AutoPyTorch (MASE) 4 25 0 0 2 4.12 0.257 93.1%AutoETS 4 25 0 0 1 4.64 0.374 75.9 %AutoTheta 4 23 0 2 0 4.92 0.427 72.4 %DeepAR 4 24 0 1 2 5.08 0.434 93.1 %AutoARIMA 4 22 0 3 1 5.92 0.612 79.3 %TFT 2 27 0 0 1 6.12 0.635 75.9 %Table 4: Probabilistic forecast accuracy comparison of each baseline method with AutoGluon (based onthe wQL metric) on 29 datasets. The columns are defined as in Table 3. Results for individualmodels and datasets are given in Table 10.Framework Wins Losses Ties Failures ChampionAveragerankAveragerescaled errorWin rate vs.baselineAutoGluon (wQL) - - - 0 19 1.80 0.086 100.0%StatEnsemble 3 23 0 3 0 3.36 0.330 86.2%DeepAR 5 23 0 1 1 4.08 0.455 89.7%TFT 5 24 0 0 5 4.24 0.487 89.7%AutoETS 3 26 0 0 2 4.40 0.489 69.0%AutoTheta 2 25 0 2 1 5.00 0.545 69.0%AutoARIMA 4 22 0 3 1 5.12 0.641 82.8%to optimize the MASE metric, while all other models are trained using their normal trainingprocedure. We report the aggregate statistics in Table 3, and provide the full results for individualmodels and datasets in Table 9 in the appendix.We measure the accuracy of the probabilistic (quantile) forecasts by reporting the meanweighted quantile loss (wQL) averaged over 9 quantile levels q∈{0.1,0.2,...,0.9}. AG–TS isconfigured to optimize the wQL metric. We exclude AutoPyTorch from this comparison since thisframework does not support probabilistic forecasting. We report the aggregate statistics in Table 4,and provide the full results for individual models and datasets in Table 10 in the appendix.Some of the frameworks failed to generate forecasts on certain datasets. AutoARIMA, AutoThetaand StatEnsemble did not finish training on some datasets (Electricity–Hourly, KDD Cup 2018,and Pedestrian Counts) within 6 hours. This is caused by the poor scaling of these models to verylong time series. 
DeepAR model fails on one dataset (Web Traffic Weekly) due to numerical errorsencountered during training.Discussion. The results demonstrate that AG–TS outperforms all other frameworks, achieving thebest average rank and rescaled error for both point and probabilistic forecasts, and even beatingthe best-in-hindsight competing method on 19 out of 29 datasets.StatEnsemble places second after AG–TS. The statistical ensemble performs especially well onsmall datasets such as M1 and M3. This demonstrates that in the low-data regime simple approaches,7Figure 2: Total runtime of each framework across all datasets. AutoGluon always completes trainingand prediction under the time limit and achieves a mean runtime of 33 minutes. AutoPyTorchis always trained for the full 4 hour time limit. Statistical models train faster in most cases,but may take an extremely long time to train on datasets with long time series. The runtimesfor individual models and datasets are provided in Table 11.like ensembling by taking the median, may perform better than the learned ensemble selectionstrategy employed by both AutoML frameworks.AutoPyTorch achieves similar performance to StatEnsemble in point forecasting across mostperformance indicators. Interestingly, AG–TS tends to outperform AutoPyTorch on larger datasetslike M4. This means that AG–TS’s strategy of training various light-weight models performs wellin this setting under the limited time budget. Also note, configuring AutoPyTorch requires morecode and domain knowledge, compared to the 3 lines of code necessary to reproduce the aboveresults by AG–TS.Deep learning models DeepAR and TFT perform well in terms of probabilistic forecasting, butfall behind simple statistical approaches in point forecasts. This makes sense, since the objectivefunctions optimized by these deep learning models are designed for probabilistic forecasting.5.3 Runtime ComparisonHigh accuracy is not the only important property of an AutoML system—the ability to generatepredictions in a reasonable amount of time is often necessary in practice. To evaluate the efficiency ofAG–TS, we compare its runtime with other frameworks. We visualize the runtime of each frameworkacross all datasets in Figure 2. Note that here we compare the total runtime defined as the sumof training and prediction times. This reflects the typical forecasting workflow in practice, wherethe forecast is generated once for each time series. Moreover, it’s hard to distinguish between thetraining and prediction time for local models, where a new model is trained for each new time series.AG–TS completes training and prediction under the 4-hour time limit for all 29 datasets, andachieves mean runtime of 33 minutes. While statistical models are faster on average, they can beextremely slow to train on datasets consisting of long time series. For instance, the runtimes ofAutoARIMA, AutoTheta and StatEnsemble exceed 6 hours for 3 datasets with long time series. Thedeep learning models DeepAR and TFT have higher median runtime compared to the statisticalmodels, but never reach the 4 hour time limit due to early stopping. Finally, AutoPyTorch alwaysconsumes the entire 4 hour time budget due to its design.To summarize, AG–TS is able to produce accurate forecasts under mild time budgets. While, onaverage, AG–TS takes more time than the individual models, it produces more accurate forecastsand avoids the extremely long runtimes sometimes exhibited by local models. 
The results alsodemonstrate that limited training time is better spent training and ensembling many diverse models(as done by AG–TS), rather than hyperparameter tuning a restricted set of models (as done byAutoPyTorch).8Table 5: Ablation study. We compare the point forecast accuracy of AutoGluon, where certain compo-nent models are removed, ensembling is disabled, or the time limit is reduced. All versionsexcept AutoGluon-1h and AutoGluon-10m are trained for 4 hours. The columns are definedand the scores are computed as in Table 3.Framework Champion Average rank Average rescaled errorAutoGluon-1h 19 2.04 0.070AutoGluon-4h 19 2.08 0.073NoStatModels 16 2.12 0.094NoTabularModels 15 2.12 0.085NoDeepModels 15 2.28 0.124AutoGluon-10m 14 2.50 0.099NoEnsemble 7 3.52 0.1775.4 AblationsFinally, we perform ablations to understand the effect of different components on the final perfor-mance. We compare the point forecast accuracy of the TimeSeriesPredictor trained for 4 hourswith MASE evalauation metric (Section 5.2) against several variations with certain disabled com-ponents. First, we exclude some base models from the presets: statistical models ( NoStatModels ),deep learning models ( NoDeepModels ), and tabular models ( NoTabularModels ). We also considerreducing the time limit to 1 hour ( AutoGluon-1h ) or 10 minutes ( AutoGluon-10m ), as well disablingthe final ensembling step ( NoEnsemble ). In the latter case, AG–TS predicts using the model withthe best validation score. The rest of the setup is identical to Section 5.2.Table 5 shows the metrics for the different model variations, each compared to the baselinesfrom Section 5.2. AutoGluon-4h and AutoGluon-1h produce nearly identical results. This isnot surprising, as the 4-hour version finishes training under 1 hour for most datasets (Figure 2).Interestingly, AutoGluon achieves strong results even with a 10-minute time limit, achieving thebest average rank and outperforming the best-in-hindsight model on 14 out of 29 datasets.Removing the ensembling step has the most detrimental effect on the overall accuracy. Thishighlights the importance of ensembling, confirming the findings of other works (Makridakis et al.,2018; Borchert et al., 2022). The ablations also show that all 3 classes of models used by AutoGluonare important for the overall performance, deep learning models being the most critical component.6 Future WorkOur experiments demonstrate the strong forecasting accuracy achieved by AG–TS. Despite theseencouraging initial results, we aim to continue developing the library, adding new functionalityand further boost the forecasting performance. This includes incorporating the various ideas in thespace of AutoML for forecasting (Meisenbacher et al., 2022), with focus on the following directions.Ensembling. Advanced ensembling strategies, such as stacking (Ting and Witten, 1997), lie at thecore of modern high-performing AutoML systems (Erickson et al., 2020). How to best generalizethese techniques to probabilistic forecasting is an active, but still open research question (Gastingeret al., 2021; Wang et al., 2022).Calibration. Many practical tasks require guarantees on the uncertainty estimates associated withthe forecasts. Conformal prediction methods (Stankeviciute et al., 2021; Xu and Xie, 2021) provideone way to obtain such guarantees, and we plan to incorporate them into AG–TS in the future.New problem types. AG–TS supports the most common types of forecasting tasks, such as proba-bilistic forecasting or handling covariates. 
However, there are several settings that are currently (as9of v0.8) not supported. These include so-called cold-start forecasting (where little historic data isavailable) and generating forecast explanations (Rojat et al., 2021). Another interesting potentialapplication for AG–TS is assisting judgemental forecasting. In this context, AG–TS could serve as a“tool” queried by a large language model (LLM) (Schick et al., 2023) to generate qualitative forecasts.More generally, combinations of LLM with AutoML frameworks are an exciting direction for futurework (Tornede et al., 2023).Scalability. In our experiments we consider datasets with up to ≈107time steps across all time series.Modern applications, however, sometimes require operating on even larger scales. This wouldrequire improving efficiency of existing models and developing new efficient AutoML techniques.7 ConclusionsIn this work, we introduced AutoGluon–TimeSeries, a powerful and user-friendly open-sourceAutoML library for probabilistic time series forecasting. By combining statistical models and deeplearning forecasting approaches with ensembling techniques, AutoGluon–TimeSeries is able toachieve strong empirical results on a range of benchmark datasets. With the ability to generateaccurate point and quantile forecasts with just 3 lines of Python code, this framework is poised tomake time series forecasting more accessible and efficient for a wide range of users.8 Broader Impact StatementAutoGluon–TimeSeries enables users to generate accurate forecasts in a few lines of code. Thisdemocratizes machine learning, lowering the barrier to entry to forecasting for non-experts. Atthe same time, AutoGluon–TimeSeries can be used by experienced users to design highly accurateforecasting pipelines. More accurate forecasts can directly translate to real-world impact in variousdomains. For example, forecasting renewable energy generation is a crucial component of smartgrid management (Tripathy and Prusty, 2021); accurately predicting demand leads to more efficientinventory management and increased revenue (Makridakis et al., 2022).The potential negative impacts of the proposed approach are similar to those of other forecastingmodels. One such danger arises when the limitations of forecasting methods are not taken intoaccount in the context of decision making (e.g., when guiding policy decisions). As forecastingmodels only capture statistical dependencies, they may be misleading when trying to estimateeffects of actions or interventions.9 Submission Checklist1. For all authors. . .(a)Do the main claims made in the abstract and introduction accurately reflect the paper’scontributions and scope? [Yes] All claims are supported by the experimental evaluation inSection 5.(b) Did you describe the limitations of your work? [Yes] See Section 6.(c)Did you discuss any potential negative societal impacts of your work? [Yes] See Section 8.(d)Have you read the ethics author’s and review guidelines and ensured that your paper con-forms to them? https://automl.cc/ethics-accessibility/ [Yes] The paper conformsto the guidelines.2. If you are including theoretical results. . .(a)Did you state the full set of assumptions of all theoretical results? [N/A] The paper containsno theoretical results.10(b)Did you include complete proofs of all theoretical results? [N/A] The paper contains notheoretical results.3. If you ran experiments. . 
.(a)Did you include the code, data, and instructions needed to reproduce the main experimen-tal results, including all requirements (e.g., requirements.txt with explicit version), aninstructive README with installation, and execution commands (either in the supplementalmaterial or as a url)? [Yes] All of the above included in the supplementary material.(b)Did you include the raw results of running the given instructions on the given code anddata? [Yes] Results are provided in CSV format.(c)Did you include scripts and commands that can be used to generate the figures and tablesin your paper based on the raw results of the code, data, and instructions given? [No]We provide the raw data and describe the procedure in the paper, which should makereproducing the results and figures straightforward.(d)Did you ensure sufficient code quality such that your code can be safely executed and thecode is properly documented? [Yes] The code is properly documented and we made surethat it can be executed in a fresh environment.(e)Did you specify all the training details (e.g., data splits, pre-processing, search spaces, fixedhyperparameter settings, and how they were chosen)? [Yes] We use the standard evaluationprotocol: For all datasets, the last prediction_length time steps of each time series areheld out and used to evaluate the forecasts produced by each method. For hyperparameters,see Section A.3.(f)Did you ensure that you compared different methods (including your own) exactly onthe same benchmarks, including the same datasets, search space, code for training andhyperparameters for that code? [Yes] We carefully made sure that this is the case.(g)Did you run ablation studies to assess the impact of different components of your approach?[Yes] See Section 5.4.(h)Did you use the same evaluation protocol for the methods being compared? [Yes] Allmethods use an identical evaluation protocol.(i)Did you compare performance over time? [Yes] We allocate the same runtime budget of 4hours to all methods. An ablation study is performed where the time limit is reduced to 1hour and 10 minutes for AutoGluon.(j)Did you perform multiple runs of your experiments and report random seeds? [Yes]For all non-deterministic methods, the experiments are repeated with five random seeds:1,2,3,4,5 .(k)Did you report error bars (e.g., with respect to the random seed after running experimentsmultiple times)? [Yes] Error metrics produced by all non-deterministic methods include themean and the standard deviation (see Tables 9 and 10).(l)Did you use tabular or surrogate benchmarks for in-depth evaluations? [No] These are notavailable for probabilistic time series forecasting.(m) Did you include the total amount of compute and the type of resources used (e.g., type ofgpus, internal cluster, or cloud provider)? [Yes] The compute infrastructure is describedin Section 5.1. The total runtime of all experiments equals approximately 6000 hours ( ≈#models×# seeds×# of datasets).11(n)Did you report how you tuned hyperparameters, and what time and resources this required(if they were not automatically tuned by your AutoML method, e.g. in a nasapproach; andalso hyperparameters of your own method)? [Yes] We describe the hyperparameter settingsin Appendix A.3, in addition to providing the code that can be used to reproduce the results.4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets. . .(a)If your work uses existing assets, did you cite the creators? 
[Yes] References for all useddatasets and methods are provided in Section 5.1.(b)Did you mention the license of the assets? [Yes] This paper does not introduce any newpublic assets. The AutoGluon library is released under the Apache 2.0 License.(c)Did you include any new assets either in the supplemental material or as a url? [No] Thispaper does not introduce any new public assets.(d)Did you discuss whether and how consent was obtained from people whose data you’reusing/curating? [N/A] The evaluation was performed using public benchmark datasets.(e)Did you discuss whether the data you are using/curating contains personally identifiableinformation or offensive content? [N/A] The evaluation was performed using publicbenchmark datasets.5. If you used crowdsourcing or conducted research with human subjects. . .(a)Did you include the full text of instructions given to participants and screenshots, if appli-cable? [N/A] We did not use crowdsourcing or conduct research with human subjects.(b)Did you describe any potential participant risks, with links to Institutional Review Board(irb) approvals, if applicable? [N/A] We did not use crowdsourcing or conduct researchwith human subjects.(c)Did you include the estimated hourly wage paid to participants and the total amount spenton participant compensation? [N/A] We did not use crowdsourcing or conduct researchwith human subjects.ReferencesAlexandrov, A., Benidis, K., Bohlke-Schneider, M., Flunkert, V., Gasthaus, J., Januschowski, T.,Maddix, D. C., Rangapuram, S., Salinas, D., Schulz, J., et al. (2020). GluonTS: Probabilistic andneural time series modeling in Python. The Journal of Machine Learning Research , 21(1):4629–4634.Ali, M. (2020). PyCaret: An open source, low-code machine learning library in Python. https://www.pycaret.org .Assimakopoulos, V. and Nikolopoulos, K. (2000). The Theta model: A decomposition approach toforecasting. International journal of forecasting , 16(4):521–530.Benidis, K., Rangapuram, S. S., Flunkert, V., Wang, Y., Maddix, D., Turkmen, C., Gasthaus, J.,Bohlke-Schneider, M., Salinas, D., Stella, L., et al. (2022). Deep learning for time series forecasting:Tutorial and literature survey. ACM Computing Surveys , 55(6):1–36.Borchert, O., Salinas, D., Flunkert, V., Januschowski, T., and Günnemann, S. (2022). Multi-objectivemodel selection for time series forecasting. arXiv preprint arXiv:2202.08485 .Box, G. E., Jenkins, G. M., Reinsel, G. C., and Ljung, G. M. (1970). Time series analysis: forecastingand control . John Wiley & Sons.12Caruana, R., Niculescu-Mizil, A., Crew, G., and Ksikes, A. (2004). Ensemble selection from librariesof models. In Proceedings of the twenty-first international conference on Machine learning , page 18.Catlin, C. (2022). AutoTS: Automated time series forecasting. https://github.com/winedarksea/AutoTS .da Silva, F. R., Vieira, A. B., Bernardino, H. S., Alencar, V. A., Pessamilio, L. R., and Barbosa, H.J. C. (2022). Automated machine learning for time series prediction. In 2022 IEEE Congress onEvolutionary Computation (CEC) , pages 1–7. IEEE.Dahl, S. M. J. (2020). TSPO: an autoML approach to time series forecasting . PhD thesis.Deng, D., Karl, F., Hutter, F., Bischl, B., and Lindauer, M. (2022). Efficient automated deep learningfor time series forecasting. In Machine Learning and Knowledge Discovery in Databases: EuropeanConference, ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part III , pages664–680. 
Springer.Erickson, N., Mueller, J., Shirkov, A., Zhang, H., Larroy, P., Li, M., and Smola, A. (2020). AutoGluon-Tabular: Robust and accurate AutoML for structured data. arXiv preprint arXiv:2003.06505 .Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. (2015). Efficientand robust automated machine learning. Advances in neural information processing systems , 28.Garza, F., Mergenthaler Canseco, M., Challu, C., and Olivares, K. G. (2022). StatsForecast: Light-ning fast forecasting with statistical and econometric models. https://github.com/Nixtla/statsforecast (v1.15.0).Gastinger, J., Nicolas, S., Stepić, D., Schmidt, M., and Schülke, A. (2021). A study on ensemblelearning for time series forecasting and the need for meta-learning. In 2021 International JointConference on Neural Networks (IJCNN) , pages 1–8. IEEE.Gijsbers, P., Bueno, M. L., Coors, S., LeDell, E., Poirier, S., Thomas, J., Bischl, B., and Vanschoren, J.(2022). AMLB: An AutoML benchmark. arXiv preprint arXiv:2207.12560 .Gneiting, T. and Katzfuss, M. (2014). Probabilistic forecasting. Annual Review of Statistics and ItsApplication , 1:125–151.Godahewa, R., Bergmeir, C., Webb, G. I., Hyndman, R. J., and Montero-Manso, P. (2021). Monashtime series forecasting archive. In Neural Information Processing Systems Track on Datasets andBenchmarks .Hong, T., Pinson, P., Wang, Y., Weron, R., Yang, D., and Zareipour, H. (2020). Energy forecasting: Areview and outlook. IEEE Open Access Journal of Power and Energy , 7:376–388.Hyndman, R., Koehler, A. B., Ord, J. K., and Snyder, R. D. (2008). Forecasting with exponentialsmoothing: the state space approach . Springer Science & Business Media.Hyndman, R. J. and Athanasopoulos, G. (2018). Forecasting: principles and practice . OTexts.Hyndman, R. J. and Khandakar, Y. (2008). Automatic time series forecasting: the forecast packagefor R. Journal of statistical software , 27:1–22.Januschowski, T., Gasthaus, J., Wang, Y., Salinas, D., Flunkert, V., Bohlke-Schneider, M., and Callot,L. (2020). Criteria for classifying forecasting methods. International Journal of Forecasting ,36(1):167–177.13Januschowski, T., Wang, Y., Torkkola, K., Erkkilä, T., Hasson, H., and Gasthaus, J. (2022). Forecastingwith trees. International Journal of Forecasting , 38(4):1473–1481.Javeri, I. Y., Toutiaee, M., Arpinar, I. B., Miller, J. A., and Miller, T. W. (2021). Improving neuralnetworks for time-series forecasting using data augmentation and AutoML. In 2021 IEEE SeventhInternational Conference on Big Data Computing Service and Applications (BigDataService) , pages1–8. IEEE.Joblib Development Team (2020). Joblib: Running Python functions as pipeline jobs. https://joblib.readthedocs.io/ (v1.2.0).Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). Lightgbm:A highly efficient gradient boosting decision tree. Advances in Neural Information ProcessingSystems , 30.Kurian, J. J., Dix, M., Amihai, I., Ceusters, G., and Prabhune, A. (2021). BOAT: A Bayesian optimiza-tion autoML time-series framework for industrial applications. In 2021 IEEE Seventh InternationalConference on Big Data Computing Service and Applications (BigDataService) , pages 17–24. IEEE.LeDell, E. and Poirier, S. (2020). H2O AutoML: Scalable automatic machine learning. In Proceedingsof the AutoML Workshop at ICML , volume 2020.Lim, B., Arık, S. Ö., Loeff, N., and Pfister, T. (2021). Temporal fusion transformers for interpretablemulti-horizon time series forecasting. 
International Journal of Forecasting , 37(4):1748–1764.Makridakis, S. and Hibon, M. (2000). The M3 competition: Results, conclusions and implications.International journal of forecasting , 16(4):451–476.Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2018). The M4 competition: Results, findings,conclusion and way forward. International Journal of Forecasting , 34(4):802–808.Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2022). The M5 competition: Background,organization, and implementation. International Journal of Forecasting , 38(4):1325–1336.Meisenbacher, S., Turowski, M., Phipps, K., Rätz, M., Müller, D., Hagenmeyer, V., and Mikut, R.(2022). Review of automated time series forecasting pipelines. Wiley Interdisciplinary Reviews:Data Mining and Knowledge Discovery , 12(6):e1475.Nie, Y., Nguyen, N. H., Sinthong, P., and Kalagnanam, J. (2023). A time series is worth 64 words:Long-term forecasting with transformers. International Conference on Learning Representations .Nikolopoulos, K., Punia, S., Schäfers, A., Tsinopoulos, C., and Vasilakis, C. (2021). Forecasting andplanning during a pandemic: COVID-19 growth rates, supply chain disruptions, and governmen-tal decisions. European journal of operational research , 290(1):99–115.Nixtla (2023). MLForecast scalable machine learning for time series forecasting. v0.7.2.Olson, R. S. and Moore, J. H. (2016). TPOT: A tree-based pipeline optimization tool for automatingmachine learning. In Workshop on automatic machine learning , pages 66–74. PMLR.Oreshkin, B. N., Carpov, D., Chapados, N., and Bengio, Y. (2020). N-beats: Neural basis expansionanalysis for interpretable time series forecasting.pandas development team (2020). pandas-dev/pandas: Pandas. https://doi.org/10.5281/zenodo.3509134 (v1.5.3).14Ratcliff, R. (1979). Group reaction time distributions and an analysis of distribution statistics.Psychological bulletin , 86(3):446.Rojat, T., Puget, R., Filliat, D., Del Ser, J., Gelin, R., and Díaz-Rodríguez, N. (2021). Explainableartificial intelligence (XAI) on timeseries data: A survey. arXiv preprint arXiv:2104.00950 .Salinas, D., Flunkert, V., Gasthaus, J., and Januschowski, T. (2020). DeepAR: Probabilistic forecastingwith autoregressive recurrent networks. International Journal of Forecasting , 36(3):1181–1191.Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., Cancedda, N., andScialom, T. (2023). Toolformer: Language models can teach themselves to use tools. arXiv preprintarXiv:2302.04761 .Semenoglou, A.-A., Spiliotis, E., Makridakis, S., and Assimakopoulos, V. (2021). Investigating theaccuracy of cross-learning time series forecasting methods. International Journal of Forecasting ,37(3):1072–1084.Shah, S. Y., Patel, D., Vu, L., Dang, X.-H., Chen, B., Kirchner, P., Samulowitz, H., Wood, D., Bramble,G., Gifford, W. M., et al. (2021). AutoAI-TS: AutoAI for time series forecasting. In Proceedings ofthe 2021 International Conference on Management of Data , pages 2584–2596.Shi, X., Mueller, J., Erickson, N., Li, M., and Smola, A. (2021). Multimodal AutoML on structuredtables with text fields. In 8th ICML Workshop on Automated Machine Learning (AutoML) .Stankeviciute, K., M Alaa, A., and van der Schaar, M. (2021). Conformal time-series forecasting.Advances in Neural Information Processing Systems , 34:6216–6228.Syntetos, A. A., Boylan, J. E., and Disney, S. M. (2009). Forecasting for inventory planning: a 50-yearreview. 
Journal of the Operational Research Society, 60:S149–S160.

Thornton, C., Hutter, F., Hoos, H. H., and Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 847–855.

Ting, K. M. and Witten, I. H. (1997). Stacking bagged and dagged models.

Tornede, A., Deng, D., Eimer, T., Giovanelli, J., Mohan, A., Ruhkopf, T., Segel, S., Theodorakopoulos, D., Tornede, T., Wachsmuth, H., et al. (2023). AutoML in the age of large language models: Current challenges, future opportunities and risks. arXiv preprint arXiv:2306.08107.

Tripathy, D. S. and Prusty, B. R. (2021). Forecasting of renewable generation for applications in smart grid power systems. In Advances in Smart Grid Power System, pages 265–298. Elsevier.

Van Kuppevelt, D., Meijer, C., Huber, F., van der Ploeg, A., Georgievska, S., and van Hees, V. T. (2020). Mcfly: Automated deep learning on time series. SoftwareX, 12:100548.

Wang, X., Hyndman, R. J., Li, F., and Kang, Y. (2022). Forecast combinations: an over 50-year review. International Journal of Forecasting.

Wen, R., Torkkola, K., Narayanaswamy, B., and Madeka, D. (2017). A multi-horizon quantile recurrent forecaster. arXiv preprint arXiv:1711.11053.

Xu, C. and Xie, Y. (2021). Conformal prediction interval for dynamic time-series. In International Conference on Machine Learning, pages 11559–11569. PMLR.

Zimmer, L., Lindauer, M., and Hutter, F. (2021). Auto-PyTorch: Multi-fidelity metalearning for efficient and robust AutoDL. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(9):3079–3090.

A Supplementary Materials

A.1 Evaluation Metrics

MASE. Mean absolute scaled error is the standard metric for evaluating the accuracy of point forecasts:
MASE = \frac{1}{N} \sum_{i=1}^{N} \frac{\frac{1}{H} \sum_{h=1}^{H} |y_{i,T+h} - \hat{y}_{i,T+h}|}{\sum_{t=1}^{T-s} |y_{i,t+s} - y_{i,t}|}
MASE is scale-invariant and does not suffer from the limitations of other metrics, such as being undefined when the target time series equals zero (Hyndman and Athanasopoulos, 2018). We compute the metric using the median (0.5 quantile) forecast produced by each model.

wQL. Weighted quantile loss for a single quantile level q is defined as
wQL[q] = \frac{2 \sum_{i=1}^{N} \sum_{h=1}^{H} \left[ q \cdot \max(y_{i,T+h} - \hat{y}^{q}_{i,T+h}, 0) + (1-q) \cdot \max(\hat{y}^{q}_{i,T+h} - y_{i,T+h}, 0) \right]}{\sum_{i=1}^{N} \sum_{h=1}^{H} |y_{i,T+h}|}
In our experiments, we report the mean wQL averaged over 9 quantile levels Q = \{0.1, 0.2, \ldots, 0.9\}:
wQL = \frac{1}{|Q|} \sum_{q \in Q} wQL[q]

A.2 Reproducibility

We ran all experiments using AutoMLBenchmark (Gijsbers et al., 2022). We provide a fork of AMLB that includes all scripts necessary to reproduce the results from our paper in the following GitHub repository: https://github.com/shchur/automlbenchmark/tree/autogluon-timeseries-automl23/autogluon_timeseries_automl23

A.3 Model Configuration

We trained the baseline models DeepAR, TFT, AutoARIMA, AutoETS, AutoTheta with the default hyperparameter configurations provided by the respective libraries. For DeepAR and TFT, the last prediction_length time steps of each time series were reserved as a validation set. Both models were trained for the full duration of 4 hours, saving the parameters and evaluating the validation loss at each epoch. The parameters achieving the lowest validation loss were then used for prediction.
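As a concrete reading of the MASE and wQL definitions in A.1, the following hypothetical NumPy sketch evaluates a set of forecasts; it mirrors the formulas above (including the seasonal-naive scale in the MASE denominator as written there) and is not the code used in the benchmark.

```python
# Hypothetical NumPy reading of the metrics in A.1.
#   y_hist: (N, T) observed history; y_true, y_pred: (N, H) held-out values
#   and median forecasts; q_pred: dict mapping quantile level q to an (N, H) array.
import numpy as np

def mase(y_hist, y_true, y_pred, s=1):
    per_series_mae = np.abs(y_true - y_pred).mean(axis=1)                 # (1/H) sum_h |.|
    seasonal_scale = np.abs(y_hist[:, s:] - y_hist[:, :-s]).sum(axis=1)   # sum_t |y_{t+s} - y_t|
    return float(np.mean(per_series_mae / seasonal_scale))

def wql(y_true, q_pred, quantiles=(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)):
    denom = np.abs(y_true).sum()
    losses = []
    for q in quantiles:
        diff = y_true - q_pred[q]
        pinball = q * np.clip(diff, 0, None) + (1 - q) * np.clip(-diff, 0, None)
        losses.append(2 * pinball.sum() / denom)
    return float(np.mean(losses))
```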
No HPO was performed for these two models, as AutoPyTorch already trains similardeep learning models with HPO.For AutoPyTorch, we used the reference implementation by the authors.3We set the tar-get metric to "mean_MASE_forecasting" ,budget_type="epochs" ,min_budget=5 ,max_budget=50 ,and resampling_strategy=HoldoutValTypes.time_series_hold_out_validation . We also settorch_num_threads to 16 (the number of vCPU cores).In our experiments, we used AG–TS v0.8.2, the latest release at the time of publication. Weused the "best_quality" presets and set eval_metric to either "MASE" or"mean_wQuantileLoss" ,depending on the experiment. All other parameters of the TimeSeriesPredictor were set totheir default values. The "best_quality" presets include the following models: AutoETS, Au-toARIMA, Theta (from StatsForecast), DeepAR, PatchTST, TFT (from GluonTS), DirectTabular,RecursiveTabular (wrappers around AutoGluon–Tabular and MLForecast), plus the baseline meth-ods Naive and SeasonalNaive. The non-default hyperparameters of the individual models used bythebest_quality presets are provided in Table 6.3https://github.com/dengdifan/Auto-PyTorch/blob/ecml22_apt_ts/examples/APT-TS/APT_task.py16The guiding principle for developing the presets for AG–TS can be summarized as “keep defaultswhenever possible, except the cases where the defaults are clearly suboptimal”. For example, wesetallowmean=True for AutoARIMA to allow this model to handle time series with non-zeromean. For deep learning models, we increase the batch size from 32 to 64 since larger batch sizestypically lead to faster convergence for all deep learning models. The context_length is capped ata minimum value because the default setting context_length=prediction_length can result inmodels that ignore most of the history if prediction_length is very short. For PatchTST, we setthecontext_length to the value used in the respective publication (Nie et al., 2023).The versions of frameworks used in our experiments are listed in Table 7.Table 6: Non-default hyperparameters that AutoGluon sets for the underlying models. The remainingparameters are all set to their defaults in the respective libraries. Models not listed here(Naive, SeasonalNaive, AutoETS, DirectTabular, Theta) have all their hyperparameters set tothe default values.Model Hyperparameter ValueAutoARIMA allowmean Trueapproximation TrueDeepAR batch_size 64context_length max(10, 2 * prediction_length)num_samples 250PatchTST batch_size 64context_length 96TFT batch_size 64context_length max(64, 2 * prediction_length)RecursiveTabular tabular_hyperparameters {"GBM", "NN_TORCH"}Table 7: Versions of the frameworks used during evaluation.Framework VersionAutoGluon 0.8.2AutoPyTorch 0.2.1GluonTS 0.13.2MLForecast 0.7.3StatsForecast 1.5.0Python 3.9PyTorch 1.13.1+cpu17Table 8: Statistics of the benchmark datasets used in our experimental evaluation. Frequency isrepresented by pandas offset aliases. 
Seasonality depends on the frequency, and is used toconfigure statistical models and compute the MASE metric.Dataset # series # time steps Prediction length Frequency SeasonalityCar Parts 2,674 104,286 12 M 12CIF 2016 72 6,244 12 M 12COVID 266 48,412 30 D 7Electricity Hourly 321 8,428,176 48 H 24Electricity Weekly 321 47,508 8 W 1FRED-MD 107 76,612 12 M 12Hospital 767 55,224 12 M 12KDD Cup 2018 270 2,929,404 48 H 24M1 Monthly 617 44,892 18 M 12M1 Quarterly 203 8,320 8 Q 4M1 Yearly 181 3,429 6 Y 1M3 Monthly 1,428 141,858 18 M 12M3 Other 174 11,933 8 Q 1M3 Quarterly 756 30,956 8 Q 4M3 Yearly 645 14,449 6 Y 1M4 Daily 4,227 9,964,658 14 D 7M4 Hourly 414 353,500 48 H 24M4 Monthly 48,000 10,382,411 18 M 12M4 Quarterly 24,000 2,214,108 8 Q 4M4 Weekly 359 366,912 13 W 1M4 Yearly 22,974 707,265 6 Y 1NN5 Daily 111 81,585 56 D 7NN5 Weekly 111 11,655 8 W 1Pedestrian Counts 66 3,129,178 48 H 24Tourism Monthly 366 100,496 24 M 12Tourism Quarterly 427 39,128 8 Q 4Tourism Yearly 518 10,685 4 Y 1Vehicle Trips 262 45,253 7 D 7Web Traffic Weekly 145,063 15,376,678 8 W 118Table 9: Point forecast accuracy, as measured by MASE (lower is better). For non-deterministic methods(DeepAR, TFT, AutoPyTorch, AutoGluon) we report the mean and standard deviation of thescores computed over 5 random seeds. "d.n.f." denotes cases where a method did not generatea forecast in 6 hours. "N/A" denotes model failure.SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoPyTorch AutoGluonCar Parts 1.127 1.118 1.133 1.208 1.052 0.749 (0.001) 0.751 (0.002) 0.746 (0.0) 0.747 (0.0)CIF 2016 1.289 1.069 0.898 1.006 0.945 1.278 (0.088) 1.372 (0.085) 1.023 (0.069) 1.073 (0.006)COVID 8.977 6.029 5.907 7.719 5.884 7.166 (0.334) 5.192 (0.211) 4.911 (0.086) 5.805 (0.0)Electricity Hourly 1.405 d.n.f. 1.465 d.n.f. d.n.f. 1.251 (0.006) 1.389 (0.025) 1.420 (0.123) 1.227 (0.003)Electricity Weekly 3.037 3.009 3.076 3.113 3.077 2.447 (0.211) 2.861 (0.122) 2.322 (0.277) 1.892 (0.0)FRED-MD 1.101 0.478 0.505 0.564 0.498 0.634 (0.038) 0.901 (0.086) 0.682 (0.058) 0.656 (0.0)Hospital 0.921 0.820 0.766 0.764 0.753 0.771 (0.008) 0.814 (0.012) 0.770 (0.003) 0.741 (0.001)KDD Cup 2018 0.975 d.n.f. 0.988 1.010 d.n.f. 
0.841 (0.036) 0.844 (0.065) 0.764 (0.047) 0.709 (0.026)M1 Monthly 1.314 1.152 1.083 1.092 1.045 1.117 (0.029) 1.534 (0.063) 1.278 (0.115) 1.235 (0.001)M1 Quarterly 2.078 1.770 1.665 1.667 1.622 1.742 (0.028) 2.099 (0.108) 1.813 (0.056) 1.615 (0.0)M1 Yearly 4.894 3.870 3.950 3.659 3.769 3.674 (0.161) 4.318 (0.122) 3.407 (0.078) 3.371 (0.007)M3 Monthly 1.146 0.934 0.867 0.855 0.845 0.960 (0.017) 1.062 (0.04) 0.956 (0.083) 0.822 (0.0)M3 Other 3.089 2.245 1.801 2.009 1.769 2.061 (0.182) 1.926 (0.028) 1.871 (0.024) 1.837 (0.004)M3 Quarterly 1.425 1.419 1.121 1.119 1.096 1.198 (0.037) 1.176 (0.036) 1.180 (0.032) 1.057 (0.002)M3 Yearly 3.172 3.159 2.695 2.608 2.627 2.694 (0.096) 2.818 (0.019) 2.691 (0.026) 2.520 (0.002)M4 Daily 1.452 1.153 1.228 1.149 1.145 1.145 (0.026) 1.176 (0.018) 1.152 (0.009) 1.156 (0.0)M4 Hourly 1.193 1.029 1.609 2.456 1.157 1.484 (0.151) 3.391 (0.442) 1.345 (0.404) 0.807 (0.001)M4 Monthly 1.079 0.812 0.803 0.834 0.780 0.933 (0.01) 0.947 (0.005) 0.851 (0.025) 0.782 (0.0)M4 Quarterly 1.602 1.276 1.167 1.183 1.148 1.367 (0.171) 1.277 (0.015) 1.176 (0.022) 1.139 (0.0)M4 Weekly 2.777 2.355 2.548 2.608 2.375 2.418 (0.026) 2.625 (0.038) 2.369 (0.177) 2.035 (0.001)M4 Yearly 3.966 3.720 3.077 3.085 3.032 3.858 (0.694) 3.220 (0.097) 3.093 (0.041) 3.019 (0.001)NN5 Daily 1.011 0.935 0.870 0.878 0.859 0.812 (0.01) 0.789 (0.004) 0.807 (0.021) 0.761 (0.004)NN5 Weekly 1.063 0.998 0.980 0.963 0.977 0.915 (0.085) 0.884 (0.012) 0.865 (0.025) 0.860 (0.0)Pedestrian Counts 0.369 d.n.f. 0.553 d.n.f. d.n.f. 0.309 (0.005) 0.373 (0.01) 0.354 (0.024) 0.312 (0.009)Tourism Monthly 1.631 1.585 1.529 1.666 1.469 1.461 (0.025) 1.719 (0.08) 1.495 (0.009) 1.442 (0.0)Tourism Quarterly 1.699 1.655 1.578 1.648 1.539 1.599 (0.062) 1.830 (0.047) 1.647 (0.034) 1.537 (0.002)Tourism Yearly 3.552 4.044 3.183 2.992 3.231 3.476 (0.165) 2.916 (0.197) 3.004 (0.053) 2.946 (0.007)Vehicle Trips 1.302 1.427 1.301 1.284 1.203 1.162 (0.016) 1.227 (0.02) 1.162 (0.019) 1.113 (0.0)Web Traffic Weekly 1.066 1.189 1.207 1.108 1.068 N/A 0.973 (0.022) 0.962 (0.01) 0.938 (0.0)19Table 10: Probabilistic forecast accuracy, as measured by wQL (lower is better). For non-deterministicmethods (DeepAR, TFT, AutoGluon) we report the mean and standard deviation of the scorescomputed over 5 random seeds. "d.n.f." denotes cases where a method did not generate aforecast in 6 hours. "N/A" denotes model failure.SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoGluonCar Parts 1.717 1.589 1.338 1.367 1.324 0.963 (0.009) 0.878 (0.004) 0.923 (0.0)CIF 2016 0.031 0.017 0.039 0.027 0.028 0.114 (0.024) 0.010 (0.002) 0.019 (0.0)COVID 0.140 0.030 0.046 0.094 0.046 0.072 (0.02) 0.031 (0.003) 0.030 (0.0)Electricity Hourly 0.108 d.n.f. 0.100 d.n.f. d.n.f. 0.081 (0.002) 0.097 (0.001) 0.076 (0.0)Electricity Weekly 0.141 0.138 0.144 0.146 0.141 0.123 (0.041) 0.118 (0.011) 0.088 (0.0)FRED-MD 0.104 0.056 0.050 0.057 0.054 0.054 (0.021) 0.114 (0.011) 0.056 (0.0)Hospital 0.062 0.058 0.053 0.055 0.053 0.053 (0.001) 0.054 (0.001) 0.051 (0.0)KDD Cup 2018 0.489 d.n.f. 0.550 0.553 d.n.f. 
0.363 (0.014) 0.488 (0.054) 0.323 (0.014)M1 Monthly 0.153 0.146 0.163 0.159 0.152 0.136 (0.008) 0.224 (0.016) 0.135 (0.0)M1 Quarterly 0.119 0.088 0.081 0.082 0.083 0.084 (0.003) 0.093 (0.006) 0.090 (0.0)M1 Yearly 0.184 0.160 0.139 0.137 0.142 0.142 (0.029) 0.127 (0.004) 0.134 (0.001)M3 Monthly 0.124 0.102 0.093 0.095 0.092 0.098 (0.001) 0.109 (0.003) 0.089 (0.0)M3 Other 0.047 0.035 0.032 0.035 0.031 0.036 (0.002) 0.033 (0.001) 0.031 (0.0)M3 Quarterly 0.083 0.079 0.069 0.070 0.068 0.073 (0.001) 0.071 (0.001) 0.065 (0.0)M3 Yearly 0.141 0.162 0.129 0.128 0.128 0.117 (0.002) 0.133 (0.001) 0.114 (0.0)M4 Daily 0.030 0.023 0.025 0.023 0.023 0.023 (0.0) 0.023 (0.0) 0.022 (0.0)M4 Hourly 0.039 0.036 0.070 0.041 0.037 0.065 (0.03) 0.038 (0.002) 0.030 (0.001)M4 Monthly 0.109 0.085 0.085 0.088 0.082 0.092 (0.003) 0.089 (0.001) 0.081 (0.0)M4 Quarterly 0.099 0.082 0.079 0.079 0.076 0.084 (0.005) 0.083 (0.001) 0.075 (0.0)M4 Weekly 0.073 0.050 0.052 0.053 0.050 0.046 (0.001) 0.049 (0.001) 0.041 (0.0)M4 Yearly 0.138 0.130 0.111 0.115 0.109 0.124 (0.006) 0.116 (0.004) 0.104 (0.0)NN5 Daily 0.292 0.169 0.162 0.188 0.164 0.148 (0.002) 0.145 (0.001) 0.140 (0.0)NN5 Weekly 0.142 0.090 0.088 0.090 0.089 0.084 (0.007) 0.085 (0.001) 0.078 (0.0)Pedestrian Counts 0.675 d.n.f. 0.764 d.n.f. d.n.f. 0.230 (0.006) 0.261 (0.008) 0.238 (0.013)Tourism Monthly 0.088 0.095 0.101 0.091 0.085 0.086 (0.005) 0.103 (0.01) 0.083 (0.0)Tourism Quarterly 0.099 0.098 0.070 0.061 0.070 0.068 (0.002) 0.083 (0.005) 0.072 (0.0)Tourism Yearly 0.170 0.156 0.157 0.176 0.155 0.141 (0.016) 0.102 (0.006) 0.152 (0.0)Vehicle Trips 0.112 0.100 0.115 0.120 0.103 0.090 (0.002) 0.099 (0.005) 0.087 (0.0)Web Traffic Weekly 0.936 0.475 8·10130.503 0.474 N/A 0.223 (0.011) 0.225 (0.0)20Table 11: Average run time of each method (in minutes).Dataset SeasonalNaive AutoARIMA AutoETS AutoTheta StatEnsemble DeepAR TFT AutoPyTorch AutoGluonCar Parts 0.1 2.4 0.6 0.7 3.3 6.9 9.2 240.3 17.4CIF 2016 0.1 0.4 0.5 0.6 1.3 4.1 6.2 240.2 16.7COVID 0.1 1.4 0.5 0.7 2.3 7.9 8.8 240.4 29.3Electricity Hourly 0.2 >360 21.6 >360 >360 10.4 19.5 240.4 61.2Electricity Weekly 0.2 0.3 0.4 0.5 1.0 3.1 6.6 240.2 14.9FRED-MD 0.1 2.4 0.7 0.6 3.4 6.8 5.5 240.2 16.8Hospital 0.1 0.9 0.7 0.7 2.1 4.6 7.6 240.2 17.4KDD Cup 2018 0.1 >360 16.3 22.8 >360 12.4 11.9 240.3 56.0M1 Monthly 0.1 1.5 0.8 0.7 2.7 5.5 6.2 240.2 21.6M1 Quarterly 0.1 0.3 0.5 0.7 1.3 5.9 5.4 240.2 15.6M1 Yearly 0.1 0.3 0.4 0.4 0.9 4.2 5.2 240.2 12.9M3 Monthly 0.1 4.0 1.0 0.8 5.8 5.1 5.9 240.3 24.2M3 Other 0.1 0.3 0.4 0.4 0.9 5.0 6.0 240.2 13.6M3 Quarterly 0.1 0.5 0.6 0.7 1.6 4.6 6.0 240.3 15.7M3 Yearly 0.1 0.4 0.5 0.4 1.0 5.9 5.4 240.2 12.7M4 Daily 0.2 28.5 33.0 25.3 82.3 6.8 8.4 240.3 68.7M4 Hourly 0.1 84.9 1.8 0.8 89.5 9.2 10.9 240.2 51.2M4 Monthly 0.3 296.0 37.6 7.7 340.3 4.9 7.9 242.0 112.1M4 Quarterly 0.2 15.7 6.2 1.6 23.2 4.7 7.6 240.9 62.3M4 Weekly 0.1 0.6 0.5 1.3 2.2 5.6 7.8 240.3 20.8M4 Yearly 0.2 4.3 0.8 0.7 5.6 4.2 6.1 240.8 35.6NN5 Daily 0.1 2.5 0.5 0.6 3.3 7.3 10.9 240.3 37.4NN5 Weekly 0.1 0.3 0.4 0.4 1.0 3.6 6.4 240.2 13.7Pedestrian Counts 0.1 >360 4.9 >360 >360 13.5 16.7 240.7 56.4Tourism Monthly 0.1 10.2 0.8 0.7 13.1 4.4 7.6 240.2 26.0Tourism Quarterly 0.1 0.9 0.6 0.7 1.8 3.6 6.3 240.2 14.6Tourism Yearly 0.1 0.3 0.4 0.4 1.0 3.5 5.8 240.3 12.4Vehicle Trips 0.1 1.1 0.6 0.7 2.2 5.1 7.3 240.2 16.0Web Traffic Weekly 0.2 42.3 3.7 6.2 52.8 N/A 8.3 260.5 106.021 |
Mobility data improve forecasting of COVID-19 incidence trends using Graph Neural Networks (Extended Abstract)

Simon Witzke, simon.witzke@hpi.de, Hasso Plattner Institute, Digital Engineering Faculty, University of Potsdam
Noel Danz, noel.danz@hpi.de, Hasso Plattner Institute, Digital Engineering Faculty, University of Potsdam
Katharina Baum, katharina.baum@hpi.de, Department of Mathematics and Computer Science, Free University Berlin; Hasso Plattner Institute, Digital Engineering Faculty, University of Potsdam
Bernhard Y. Renard, bernhard.renard@hpi.de, Hasso Plattner Institute, Digital Engineering Faculty, University of Potsdam

ABSTRACT
The COVID-19 pandemic has had a considerable global impact over the last few years. Many efforts were made to understand and estimate its development. The availability of large amounts of data, including mobility data, has led to numerous Graph Neural Networks (GNNs) being proposed to leverage this data and forecast case numbers for the short-term future. However, information about trend developments, especially where trends reverse direction, is crucial for informing decisions. GNNs may be able to use information from regions where trends change first to improve predictions for locations with delays. We consider the first omicron wave in Germany at the end of 2021 and compare a heterogeneous GNN using mobility data with a model without spatial information. We observe that, for this period, mobility data significantly improve forecasts and, specifically, that improvements occur earlier in time. Using GNNs and mobility data enables leveraging information from counties affected earlier to improve forecasts for counties affected later. We conclude that such performance improvements could be transferred to counties with earlier change points by also including neighboring nations in the graph structure. Further, we emphasize the need for systematic contextual evaluation of GNN-based models for forecasting pandemic trends.

KEYWORDS
mobility data, trend estimation, graph neural networks, COVID-19

ACM Reference Format:
Simon Witzke, Noel Danz, Katharina Baum, and Bernhard Y. Renard. 2023. Mobility data improve forecasting of COVID-19 incidence trends using Graph Neural Networks (Extended Abstract). In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epidemiology meets Data Mining and Knowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 5 pages.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA
© 2023 Copyright held by the owner/author(s).

1 INTRODUCTION
Spreading from Wuhan, China, in late 2019, the COVID-19 pandemic has held humanity in its grasp until recently [35]. The pandemic has had drastic consequences, with estimates of almost fifteen million excess deaths in 2020 and 2021 alone [20] and considerable economic and social damage [5]. The global scale of the pandemic led to large amounts of data on different modalities related to epidemic spread being shared, such as mobility and sequencing data. These have been made available to support the development of forecasting methods intended to inform decision makers concerning potential interventions [21, 23].
Human mobility is a centraldriver in the geographical spread of epidemics caused by air-bornediseases [3], enabling the virus to travel between regions and, inthe case of COVID-19, rapidly infecting most of the world. Dur-ing the pandemic, researchers have combined mobility networkswith mechanistic models to understand the influences of changedmobility behavior and further highlight its importance for the pan-demic’s development [4, 30]. Schlosser et al.[30] have shown thatlockdowns strongly impacted mobility structures during the firstCOVID-19 wave in Germany and that the associated reduction inmobility can slow the virus’ geographical spread.Various spatio-temporal approaches using Recurrent Neural Net-works and EXtreme Gradient Boosting have been proposed to fore-cast county-level COVID-19 metrics [11, 18, 22, 34]. However, recentadvances in deep graph learning have led to Graph Neural Networks(GNNs) gaining popularity in domains as diverse as traffic forecast-ing [12] or computational chemistry [26]. Human mobility betweengeographical regions can naturally be represented as graphs, wherenodes represent locations, such as counties, and edges movementsbetween them. Consequently, numerous approaches that try lever-aging the power of GNNs to forecast COVID-19-related metrics,such as cases, deaths, and hospitalizations, have been proposed[9, 10, 13, 24]. These approaches have shown promising results inproviding insights into the short-term development of the COVID-19 pandemic. However, informing decision makers about a trendforecast rather than exact numbers might be more beneficial. Com-municating trends can be easier than directly communicating casesor deaths. Trends are strong indicators of relevant changes in theepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Witzke et al.pandemic development and a need for interventions, and their in-terpretation is straightforward. For example, the US Governmentused a 14-day downward trend in COVID-19 cases as a conditionfor potential re-openings [6]. For this purpose, systematically eval-uating GNN-based methods’ ability to correctly forecast trends isessential. Accurate forecasts are especially relevant for phases withchange points, where locations successively experience a changein their trend, such as the peak of a wave.There are secondary time series modalities, such as Googlesearch trends and smart body temperature sensors. These modali-ties potentially reflect changes in trends faster than case numbers.This has been successfully leveraged by Kogan et al.[15] and Stol-erman et al.[31] to develop early-warning systems in the UnitedStates that detect such trend signals up to weeks in advance. Simi-larly, GNNs may utilize nodes with leading time series to improveforecasts for nodes with lagging time series by passing informationvia the underlying graph, i.e., information from locations wherechanges occur earlier might be beneficial for forecasting locationswhere similar changes are delayed.In this work, we investigate whether mobility data can improveforecasts of 14-day linear trends of the COVID-19 incidence. Weevaluate county-level forecasts of a heterogeneous GNN for loca-tions experiencing a change point during the second half of the firstomicron wave at the end of 2021 in Germany [19], where cases arebeginning to decline. We further analyze whether our GNN can uti-lize information from counties with leading changes for forecastingcounties that experience similar changes later. 
Finally, we discussthe implications for developing and evaluating future GNN-basedmethods for pandemic forecasting.2 MATERIALS AND METHODS2.1 Graph ConstructionInspired by Kapoor et al.[13], we construct heterogeneous spatio-temporal graph samples with distinct edge types for spatial andtemporal connections. We design each graph sample to contain 15weighted mobility subgraphs, representing movements betweenthe 400 German counties as nodes at successive points in time,t−14,...,t . We use spatial edges to express these mobility graphs.The directed but unweighted temporal edges then link each countyat a time point t−14,...,t to its representations on up to sevenprevious days, connecting the spatial components of the graph.Therefore, each graph sample represents a single point in timewhile still including historical information from previous days.We use mobility data [16, 28] to build the spatial edges. The useddataset contains the daily movements of nearly one million mobilephone users in Germany and is non-public due to privacy concerns.The number of mobile phones sending location information variesdaily, so we normalize the movements by the daily device count andthen re-scale all movements with the average daily device count.We find that the daily mobility networks’ adjacency matrices areprimarily symmetric, i.e., the opposing edges are highly similar.Therefore, we convert the directed into undirected graphs by sum-ming the weights of the edges in both directions. Finally, we denoisethe mobility graphs by removing 30% of the non-zero edges withthe lowest edge weights, where edges on the thresholding boundaryare removed randomly.The node features of our graph consist of dynamic and staticfeatures. We obtain data on the COVID-19 case numbers startingin January 2020 from the Robert Koch Institute [27] and aggregatethe data on the county level, resulting in a total of 400 time series.Countering reporting inaccuracies, we calculate the county-level7-day incidence, a right-aligned 7-day moving sum normalized bythe county population and then scaled by 100,000. Each node attimethas the 7-day incidence of the previous seven days until dayt−6as node features. Additionally, we include a cyclical sine/cosineencoding [33] for the weekday and month. This cyclical encodingaims to improve the learning of short and long-term seasonal effects.Lastly, we use the population density of each county as the onlystatic feature. We collect the census data, such as population sizeand population density, from the German Federal Office of Statistics[17].As prediction targets, we use 14-day trends in the COVID-19incidence obtained from linear approximations. A linear approxi-mation has the advantage that it allows us to estimate the strengthof a trend and not only its direction compared to converting theproblem to a classification task. For this purpose, we smooth the7-day incidence time series for the whole dataset to remove remain-ing artifacts, using a center-aligned 7-day moving average. For eachcounty and time point t, we perform a linear regression on thissmoothed time series with the known time series values at timepointst+1,...,t+14as the dependent variable and the number ofdays from time tinto the future h∈1,...,14as the independentvariable. 
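To make this target construction concrete, a minimal sketch is shown below; the array layout and names are placeholders for illustration, not the authors' code.

```python
# Illustrative construction of the 14-day linear trend targets described above.
import numpy as np

def trend_targets(smoothed_incidence: np.ndarray, horizon: int = 14) -> np.ndarray:
    """smoothed_incidence: (n_counties, n_days) array of the 7-day-smoothed
    COVID-19 incidence. Returns, for every county and day t, the slope of a
    least-squares line fitted to the values at days t+1, ..., t+horizon."""
    n_counties, n_days = smoothed_incidence.shape
    h = np.arange(1, horizon + 1)                      # independent variable 1..14
    slopes = np.full((n_counties, n_days), np.nan)
    for t in range(n_days - horizon):
        future = smoothed_incidence[:, t + 1 : t + 1 + horizon]   # dependent variable
        # np.polyfit fits one line per column; transpose to shape (horizon, n_counties).
        slopes[:, t] = np.polyfit(h, future.T, deg=1)[0]
    return slopes
```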
We then use the slope of this regression, representing alinear trend of the COVID-19 incidence over the next 14 days fromtime pointt, as the ground truth for our forecasts.2.2 Graph Neural NetworkOur GNN is similar to the network used by Kapoor et al.[13] andbased on Kipf and Welling’s[14] graph convolutional layer. Weextend this architecture by using relational graph convolutionallayers (R-GCN), an extension for heterogeneous graphs proposed bySchlichtkrull et al.[29] that allows feature updates via multiple edgetypes, where each edge type has its own set of learned parameters.First, the node features are passed through an initial encoding layerfollowed by a dropout with a probability of 0.2. Next is a three-layer GNN, each with a dropout probability of 0.5. Like Kapooret al.[13], we add skip-connections and concatenate the output ofthe initial encoding layer to the output of each R-GCN layer topreserve local information and counter over-smoothing. Lastly, weuse a multi-layer perceptron with a single hidden layer to producethe final prediction. We note that for each graph sample, we onlyuse the embeddings of the most recent spatial subgraph to obtain asingle forecast for all 400 counties. All layers have 32 hidden unitsand use a ReLU as the non-linear activation function, except forthe last linear layer, which has 16 hidden units. The output layeruses no activation function, allowing positive and negative trendpredictions. We implement our GNN in PyTorch [25] and PyTorchGeometric [7].2.3 Training setupWe use a mean squared error (MSE) regression loss and an ADAMoptimizer with a learning rate of 1.33e−4and weight decay of 1e−5.Mobility data improve forecasting of COVID-19 incidence trends using Graph Neural Networks (Extended Abstract) epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USAWe employ a batch size of 128 and train for a maximum of 250epochs with early stopping, with a patience of 10 epochs withoutimprovement.We adopt a rolling-origin evaluation approach [32] where weextend the training set by the test sample of the previous iteration.We test from November 10, 2021, until December 19, 2021, with allprevious data being used for training and validation. We use alldata from January 15, 2020, for training and validation. Therefore,the training and validation set contains 665 samples for the firsttest sample and grows to 704 samples for the last test sample. Ourvalidation set consists of the day after the last training sample andis used for early stopping and model selection. We always havea 17-day gap between the validation and test samples to avoidinformation leakage to the test sample while also mimicking areal-world situation where we use all the available data to make aforecast.To counter the sparseness of training data and avoid conditioningour model too strongly on periods that contain limited information,such as summer periods with low incidences, we oversample thetraining set by multiplicating specific samples. We combine theglobal German COVID-19 incidence time series with an exponentialfunction, assigning higher importance to more recent dates. Weconvert the result into a discrete probability distribution whereeach sample is assigned a probability. We then draw from thisdistribution with replacement. We use an oversampling rate of 10.2.4 Evaluation ScenarioWhile we train our models using an MSE regression loss, this metricis not optimal for evaluating our models’ performance. 
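To illustrate the architecture of Section 2.2 and the optimizer settings of Section 2.3, here is a minimal PyTorch Geometric sketch. It uses RGCNConv with two relations (spatial and temporal edges) but, unlike the authors' model, ignores the mobility edge weights; the input dimension is only an assumed example.

```python
# A minimal sketch of the model in Section 2.2; an illustration, not the authors' code.
import torch
import torch.nn.functional as F
from torch import nn
from torch_geometric.nn import RGCNConv

class TrendGNN(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 32, num_relations: int = 2):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden)
        self.conv1 = RGCNConv(hidden, hidden, num_relations)
        self.conv2 = RGCNConv(2 * hidden, hidden, num_relations)
        self.conv3 = RGCNConv(2 * hidden, hidden, num_relations)
        # Two-layer MLP head with 16 hidden units and no output activation.
        self.head = nn.Sequential(nn.Linear(2 * hidden, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x, edge_index, edge_type):
        h0 = F.dropout(F.relu(self.encoder(x)), p=0.2, training=self.training)
        h = h0
        for conv in (self.conv1, self.conv2, self.conv3):
            h = F.dropout(F.relu(conv(h, edge_index, edge_type)), p=0.5, training=self.training)
            h = torch.cat([h, h0], dim=-1)   # skip connection to the encoder output
        return self.head(h).squeeze(-1)       # one trend forecast per node

model = TrendGNN(in_dim=12)  # e.g. 7 incidence lags + cyclical encodings + density (assumed)
optimizer = torch.optim.Adam(model.parameters(), lr=1.33e-4, weight_decay=1e-5)
loss_fn = nn.MSELoss()
```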
Different counties experience the considered phase of the pandemic differently, and a metric that depends on the range of the trend values could bias our evaluation. Therefore, we evaluate the models' performance using the Mean Absolute Percentage Error (MAPE) (Appendix A.1) and the symmetric Mean Absolute Percentage Error (sMAPE) (Appendix A.2). Further, while MAPE and sMAPE provide insight into the error in the magnitude of the trend, we are also interested in the model's ability to predict the direction of the trend. For this purpose, we evaluate our models with an adaptation of the Mean Directional Accuracy (MDA) (Appendix A.3).

To investigate whether our models can leverage mobility data to improve predictions in counties with lagging change points, we consider the first omicron wave at the end of 2021, from November 10 to December 19. For this period, we extract the date on which each county's smoothed COVID-19 7-day incidence time series has its maximum, i.e., its peak. We consider this the point at which the trend will likely change from positive to negative as the incidence begins to decline.

After obtaining the peak for each county, we use a 7-day moving window to evaluate how the prediction performance develops as more counties reach their peak. For each window, we collect all counties that have their peak inside the current window. We then compute all metrics for these counties using the forecast and ground truth of their peak date and shift the window by one day.

We conduct additional experiments with the same evaluation setup but replace the adjacency matrices of the mobility subgraphs with identity matrices to verify that differences in performance can be attributed to the mobility data. Thus, we train models with the same number of parameters but do not include spatial information.

3 RESULTS
For all experiments, there is a clear performance improvement as more counties reach their peak over time that is consistent across all metrics. This improvement is more pronounced for models with mobility data than for those without spatial information (see Figure 1). To verify that our finding that models with mobility data perform better than models without spatial information is significant, we conduct paired one-tailed Wilcoxon signed-rank tests with significance level α = 0.05 for all metrics. After correcting for multiple testing using the Benjamini-Hochberg method [1], we find that for MAPE (p-value ≈ 0.021), sMAPE (p-value ≈ 2.738e−6), and MDA (p-value ≈ 6.661e−6) the mobility-conditioned models significantly outperform the models without spatial information.

Figure 1: (A) sMAPE (lower is better) for peaks in 7-day moving windows. The performance improves over time for both experiments before declining. The effect occurs earlier and is greater for models with mobility data. (B) The MDA (higher is better) almost mirrors the sMAPE's behavior. This suggests that while more recent training data improve predictions, this effect is amplified by mobility data.

Figure 1 (A, B) clearly shows that the improvements in sMAPE and MDA happen earlier and are more extreme for the models with mobility data.
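The significance analysis described above can be sketched as follows; the per-county score arrays and the per-metric test directions are assumptions made here for illustration.

```python
# Paired one-tailed Wilcoxon signed-rank tests per metric, with
# Benjamini-Hochberg correction for multiple testing.
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def compare(scores_mobility, scores_no_spatial, alpha=0.05):
    metrics = ["MAPE", "sMAPE", "MDA"]
    pvals = []
    for m in metrics:
        # Errors should be lower with mobility data, MDA should be higher.
        alternative = "greater" if m == "MDA" else "less"
        _, p = wilcoxon(scores_mobility[m], scores_no_spatial[m], alternative=alternative)
        pvals.append(p)
    reject, p_adjusted, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return dict(zip(metrics, zip(p_adjusted, reject)))
```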
This difference indicates that the improvementscannot solely be attributed to the fact that the models have seenmore recent and relevant data and are therefore conditioned better.Furthermore, due to the 17-day gap to avoid information leakage,the model is unlikely to have seen any recent negative trends for acounty before its peak during training. However, as earlier coun-ties are already past their peak and are experiencing decreasingepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Witzke et al.incidences, they can share this information with counties wherepeaks occur later.4 DISCUSSION AND CONCLUSIONWe find that mobility data significantly improve forecasting perfor-mance compared to experiments without spatial information. Wehave two hypotheses for our observations. Firstly, the structuralinformation in the mobility networks and their variation over timemight lead to improved predictions. Secondly, our GNN model canpick up information from counties that experience changes, suchas beginning downtrends in incidences, earlier and use them forforecasts of counties where these changes occur delayed. With ourcurrent experimental setup, we are unable to disentangle these hy-potheses. However, further experiments, for example, using staticspatial connections, could provide insights.Counties that are the first to experience a change in trend seemunable to benefit from mobility data. However, these counties mightbe of the highest interest as changes occur earlier and are likelymore vital indicators of the need for interventions. Therefore itcould be valuable to include additional nodes representing neighbor-ing nations in our graph to leverage potentially leading informationfrom them.Our analysis suggests that systematically analyzing models’ ca-pabilities of making accurate trend forecasts during times of interestis highly valuable. Different components, such as the magnitudeand direction of a trend, are relevant for providing a holistic un-derstanding in an epidemiological context. It could be helpful toextend evaluations by applying post-hoc explainability methodsfor graph-based models to understand better how the models maketheir predictions. Such explanations could provide insights for epi-demiologists to construct hypotheses regarding the pandemic’scurrent state and spreading behavior.We showed the capabilities of a heterogeneous spatio-temporalGNN in leveraging mobility data to improve forecasts for countieswith lagging time series directly after a change in trend. We suggestthat including more global information via nodes representing othernations could extend this effect to leading counties where changesoccur first. Currently, we evaluate single rolling-origin evaluationexperiments for the change point of the COVID-19 pandemic inGermany. To substantiate our findings, we will consider differentphases of the pandemic, including change points with a switch toupward trends. Furthermore, we will run experiments repeatedlyto verify the robustness of our results and establish confidencebounds.ACKNOWLEDGMENTSThis work was supported by the German BMWK through the DAKI-FWS project [01MK21009E to B.Y.R.].REFERENCES[1] Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate:a practical and powerful approach to multiple testing. Journal of the RoyalStatistical Society. Series B (Methodological) , 57, 1, 289–300. Retrieved May 30,2023 from https://www.jstor.org/stable/2346101.[2] Oliver Blaskowitz and Helmut Herwartz. 2009. On economic evaluation ofdirectional forecasts, (Oct. 
29, 2009). Publisher: Humboldt-Universität zu Berlin,Wirtschaftswissenschaftliche Fakultät. doi: 10.18452/4217.[3] Dirk Brockmann and Dirk Helbing. 2013. The hidden geometry of complex,network-driven contagion phenomena. Science , 342, 6164, (Dec. 13, 2013), 1337–1342. doi: 10.1126/science.1245200.[4] Serina Chang, Emma Pierson, Pang Wei Koh, Jaline Gerardin, Beth Redbird,David Grusky, and Jure Leskovec. 2021. Mobility network models of COVID-19explain inequities and inform reopening. Nature , 589, 7840, (Jan. 2021), 82–87.Number: 7840 Publisher: Nature Publishing Group. doi: 10.1038/s41586-020-2923-3.[5] Orestis Delardas, Konstantinos S. Kechagias, Pantelis N. Pontikos, and Panagi-otis Giannos. 2022. Socio-economic impacts and challenges of the coronaviruspandemic (COVID-19): an updated review. Sustainability , 14, 15, (Aug. 6, 2022),9699. doi: 10.3390/su14159699.[6] Romney B. Duffey and Enrico Zio. 2020. CoVid-19 pandemic trend modelingand analysis to support resilience decision-making. Biology , 9, 7, (July 2020),156. Number: 7 Publisher: Multidisciplinary Digital Publishing Institute. doi:10.3390/biology9070156.[7] Matthias Fey and Jan Eric Lenssen. 2019. Fast graph representation learningwith PyTorch geometric. In ICLR Workshop on Representation Learning onGraphs and Manifolds . arXiv, (Apr. 25, 2019). doi: 10.48550/arXiv.1903.02428.[8] Benito E Flores. 1986. A pragmatic view of accuracy measurement in forecasting.Omega , 14, 2, (Jan. 1986), 93–98. doi: 10.1016/0305-0483(86)90013-7.[9] Cornelius Fritz, Emilio Dorigatti, and David Rügamer. 2022. Combining graphneural networks and spatio-temporal disease models to improve the predictionof weekly COVID-19 cases in germany. Scientific Reports , 12, 1, (Mar. 10, 2022),3930. doi: 10.1038/s41598-022-07757-5.[10] Junyi Gao, Rakshith Sharma, Cheng Qian, Lucas M Glass, Jeffrey Spaeder, JustinRomberg, Jimeng Sun, and Cao Xiao. 2021. STAN: spatio-temporal attentionnetwork for pandemic prediction using real-world evidence. Journal of theAmerican Medical Informatics Association , 28, 4, (Apr. 1, 2021), 733–743. doi:10.1093/jamia/ocaa322.[11] Murtadha D. Hssayeni, Arjuna Chala, Roger Dev, Lili Xu, Jesse Shaw, BorkoFurht, and Behnaz Ghoraani. 2021. The forecast of COVID-19 spread risk atthe county level. Journal of Big Data , 8, 1, (July 7, 2021), 99. doi: 10.1186/s40537-021-00491-1.[12] Weiwei Jiang and Jiayun Luo. 2022. Graph neural network for traffic forecasting:a survey. Expert Systems with Applications , 207, (Nov. 2022), 117921. arXiv: 2101.11174[cs]. doi: 10.1016/j.eswa.2022.117921.[13] Amol Kapoor, Xue Ben, Luyang Liu, Bryan Perozzi, Matt Barnes, Martin Blais,and Shawn O’Banion. 2020. Examining COVID-19 forecasting using spatio-temporal graph neural networks. (July 6, 2020). arXiv: 2007.03113[cs]. doi:10.48550/arXiv.2007.03113.[14] Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification withgraph convolutional networks. In 5th International Conference on LearningRepresentations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference TrackProceedings . OpenReview.net. Retrieved May 19, 2023 from https://openreview.net/forum?id=SJU4ayYgl.[15] Nicole E. Kogan et al. 2021. An early warning approach to monitor COVID-19activity with multiple digital traces in near real time. Science Advances , 7, 10,(Mar. 5, 2021), eabd6989. doi: 10.1126/sciadv.abd6989.[16] [n. d.] Kontaktindex von NET CHECK – kontaktindex von NET CHECK. Re-trieved May 25, 2023 from https://contactindex.netcheck.de/.[17] 2021. 
Kreisfreie städte und landkreise nach fläche, bevölkerung und bevölkerungs-dichte am 31.12.2020. de. https://www.destatis.de/DE/Themen/Laender-Regionen/Regionales/Gemeindeverzeichnis/Administrativ/04-kreise.html. Accessed:2021-10-25. (Sept. 2021).[18] Benjamin Lucas, Behzad Vahedi, and Morteza Karimzadeh. 2023. A spatiotem-poral machine learning approach to forecasting COVID-19 incidence at thecounty level in the USA. International Journal of Data Science and Analytics , 15,3, (Apr. 1, 2023), 247–266. doi: 10.1007/s41060-021-00295-9.[19] Benjamin F Maier, Angelique Burdinski, Marc Wiedermann, Annika H Rose,Frank Schlosser, Matthias An Der Heiden, Ole Wichmann, Thomas Harder,and Dirk Brockmann. 2023. Modeling the impact of the omicron infection wavein germany. Biology Methods and Protocols , 8, 1, (Jan. 10, 2023), bpad005. doi:10.1093/biomethods/bpad005.[20] William Msemburi, Ariel Karlinsky, Victoria Knutson, Serge Aleshin-Guendel,Somnath Chatterji, and Jon Wakefield. 2023. The WHO estimates of excessmortality associated with the COVID-19 pandemic. Nature , 613, 7942, (Jan. 5,2023), 130–137. doi: 10.1038/s41586-022-05522-2.[21] Anatol-Fiete Näher et al. 2023. Secondary data for global health digitalisation.The Lancet Digital Health , 5, 2, (Feb. 2023), e93–e101. doi: 10.1016/S2589-7500(22)00195-9.[22] Behnam Nikparvar, Md Mokhlesur Rahman, Faizeh Hatami, and Jean-ClaudeThill. 2021. Spatio-temporal prediction of the COVID-19 pandemic in US coun-ties: modeling with a deep LSTM neural network. Scientific Reports , 11, 1,(Nov. 5, 2021), 21715. Number: 1 Publisher: Nature Publishing Group. doi:10.1038/s41598-021-01119-3.Mobility data improve forecasting of COVID-19 incidence trends using Graph Neural Networks (Extended Abstract) epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA[23] Nuria Oliver et al. 2020. Mobile phone data for informing public health actionsacross the COVID-19 pandemic life cycle. Science Advances , 6, 23, (June 5, 2020),eabc0764. doi: 10.1126/sciadv.abc0764.[24] George Panagopoulos, Giannis Nikolentzos, and Michalis Vazirgiannis. 2021.Transfer graph neural networks for pandemic forecasting. Proceedings of theAAAI Conference on Artificial Intelligence , 35, 6, (May 18, 2021), 4838–4845.Number: 6. doi: 10.1609/aaai.v35i6.16616.[25] Adam Paszke et al. 2019. PyTorch: an imperative style, high-performance deeplearning library. In Proceedings of the 33rd International Conference on NeuralInformation Processing Systems . Number 721. Curran Associates Inc., Red Hook,NY, USA, (Dec. 8, 2019), 8026–8037. Retrieved May 26, 2023 from.[26] Patrick Reiser et al. 2022. Graph neural networks for materials science andchemistry. Communications Materials , 3, 1, (Nov. 26, 2022), 93. doi: 10.1038/s43246-022-00315-6.[27] Robert Koch-Institut. 2022. SARS-CoV-2 Infektionen in Deutschland, (Aug.2022). doi: 10.5281/zenodo.6994808.[28] Sten Rüdiger, Stefan Konigorski, Alexander Rakowski, Jonathan Antonio Edel-man, Detlef Zernick, Alexander Thieme, and Christoph Lippert. 2021. Predictingthe SARS-CoV-2 effective reproduction number using bulk contact data frommobile phones. Proceedings of the National Academy of Sciences of the UnitedStates of America , 118, 31, (Aug. 3, 2021), e2026731118. doi: 10.1073/pnas.2026731118.[29] Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg,Ivan Titov, and Max Welling. 2017. Modeling relational data with graph convo-lutional networks. (Oct. 26, 2017). arXiv: 1703.06103[cs,stat]. doi: 10.48550/arXiv.1703.06103.[30] Frank Schlosser, Benjamin F. 
Maier, Olivia Jack, David Hinrichs, Adrian Zachariae, and Dirk Brockmann. 2020. COVID-19 lockdown induces disease-mitigating structural changes in mobility networks. Proceedings of the National Academy of Sciences, 117, 52, (Dec. 29, 2020), 32883–32890. doi: 10.1073/pnas.2012326117.
[31] Lucas M. Stolerman, Leonardo Clemente, Canelle Poirier, Kris V. Parag, Atreyee Majumder, Serge Masyn, Bernd Resch, and Mauricio Santillana. 2023. Using digital traces to build prospective and real-time county-level early warning systems to anticipate COVID-19 outbreaks in the United States. Science Advances, 9, 3, (Jan. 20, 2023), eabq0199. doi: 10.1126/sciadv.abq0199.
[32] Leonard J. Tashman. 2000. Out-of-sample tests of forecasting accuracy: an analysis and review. International Journal of Forecasting, 16, 4, (Oct. 2000), 437–450. doi: 10.1016/S0169-2070(00)00065-0.
[33] Sean J. Taylor and Benjamin Letham. 2018. Forecasting at scale. The American Statistician, 72, 1, (Jan. 2, 2018), 37–45. doi: 10.1080/00031305.2017.1380080.
[34] Behzad Vahedi, Morteza Karimzadeh, and Hamidreza Zoraghein. 2021. Spatiotemporal prediction of COVID-19 cases using inter- and intra-county proxies of human interactions. Nature Communications, 12, 1, (Nov. 8, 2021), 6440. Number: 1 Publisher: Nature Publishing Group. doi: 10.1038/s41467-021-26742-6.
[35] World Health Organization. 2023. Statement on the fifteenth meeting of the IHR (2005) emergency committee on the COVID-19 pandemic. (May 2023). Retrieved May 26, 2023 from https://www.who.int/news/item/05-05-2023-statement-on-the-fifteenth-meeting-of-the-international-health-regulations-(2005)-emergency-committee-regarding-the-coronavirus-disease-(covid-19)-pandemic.

A EVALUATION METRICS
A.1 Mean Absolute Percentage Error
The Mean Absolute Percentage Error (MAPE) [8]:
MAPE = \frac{1}{n} \sum_{i=1}^{n} \frac{|\hat{y}_i - y_i|}{|y_i|},
where n is the number of counties, is a relative error independent of the range of values. The MAPE is highly susceptible to observations close to zero causing the metric to explode. A smaller MAPE is better.

A.2 symmetric Mean Absolute Percentage Error
The symmetric Mean Absolute Percentage Error (sMAPE) [8]:
sMAPE = \frac{1}{n} \sum_{i=1}^{n} \frac{|\hat{y}_i - y_i|}{|\hat{y}_i| + |y_i|},
where n is the number of counties, is another relative metric that takes on values between 0 and 1 and is therefore protected against exploding values. A smaller sMAPE is better.

A.3 Mean Directional Accuracy
We use an adaption of the Mean Directional Accuracy (MDA) [2]. As we only forecast a single value, the MDA can be simplified, yielding the rate at which the models can identify the trend correctly:
MDA = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}_{\operatorname{sign}(\hat{y}_i) = \operatorname{sign}(y_i)},
where n is the number of counties and \mathbb{1} is the indicator function. A larger MDA is better.
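The three metrics above reduce to a few lines of array arithmetic. The following sketch (NumPy, with made-up per-county trend values in `y_true` and `y_pred`) mirrors the definitions in A.1-A.3; it is an illustration, not the authors' evaluation code.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error (A.1); assumes no observation is exactly zero."""
    return np.mean(np.abs(y_pred - y_true) / np.abs(y_true))

def smape(y_true, y_pred):
    """Symmetric MAPE (A.2); bounded between 0 and 1."""
    return np.mean(np.abs(y_pred - y_true) / (np.abs(y_pred) + np.abs(y_true)))

def mda(y_true, y_pred):
    """Mean Directional Accuracy (A.3): share of counties whose trend sign is predicted correctly."""
    return np.mean(np.sign(y_pred) == np.sign(y_true))

# Hypothetical per-county trend values (e.g., forecast minus last observed incidence).
y_true = np.array([0.8, -1.2, 0.1, -0.4])
y_pred = np.array([0.5, -0.9, -0.2, -0.6])
print(mape(y_true, y_pred), smape(y_true, y_pred), mda(y_true, y_pred))
```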
J8Gc5acxME | Unlocking the Potential of Public Datasets: Wastewater-BasedEpidemiological Forecasting During COVID-19Zhicheng Zhangzczhang@cmu.eduCarnegie Mellon UniversityPittsburgh, PA, USASonja Neumeistersneumeister@ucdavis.eduUniversity of California DavisDavis, CA, USAAngel Desaiandesai@ucdavis.eduUniversity of California DavisSacramento, CA, USAMaimuna Shahnaz Majumdermaimuna.majumder@childrens.harvard.eduBoston Children’s Hospital, HarvardMedical SchoolBoston, MA, USAFei Fangfeifang@cmu.eduCarnegie Mellon UniversityPittsburgh, PA, USAABSTRACTThe COVID-19 pandemic has emphasized the necessity for effectivetools to monitor and predict epidemiological trends. Traditionalapproaches to disease surveillance possess certain limitations, lead-ing to the emergence of wastewater-based epidemiology (WBE) asa complementary approach. WBE has demonstrated a strong cor-relation with traditional epidemiological indicators (e.g., numberof clinical cases and hospitalization), which makes it a valuableasset in informing public health decision-making processes. De-spite the promising prospects of WBE, it faces two main challenges,restricted data accessibility and high intrinsic noise and distributionshift in the data. In this study, we examine the feasibility of utiliz-ing exclusively two publicly available data, specifically aggregatedwastewater data and reported case counts, for epidemiological fore-casting in the COVID-19 pandemic. We incorporate a variety ofstatistical and machine learning models in an attempt to addressthe inherent volatility and bias of the data. We further introduce theusage of the segmentation method during the evaluation phase as abetter evaluation metric. Our empirical results show that, even withlimited data, performing epidemiological forecasting is possible,and its performance is comparable with methods that use morediverse data sources, suggesting its potential for broader healthapplications. Additionally, we utilize the insights from results onthe length of the forecasting horizon to provide practical guidelinesregarding real-world prediction.KEYWORDSCOVID-19, Disease Surveillance, Wastewater-Based Epidemiology,Time-Series ForecastingACM Reference Format:Zhicheng Zhang, Sonja Neumeister, Angel Desai, Maimuna ShahnazMajumder, and Fei Fang . 2023. Unlocking the Potential of Public Datasets:Permission to make digital or hard copies of part or all of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for third-party components of this work must be honored.For all other uses, contact the owner/author(s).epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA©2023 Copyright held by the owner/author(s).Wastewater-Based Epidemiological Forecasting During COVID-19. In epi-DAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop on Epi-demiology meets Data Mining and Knowledge Discovery, August 7, 2023, LongBeach, CA, USA. ACM, New York, NY, USA, 8 pages.1 INTRODUCTIONThe COVID-19 pandemic has emphasized the importance of reliabletools for monitoring and forecasting epidemiological trends. Tradi-tional disease surveillance approaches, based on clinical data, havelimitations in both timeliness and coverage. Wastewater-based epi-demiology (WBE) has thus emerged as a complementary approachto track the spread of infectious diseases in communities [ 8]. 
WBEhas demonstrated significant potential in the monitoring and fore-casting of epidemics, particularly during the COVID-19 pandemic.Several studies have utilized wastewater data to forecast clinicalcases, hospitalizations, and ICU admissions, as well as to evaluatethe effectiveness of governmental policies in containing COVID-19 transmission [ 10,12,13,27]. Studies have found a strong linkbetween data from wastewater surveillance and disease indica-tors. This link can help make better health decisions, use resourceswisely, and put interventions in place quickly.However, despite the promising results of WBE, there are twomain challenges that need to be addressed for broader practicalapplications, which haven’t been thoroughly explored in the ex-isting literature. First, current approaches in using WBE mainlyrely on small-scale, privately collected data, such as those fromuniversity campuses [ 36], or inaccessible private-sector wastew-ater data [ 10,12]. Often, methods supplement wastewater datawith additional data sources, including Community VulnerabilityIndex (CCVI) and vaccination records [ 13]. In a broader context,the sharing of wastewater data is restricted, and its coverage isgeographically skewed towards economically developed areas thathave a greater number of wastewater monitoring facilities [ 18,23].Second, the real-world epidemiological data is inherently noisydue to various factors such as sampling errors and challenges inattributing causes [ 24]. This issue is further exacerbated duringglobal pandemics like COVID-19, where the temporal correlationswithin the data can drastically shift over the course of the pandemic,undermining the accuracy of predictions. Such drastic shifts canoccur when a new variant emerges and rapidly becomes dominantor when vaccination rates significantly increase, both of whichepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Zhicheng Zhang, Sonja Neumeister, Angel Desai, Maimuna Shahnaz Majumder, and Fei Fangcause distinct changes in epidemiological trends. These shifts un-derscore the need for robust forecasting models capable of adaptingto evolving pandemic dynamics.In this study, we focus on two publicly available datasets: ag-gregated wastewater data and reported case counts, both at thecountry level. This selection of datasets is driven by the ready ac-cessibility and reliability of these data sources: wastewater data isregularly published not only by the CDC’s National WastewaterSurveillance System (NWSS) but also by other agencies adhering toCDC protocols, while case count numbers are widely reported. Thiswidespread adoption of consistent data-gathering protocols ensuresthe broad availability and comparability of these datasets. It alsoaims to alleviate volatility and mitigate biases inherent in smalleror less developed regions. The COVID-19 pandemic’s landscapehas been constantly changing, influencing how we assess its spreadand impact. Initially, the case count data, encompassing both severeand mild cases, offered valuable insight into the pandemic’s trajec-tory. This metric was particularly comprehensive during periods ofwidespread testing and reporting. However, as the pandemic hasprogressed, testing methods and reporting practices have evolved,with an increase in home testing and a decrease in reports to gov-ernmental agencies. While these changes present challenges, casecount still servers as a strong signal of disease prevalence. 
Our coreobjective here is to investigate the feasibility of using only thesetwo publicly available data sources, case counts and wastewaterdata, for epidemiological forecasting.To evaluate this feasibility, we model the problem as a time-seriesforecasting problem characterized by significant distribution shiftsin the data over time. We employ data preprocessing techniques tomanage misaligned time-series data and introduce a segmentationalgorithm during the evaluation phase to account for temporalshifts. This segmentation method enhances evaluation accuracy byensuring that the test data spans only one wave so that the test errorwould no longer be masked by the results in other waves, and weempirically evaluate it to be a better evaluation criterion. To balanceinterpretability, simplicity, and prediction accuracy, we implementa variety of statistical and machine learning models, includinglinear regression, ARIMAX, Gaussian Process Regression, multi-layer perceptron (MLP), and Long Short-Term Memory (LSTM)networks. The diversity of these modeling techniques enables us tocompare the efficiency of simpler models with their more complex,deep-learning counterparts. Finally, our analysis shows that byonly using aggregated wastewater data and reported case counts,we can achieve comparable performance with a random-forestmodel trained on diverse data sources, including CCVI indexes, andvaccination records in [ 13]. We further empirically demonstratethat the segmentation method provides a more accurate evaluation,particularly during volatile periods such as the case count peak inearly 2022. Based on the empirical results on the effect of forecastinghorizon of different lengths, we provide a practical recommendationfor selecting the forecasting horizon in order to optimize the balancebetween reaction time and prediction accuracy.2 RELATED WORKWastewater-based epidemiology. Wastewater-based epidemiol-ogy (WBE) has become an important tool for monitoring and fore-casting epidemiological trends over the past two decades [ 8]. Dur-ing the recent outbreak of COVID-19 [ 6], wastewater data wasused to forecast clinical cases, hospitalizations, and ICU admis-sions, as well as to evaluate the effectiveness of governmental poli-cies [ 10,12,12,13,27]. Galani et al . [10] , Kaplan et al . [12] , Stephenset al. [27] measured the wastewater for a number of monitoringsites and empirically demonstrated a strong correlation betweenhospitalizations and wastewater surveillance data using regressionmodels. Kaplan et al . [12] used wastewater data to estimate repro-ductive numbers. Li et al . [13] used data from 100USA counties topredict hospital and ICU admission numbers using random forestmodels.However, despite its effectiveness in predicting epidemiologicaltrends, wastewater data were not widely shared with the public oraccessible to researchers, making it infeasible to perform additionalanalyses [ 18]. Current works often rely on small-scale, privatelycollected dataset [ 36], or supplement the dataset with other diversesources of data, like vaccination records and CCVI indexes [ 13].In addition, the coverage of wastewater data is severely biasedtoward economically more developed geographic regions with morewastewater monitoring facilities [ 18,23]. 
In an attempt to addressthese challenges, our approach differs from previous work in that weaim to assess the promise of using exclusively two publicly availabledata sources: aggregated wastewater data and the reported casecount data that are easily accessible to the public for epidemiologicalforecasting. Specifically, we focus on data within the United Stateswhile averaging it across the country to minimize bias in wastewaterdata from smaller or less-developed counties and states.Time-series forecasting. Time series forecasting has been a long-standing problem in the fields of statistics and machine learning, at-tracting significant research attention. Classical methods [ 3,16] pro-vide a comprehensive understanding of time series analysis and fore-casting and offer both theoretical insights and statistical guarantees.The advent of deep learning-based methods, particularly recurrentnetworks, has substantially improved the ability to capture temporalcorrelations in training data, as demonstrated by works including re-current neural networks (RNNs) [ 22] and long short-term memory(LSTM) networks [ 11]. In recent years, long-term series forecasting(LSTF) research has focused on transformer-based models [ 30] dueto their remarkable success in various application domains, suchas natural language processing (NLP) [ 20] and computer vision(CV) [ 15]. Transformer-based LSTF models [ 14,32,34,37,38] havedemonstrated impressive forecasting performance while also priori-tizing prediction efficiency. However, recent criticism by Zeng et al .[35] suggests that the self-attention mechanism in transformersinevitably leads to temporal information loss, and their empiricalresults indicate that these models may not even outperform simpleone-layer linear models in certain experiments.In the domain of time series forecasting with scarce data, deeplearning models frequently adopt less complicated architectures toenhance model performance. Tsaur [29] employed fuzzy grey re-gression models, while Abdulmajeed et al . [1] utilized an ensembleUnlocking the Potential of Public Datasets: Wastewater-Based Epidemiological Forecasting During COVID-19 epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USAof several auto-regressive models to improve accuracy and robust-ness in predicting COVID-19 cases in Nigeria. Informed by theseinsights, our approach emphasizes the use of simpler and moreinterpretable models when working with limited wastewater andcase count data aggregated across the country. Specifically, we em-ployed linear regression models, ARIMAX models, and Gaussianprocess regression models with a combination of kernels to addressthe problem of noise in the data. Additionally, we conducted a com-parative analysis with deep learning models, including multi-layerperceptron (MLP) and LSTM models, to evaluate the effectivenessof our chosen methodology in the context of limited data.3 PRELIMINARIESTime-series forecasting. The primary objective of time-series fore-casting [ 19,25] is to make accurate predictions of future values in asequence, utilizing historical observations as a basis. Consider a setof observed data points x1,..., xt, where xi∈X, the aim is to fore-cast the corresponding labels y1,..., ytfor each timestep, rangingfrom 1tot, with yi∈Y. 
Let h represent the look-back window size; when predicting the label y_i, the prediction model can take as input H = \{x_{i-h+1}, \dots, x_i\} or H = \{x_{i-h+1}, \dots, x_i, y_{i-h+1}, \dots, y_{i-1}\}. This constraint ensures that predictions rely solely on information available within the specified historical context.

Wastewater-based Epidemiology. Wastewater-based epidemiology (WBE) is an approach to public health surveillance that leverages the detection of biological or chemical markers present in sewage to reflect the health status of a region [21]. In the case of COVID-19, the wastewater data measures genetic fragments of the SARS-CoV-2 virus excreted in stool, specifically targeting the N1 and N2 regions of the nucleocapsid gene, to determine COVID-19 concentrations.

4 METHOD
In this section, we detail our data preprocessing steps, modeling techniques, and evaluation methods. The focus of our training method lies in aligning misaligned time-series data, computing input embeddings, and employing models that strike a balance between simplicity, interpretability, and predictive accuracy. We also introduce a wave-based segmentation approach for evaluation, arguing its effectiveness as a more accurate metric and discussing its calibration using expert-identified waves.

4.1 Data Processing
To ensure the quality and consistency of the data used for training and evaluation, we first address the challenge of misaligned time series data and then segment the data into waves based on the observed distribution shifts. These preprocessing steps aim to improve the model's reliability and adaptability to changes in the underlying data distribution over time.

4.1.1 Handling Misaligned Time-Series Data. Dealing with inconsistent time intervals or irregular timestamps in time-series forecasting is a common challenge. In our study, the primary issue arises from the weekly updates of wastewater data (x_i) and the daily updates of case count data (y_i). There are two main strategies to address this: removing data points without corresponding labels, or utilizing all available data, for instance through interpolation [31]. Our approach is to associate each element x_t in the wastewater dataset X with all elements that fall within the interval between two successive wastewater data updates. Specifically, for each x_t in the dataset X, we define:
x_t = \{x_t\} \cup \{y_i \mid T_{x_{t-1}} < T_{y_i} < T_{x_t}\}  (1)
where T_x denotes the timestamp of the event x, and y_t is treated as the ground truth label. The augmented x_t now includes the wastewater data point at time t and all case count data points whose timestamps T_{y_i} are strictly greater than the timestamp T_{x_{t-1}} of the preceding wastewater data point and strictly less than the timestamp T_{x_t} of the current wastewater data point. The reason behind this decision is to maximize data utilization. However, it may not always reflect real-world scenarios, where all data might not be up to date, or where future trends a few days from now need to be predicted. We empirically evaluate the impact of such delays on forecasting in Section 5.5.

4.1.2 Embedding of input data. As shown in Figure 1, there exists a lead-lag relationship [4, 13] between the wastewater data and the case count data. Specifically, signals in the wastewater data often precede signals in the case count data by a span of several days or weeks.
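For concreteness, the association rule of Eq. (1) in Section 4.1.1 can be sketched as below. The pandas-based implementation, the function name, and the column names ("date", "concentration", "cases") are assumptions for illustration, not the authors' code.

```python
import pandas as pd

def align_cases_to_wastewater(ww: pd.DataFrame, cases: pd.DataFrame) -> pd.DataFrame:
    """Attach to each wastewater record the case counts with T_{x_{t-1}} < T_{y_i} < T_{x_t} (Eq. 1)."""
    ww = ww.sort_values("date").reset_index(drop=True)
    cases = cases.sort_values("date")
    rows = []
    for t in range(len(ww)):
        lower = ww.loc[t - 1, "date"] if t > 0 else pd.Timestamp.min
        upper = ww.loc[t, "date"]
        in_between = (cases["date"] > lower) & (cases["date"] < upper)
        rows.append({
            "date": upper,
            "concentration": ww.loc[t, "concentration"],
            "case_counts": cases.loc[in_between, "cases"].tolist(),  # augmented x_t of Eq. (1)
        })
    return pd.DataFrame(rows)

# Hypothetical usage: weekly wastewater rows and daily case rows.
ww = pd.DataFrame({"date": pd.to_datetime(["2022-01-07", "2022-01-14"]), "concentration": [120.0, 150.0]})
cases = pd.DataFrame({"date": pd.date_range("2022-01-01", periods=14), "cases": range(14)})
print(align_cases_to_wastewater(ww, cases))
```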
To accommodate this time-shifted relationship, we implement a sliding window approach for both the wastewater and case count data inputs. Formally, for a selected time point i, and window sizes h_w for wastewater data and h_c for case count data, we generate input sequences X^{wastewater}_i and X^{casecount}_i, respectively, as:
X^{wastewater}_i = [w_{i-h_w}, \dots, w_{i-l_w}], \quad X^{casecount}_i = [c_{i-h_c}, \dots, c_{i-l_c}],  (2)
where w_j denotes the wastewater data and c_j denotes the case count data at time j. l_c and l_w are used to simulate the information available at the time of prediction in the real world; l_w = l_c = 1 means that the prediction model is given all the data up to date.

To maintain scale consistency across all data points, we normalize the case count data using a min-max scaler, deriving the scaling parameters from historical data. This process ensures the data maintains its inherent trend and distribution characteristics while being compatible with the model input, especially for the deep learning models.

4.2 Modeling Techniques for Time-series Data
In the context of limited data, the ideal model to capture temporal correlations should balance simplicity, interpretability, and a lower parameter count. More complex models, while potentially improving performance, might overfit the data and compromise interpretability and deployability. Therefore, in this study, our emphasis is on methodologies that ensure adequate predictive accuracy while maintaining computational feasibility and transparency in interpreting data patterns.
(1) Linear Regression Model [17]: Used as a benchmark, this simple model provides a baseline for performance comparison.
(2) ARIMAX Model [2]: Serving as a robust statistical model, ARIMAX extends the traditional ARIMA model by incorporating exogenous inputs, which helps in modeling complex temporal structures in the presence of influential external factors and suits our dataset with its lead-lag relationship.
(3) Gaussian Process Regression (GPR) Model: This model leverages a custom kernel for handling non-linear relationships and noisy data. Our kernel construction involves a multiplicative interaction of Constant and RBF kernels, along with an additive incorporation of a White kernel for noise management and a Matérn kernel for smoothness.
(4) Multi-layer Perceptron (MLP): A widely employed neural network for regression problems; our implementation features two hidden layers with 128 units each and ReLU as the activation function.
(5) Long Short-Term Memory (LSTM) model [11]: As a type of recurrent neural network, LSTMs are capable of capturing temporal dependencies in data, making them well-suited for time series forecasting tasks. LSTMs can learn to filter out noise by selectively retaining valuable information through gating mechanisms. To mitigate overfitting, we incorporate a dropout [26] rate of 0.5 after each layer in the model and add an L2 regularization term.

4.3 Wave-based Segmentation
One important observation for pandemic-related data is the dynamic nature of the underlying distribution over time. This variability can be attributed to several factors, including the emergence of different viral variants [5], changes in vaccination status among the population [7], and the implementation of varied government policies [33]. The presence of these distribution shifts significantly complicates the prediction process.
To address this issue, we propose splitting the data into waves, where each wave is assumed to have a relatively stable distribution. We employ Binary Change Point Detection [9] for identifying change points in time-series data, chosen because it detects multiple change points, does not require a predetermined number of change points, and has a computationally efficient O(Cn log n) complexity.

4.3.1 Hyperparameter Calibration. Once the waves are identified, we calibrate the model's hyperparameters, including the cost function, penalty term, and minimal distance between two change points, to fit the waves recognized by domain experts. We formulate a scoring function and select the optimal hyperparameters on the validation data. Given a set of detected change points CP = \{cp_1, cp_2, \dots, cp_n\} and a set of expert-identified waves W = \{w_1, w_2, \dots, w_m\}, we define a score function as
S(CP, W, \alpha, \beta) = \sum_{i=1}^{m} \exp(-\alpha \, d(w_i, CP)) - \beta |n - m|,  (3)
where \alpha is the decay factor for the impact of the distance between the detected change points and the actual waves, \beta is the penalty coefficient that penalizes the absolute difference between the number of detected waves and the number of actual waves, and d(w_i, CP) denotes the closest distance between wave w_i and the set of detected change points CP. The objective is to find hyperparameters that minimize this score:
CP^\star = \arg\min_{\alpha, \beta} S(CP, W, \alpha, \beta).  (4)
Minimizing this metric allows us to select the hyperparameters that optimally align the detected change points with the expert-identified waves while balancing proximity and the penalty for the difference in the number of change points and waves.

4.3.2 Evaluation using Wave-based Segmentation. Our approach leverages wave-based segmentation for evaluation. Once we separate our dataset D into a training set D_train and a testing set D_test, we restrict the test data to have just one segment. Mathematically, if S_test represents all segments in D_test, we ensure that |S_test| = 1. This methodology mirrors real-world conditions more accurately, as predicting data of new waves often requires substantial additional information. We avoid using wave-based segmentation in training due to potential data leakage issues, as it commonly uses global data to determine the segmentation, which could inadvertently affect the results.

5 EXPERIMENTS
In this section, we outline the experimental setup, including data visualization and segmentation results, and present the empirical results obtained by evaluating the five models on the task of predicting case counts.

5.1 Experimental Setup
Our experiments exclusively use publicly available data, namely wastewater data¹ and case count data² (case counts and deaths), which are originally aggregated at the county or state level and therefore pose inherent challenges due to their noisy nature. The case count data serve as ground truth for our prediction task. Owing to variability in the collection of county/state-level data, we aggregate all data at the national level and utilize the nationwide average for our analysis. Composed of wastewater data and case count data, our dataset spans from January 15, 2020, to February 15, 2023. Wastewater data is reported on a weekly basis (162 data points), while case count data are collected daily (1128 data points).
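For reference, the wave-detection calibration score of Eq. (3) applied to a series of this form can be sketched in a few lines of NumPy. The `ruptures` package is used here as one possible Binary Change Point Detection backend, which is an assumption rather than the authors' confirmed tooling, and all hyperparameter values (`min_size`, `n_bkps`, `alpha`, `beta`, the expert wave positions) are purely illustrative.

```python
import numpy as np
import ruptures as rpt  # assumption: one possible backend for Binary Change Point Detection

def calibration_score(change_points, expert_waves, alpha, beta):
    """Score of Eq. (3): reward detected change points close to the
    expert-identified waves, penalize a mismatch in their number."""
    cp = np.asarray(change_points, dtype=float)
    proximity = sum(np.exp(-alpha * np.min(np.abs(cp - w))) for w in expert_waves)
    return proximity - beta * abs(len(cp) - len(expert_waves))

# Hypothetical weekly wastewater series (the paper's series has 162 points).
signal = np.random.default_rng(0).normal(size=162).cumsum()
expert_waves = [20, 55, 90, 120, 150]  # illustrative expert-identified change points
detected = rpt.Binseg(model="l2", min_size=4).fit(signal).predict(n_bkps=5)[:-1]
# ruptures appends the final index of the series; it is dropped above.
print(calibration_score(detected, expert_waves, alpha=0.1, beta=1.0))
```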
For all the experiments,we report the mean and standard deviation of 6runs.To better understand the correlation between wastewater dataand the case counts, we visualize the trends in the data in Figure 1.We aggregate the data at the national level due to the high variabil-ity and statistical noise inherent in the state-wise data, as evidencedin Figure 1(b). As shown in Figure 1 with the shifted wastewatercurve, a strong association exists between the trend of virus con-centration levels in wastewater and that of the number of cases,with wastewater data trends slightly preceding that of case counts.However, it is important to underscore that despite the exhibitedassociation between the two trends, the relationship between theirabsolute numbers is not straightforward.1https://github.com/biobotanalytics/covid19-wastewater-data2https://usafacts.org/visualizations/coronavirus-covid-19-spread-map/Unlocking the Potential of Public Datasets: Wastewater-Based Epidemiological Forecasting During COVID-19 epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA(a) Aggregated trend of the nation(b) Trend in Georgia and MississippiFigure 1: Temporal Correlation between Wastewater ViralConcentrations and Case Counts per 100k population. Thex-axis shows the dates ranging from 2020-01-15 to 2023-02-15, and the y-axis denotes the values of the viral wastewaterconcentrations and the number of cases per 100k population.Subfigure (a) describes the aggregated trend of the nation,and (b) describes two randomly picked states of Georgia andMississippi.5.2 Visualization of Segmentation ResultAfter calibrating the hyperparameters on the expert-identifiedwaves from March 2020 to February 2022 [ 28], we use the BinaryChange Point algorithm [ 9] to detect the change points in thewastewater virus concentration level data. In our case, the expertdata segmentation consists of five points, forming six distinct waves.As a result, we opted to include all of these points for the calcula-tion of the score function during the calibration process. Figure 2demonstrates that the detected change points closely align withthe expert-identified waves and that our method can accuratelydetect change points even in areas not covered by the expert datasegmentation.5.3 Evaluation across Varied End DatesTo assess the accuracy of our models, we evaluate their performancethroughout the course of the pandemic. Figure 3 represents theNormalized Root Mean Square Error (NRMSE) of each model overthe different end dates, allowing for a comparative analysis of modelconsistency and adaptability across time. We compare our resultsFigure 2: Segmentation results using Binary Change PointDetection. The green dotted lines represent expert-identifiedchange points, while the red dotted lines indicate our de-tected change points. The x-axis denotes the days passedsince 2020-01-15, and the y-axis shows the viral wastewaterconcentration level. Our model’s detected change points ex-hibit close correspondence with expert-identified points.with a random forest model developed by Li et al . [13] . Their modelwas trained on diverse data, including hospitalization and ICUadmission records, CCVI indexes, and vaccination records, amongothers. Notably, their work does not clearly delineate the date rangefor the test data—a factor that could significantly impact the model’saccuracy.Figure 3 shows that the models perform relatively poorly inthe early stages of the pandemic but improve significantly in thelater stages, even during a sudden peak in early 2022. 
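The evaluation protocol of Section 4.3.2 then amounts to scoring predictions only on test dates that fall inside a single detected wave. A minimal sketch is given below; it assumes NRMSE is the root-mean-square error normalized by the standard deviation of the observed test values, consistent with the reading of NRMSE below 1.0 in Section 5.3, and the index values and function names are illustrative rather than the authors' implementation.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """RMSE normalized by the standard deviation of the observations (assumed definition)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_pred - y_true) ** 2)) / np.std(y_true)

def single_wave_test_slice(test_index, change_points):
    """Keep only the test positions inside the wave containing the start of the
    test period, so that the test data spans a single segment (Section 4.3.2)."""
    bounds = [0] + sorted(change_points) + [max(test_index) + 1]
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        if lo <= test_index[0] < hi:
            return [i for i in test_index if lo <= i < hi]
    return list(test_index)

# Illustrative usage with made-up indices and values.
kept = single_wave_test_slice(list(range(140, 162)), change_points=[20, 55, 90, 120, 150])
print(kept, nrmse([10.0, 12.0, 9.0], [11.0, 10.0, 9.5]))
```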
In the laterstages of the pandemic (after July 2021), as shown in Figure 3, allfive models reach performance on par with the baseline model,indicating an NRMSE below 1.0. This suggests that, on average,the model’s prediction error is less than the standard deviation ofthe observed data, which is over 200cases during the peak. Theperformance at the early stages is worse, possibly due to the lackof sufficient data to learn the inherent temporal correlation.5.4 Impact of Segmentation on EvaluationIn addition to evaluating the performance on different dates, wealso conduct an experiment to understand how wave segmentationimpacts the evaluation of our models. Figure 4 shows model perfor-mance with and without segmentation for the models. Performancedifferences are more noticeable during peak periods, likely due torapid trend shifts that make the prediction task difficult.We remark that this experiment highlights the importance ofsegmentation in this task of predicting case counts, particularlyduring volatile periods. The omission of this segmentation method,as is the case in [ 13], could lead to inaccuracies in the NormalizedRoot Mean Square Error (NRMSE) as multiple waves in the testdata may mask inaccuracies with one particular wave. Therefore,we present the results with the segmentation evaluation methodfor all subsequent experiments. It is also worth noting that theseepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Zhicheng Zhang, Sonja Neumeister, Angel Desai, Maimuna Shahnaz Majumder, and Fei FangFigure 3: Performance comparison of models across end dates.The x-axis denotes the end date of the test period, while they-axis represents the normalized root mean square error(NRMSE) of the prediction for the number of cases. The greycurve denotes the actual number of cases. The dotted linedenotes the reported performance of the model in [13].Figure 4: Prediction accuracy comparison for each modelwith and without segmentation. The x-axis is the end date,and the y-axis is the normalized root mean square error(NRMSE) of the prediction for the number of cases. Thedotted lines denote evaluation results with segmentationperformed, and the solid lines denote evaluation withoutsegmentation.results are based on the assumption of perfect up-to-date knowledge.Results based on more relaxed assumptions are discussed in thefollowing subsection.5.5 Prediction Accuracy across VariedForecasting HorizonWe further examine our models’ prediction accuracy consideringvarying forecasting horizons (the number of days in advance whenmaking the prediction) at three distinct end dates. These datesare selected based on the previous empirical results to be repre-sentatives of the different waves. This setting mirrors the real-lifecontext where decisions are often needed to be made several daysin advance.The outcome, displayed in Figures 5(a), 5(b), and 5(c), showsan expected trend: an increased forecasting horizon generally cor-responds to decreased prediction accuracy. This trend can be at-tributed to the increased challenges introduced by longer responsetimes. However, there are instances where model accuracy improveswith an increased forecasting horizon, likely due to the inherentvariability in the data. Notably, on all three different dates, GPR andMLP models perform the best likely due to their smaller parametercount and simpler structure. 
Based on the results, we make the rec-ommendation that 6to12days is a good trade-off between a longerforecasting horizon and better prediction accuracy as the predictionerror generally does not increase much during this period.6 CONCLUSIONSIn this study, we explored the feasibility of utilizing publicly avail-able wastewater data to forecast the number of COVID-19 cases.We employed five representative time-series prediction methodsto capture the temporal associations within the viral wastewaterconcentration levels and case count data. Our empirical resultsshow that the resulting models performed comparably with thosetrained on a more diverse range of data sources, underscoring theviability of this approach even with restricted data access.Furthermore, our research underscores the importance of datasegmentation during evaluation to better comprehend the inherentrelationship between wastewater data and COVID-19 case count.This segmentation approach addresses the complexities posed bytesting data spanning multiple waves, which can influence modelevaluation metrics. Grounded in our empirical findings, we alsopropose practical guidelines regarding the forecasting horizon forcase count prediction.We hope that the findings of this study contribute to the growingbody of research on wastewater-based epidemiology and providevaluable insights into the challenges and potential solutions foraccurate epidemic forecasting using wastewater data, which canbe applied in real-world scenarios to improve public health surveil-lance and inform decision-making processes. We acknowledge thecomplexities introduced by evolving testing and reporting practicesduring the COVID-19 pandemic, which make it increasingly hardto acquire ground truth data, and therefore alternative metrics likemortality data may gain prominence in different stages of epidemi-ological forecasting. We also acknowledge the existence of otherpublicly accessible data sources of varying types that may be uti-lized, including reproductive number[ 12], hospitalization numbers,and mortality rates[ 10,36]. These additional data sources presentample opportunities for future research directions, broadening thescope of our current understanding and forecasting capabilities ofpublic health scenarios.Unlocking the Potential of Public Datasets: Wastewater-Based Epidemiological Forecasting During COVID-19 epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA(a) Performance comparison w.r.t. #days to react on 2021-12-15(b) Performance comparison w.r.t. #days to react on 2022-07-03(c) Performance comparison w.r.t. #days to react on 2022-10-11Figure 5: Prediction accuracy corresponding to different leadtimes at three different dates. The x-axis indicates the fore-casting horizon, and the y-axis denotes the normalized rootmean square error (NRMSE) of the prediction of the numberof cases. The three different dates are chosen to illustrate themodels’ performance at distinct waves during the pandemic.ACKNOWLEDGMENTSZhicheng Zhang, Fei Fang, Angel Desai, and Sonja Neumeister weresupported in part by grant SES2200228 from the National ScienceFoundation. Maimuna Shahnaz Majumder was supported in part bygrant R35GM146974 from the National Institute of General MedicalSciences, National Institutes of Health. The funders had no role instudy design, data collection and analysis, decision to publish, orpreparation of the manuscript. Zhicheng Zhang was supported inpart by SCS Dean’s Fellowship.REFERENCES[1]Kabir Abdulmajeed, Monsuru Adeleke, and Labode Popoola. 2020. 
Online fore-casting of COVID-19 cases in Nigeria using limited data. Data in Brief 30 (2020),105683.[2]George EP Box, Gwilym M Jenkins, Gregory C Reinsel, and Greta M Ljung. 2015.Time series analysis: forecasting and control . John Wiley & Sons.[3]Peter J Brockwell and Richard A Davis. 2009. Time series: theory and methods .Springer science & business media.[4]Kalok Chan. 1992. A further analysis of the lead–lag relationship between thecash market and stock index futures market. The Review of Financial Studies 5, 1(1992), 123–152.[5]Santenna Chenchula, Padmavathi Karunakaran, Sushil Sharma, and MadhavraoChavan. 2022. Current evidence on efficacy of COVID-19 booster dose vaccinationagainst the Omicron variant: A systematic review. Journal of Medical Virology94, 7 (2022), 2969–2976.[6]Marco Ciotti, Massimo Ciccozzi, Alessandro Terrinoni, Wen-Can Jiang, Cheng-Bin Wang, and Sergio Bernardini. 2020. The COVID-19 pandemic. Critical reviewsin clinical laboratory sciences 57, 6 (2020), 365–388.[7]Diego F Cuadros, Claudia M Moreno, Godfrey Musuka, F DeWolfe Miller, PhillipCoule, and Neil J MacKinnon. 2022. Association between vaccination coveragedisparity and the dynamics of the COVID-19 Delta and Omicron waves in theUS.Frontiers in Medicine 9 (2022).[8]Christian G Daughton. 2020. Wastewater surveillance for population-wide Covid-19: The present and future. Science of the Total Environment 736 (2020), 139631.[9]Piotr Fryzlewicz. 2014. Wild binary segmentation for multiple change-pointdetection. (2014).[10] Aikaterini Galani, Reza Aalizadeh, Marios Kostakis, Athina Markou, NikiforosAlygizakis, Theodore Lytras, Panagiotis G Adamopoulos, Jordan Peccia, David CThompson, Aikaterini Kontou, et al .2022. SARS-CoV-2 wastewater surveil-lance data can predict hospitalizations and ICU admissions. Science of The TotalEnvironment 804 (2022), 150151.[11] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neuralcomputation 9, 8 (1997), 1735–1780.[12] Edward H Kaplan, Dennis Wang, Mike Wang, Amyn A Malik, Alessandro Zulli,and Jordan Peccia. 2021. Aligning SARS-CoV-2 indicators via an epidemic model:application to hospital admissions and RNA detection in sewage sludge. Healthcare management science 24 (2021), 320–329.[13] Xuan Li, Huan Liu, Li Gao, Samendra Sherchan, Ting Zhou, Stuart Khan, Markvan Loosdrecht, and Qiin Wang. 2022. Wastewater-based epidemiology pre-dicts COVID-19-induced hospital and ICU admission numbers in over 100 USAcounties. (2022).[14] Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X Liu, andSchahram Dustdar. 2021. Pyraformer: Low-complexity pyramidal attention forlong-range time series modeling and forecasting. In International conference onlearning representations .[15] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin,and Baining Guo. 2021. Swin transformer: Hierarchical vision transformer us-ing shifted windows. In Proceedings of the IEEE/CVF international conference oncomputer vision . 10012–10022.[16] Douglas C Montgomery, Cheryl L Jennings, and Murat Kulahci. 2015. Introductionto time series analysis and forecasting . John Wiley & Sons.[17] Douglas C Montgomery, Elizabeth A Peck, and G Geoffrey Vining. 2021. Intro-duction to linear regression analysis . John Wiley & Sons.[18] Colleen C Naughton, Fernando A Roman Jr, Ana Grace F Alvarado, Arianna QTariqi, Matthew A Deeming, Krystin F Kadonsky, Kyle Bibby, Aaron Bivins,Gertjan Medema, Warish Ahmed, et al .2023. 
Show us the data: global COVID-19wastewater monitoring efforts, equity, and gaps. FEMS Microbes 4 (2023), xtad003.[19] Hanh H Nguyen and Christine W Chan. 2004. Multiple neural networks for along term time series forecast. Neural Computing & Applications 13 (2004), 90–98.[20] OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774 [cs.CL][21] Juliette O’Keeffe. 2021. Wastewater-based epidemiology: current uses and futureopportunities as a public health surveillance tool. Environmental Health Review64, 3 (2021), 44–52.epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Zhicheng Zhang, Sonja Neumeister, Angel Desai, Maimuna Shahnaz Majumder, and Fei Fang[22] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1985. Learninginternal representations by error propagation . Technical Report. California UnivSan Diego La Jolla Inst for Cognitive Science.[23] Hannah R Safford, Karen Shapiro, and Heather N Bischel. 2022. Wastewateranalysis can be a powerful public health tool—if it’s done sensibly. Proceedings ofthe National Academy of Sciences 119, 6 (2022), e2119600119.[24] Thomas A Slater, Sam Straw, Michael Drozd, Stephe Kamalathasan, Alice Cowley,and Klaus K Witte. 2020. Dying ‘due to’or ‘with’COVID-19: a cause of deathanalysis in hospitalised patients. Clinical medicine 20, 5 (2020), e189.[25] Antti Sorjamaa, Jin Hao, Nima Reyhani, Yongnan Ji, and Amaury Lendasse. 2007.Methodology for long-term prediction of time series. Neurocomputing 70, 16-18(2007), 2861–2869.[26] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and RuslanSalakhutdinov. 2014. Dropout: a simple way to prevent neural networks fromoverfitting. The journal of machine learning research 15, 1 (2014), 1929–1958.[27] Natalie Stephens, Frederic Béen, and Dragan Savic. 2022. An analysis of SARS-CoV-2 in wastewater to evaluate the effectiveness of nonpharmaceutical inter-ventions against COVID-19 in the Netherlands. ACS Es&t Water 2, 11 (2022),2158–2166.[28] Pew Research Center . 2022. The Changing Political Geography of COVID-19Over the Last Two Years. https://www.pewresearch.org/politics/2022/03/03/the-changing-political-geography-of-covid-19-over-the-last-two-years/[29] Ruey-Chyn Tsaur. 2008. Forecasting analysis by using fuzzy grey regressionmodel for solving limited time series data. Soft Computing 12, 11 (2008), 1105–1113.[30] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is allyou need. Advances in neural information processing systems 30 (2017).[31] Norbert Wiener, Norbert Wiener, Cyberneticist Mathematician, Norbert Wiener,Norbert Wiener, and Cybernéticien Mathématicien. 1949. Extrapolation, inter-polation, and smoothing of stationary time series: with engineering applications .Vol. 113. MIT press Cambridge, MA.[32] Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. 2021. Autoformer: De-composition transformers with auto-correlation for long-term series forecasting.Advances in Neural Information Processing Systems 34 (2021), 22419–22430.[33] Ke Wu, Didier Darcet, Qian Wang, and Didier Sornette. 2020. Generalized logisticgrowth modeling of the COVID-19 outbreak: comparing the dynamics in the 29provinces in China and in the rest of the world. Nonlinear dynamics 101, 3 (2020),1561–1581.[34] Neo Wu, Bradley Green, Xue Ben, and Shawn O’Banion. 2020. Deep transformermodels for time series forecasting: The influenza prevalence case. 
arXiv preprintarXiv:2001.08317 (2020).[35] Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. 2022. Are transformerseffective for time series forecasting? arXiv preprint arXiv:2205.13504 (2022).[36] Qingyu Zhan, Kristina M Babler, Mark E Sharkey, Ayaaz Amirali, Cynthia CBeaver, Melinda M Boone, Samuel Comerford, Daniel Cooper, Elena M Cortizas,Benjamin B Currall, et al .2022. Relationships between SARS-CoV-2 in wastewaterand COVID-19 clinical cases and hospitalizations, with and without normalizationagainst indicators of human waste. Acs Es&T Water 2, 11 (2022), 1992–2003.[37] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong,and Wancai Zhang. 2021. Informer: Beyond efficient transformer for long se-quence time-series forecasting. In Proceedings of the AAAI conference on artificialintelligence , Vol. 35. 11106–11115.[38] Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin.2022. Fedformer: Frequency enhanced decomposed transformer for long-termseries forecasting. In International Conference on Machine Learning . PMLR, 27268–27286. |
rMSlLb33Gb | A Snapshot of COVID-19 Incidence, Hospitalizations, andMortality from Indirect Survey Data in China in January 2023(Extended Abstract)Juan Marcos Ramírez, Sergio Díaz-Aranda, JoseAguilar, Antonio Fernández AntaIMDEA Networks Institute, Madrid, SpainOluwasegun Ojo, Rosa Elvira LilloUniversidad Carlos III, Madrid, SpainABSTRACTThe estimation of incidence has been a crucial component for moni-toring COVID-19 dissemination. This has become challenging whenofficial data are unavailable or insufficiently reliable. Hence, the im-plementation of efficient, inexpensive, and secure techniques thatcapture information about epidemic indicators is required. Thisstudy aims to provide a snapshot of COVID-19 incidence, hospital-izations, and mortality in different countries in January 2023. To thisend, we collected data on the number of cases, deaths, vaccinations,and hospitalizations among the fifteen closest contacts to survey re-spondents. More precisely, indirect surveys were conducted for 100respondents from Australia on 19 January 2023, 200 respondentsfrom the UK on 19 January 2023, and 1,000 respondents from Chinabetween 18-26 January 2023. To assess the incidence of COVID-19,we used a modified version Network Scale-up Method (NSUM) thatfixes the number of people in the contact network (reach). We havecompared our estimates with official data from Australia and theUK in order to validate our approach. In the case of the vaccinationrate, our approach estimates a very close value to the official data,and in the case of hospitalizations and deaths, the official results arewithin the confidence interval. Regarding the remaining variables,our approach overestimates the values obtained by the Our Worldin Data (OWID) platform but is close to the values provided by theOfficer of National Statistics (ONS) in the case of the UK (within theconfidence interval). In addition, Cronbach’s alpha gives values thatallow us to conclude that the reliability of the estimates in relationto the consistency of the answers is excellent for the UK and goodfor Australia. Following the same methodology, we have estimatedthe same metrics for different Chinese cities and provinces. It isworth noting that this approach allows quick estimates to be madewith a reduced number of surveys to achieve a wide populationcoverage, preserving the privacy of the participants.KEYWORDSCOVID-19, incidence estimation, indirect surveys, NSUM1 INTRODUCTIONTo effectively manage public health resources, monitoring infec-tious diseases such as COVID-19 requires knowledge of variousepidemic indicators, such as the number of cases, deaths, and hos-pitalizations, among others. Most of these indicators have beencollected through the use of methods that require the presenceof a substantial portion of the target population, such as antigentest screenings or hospital records. In order to overcome thesedisadvantages, several methods have used direct surveys to esti-mate indicators [ 1,2]. Unfortunately, direct surveys depend onthe participation of a large number of people to obtain reliableestimates, usually collect sensitive personal data (which may de-ter respondents due to privacy concerns), and require careful datamanipulation.An alternative to these surveys is using indirect surveys, whichask participants about the people in their contact network, ratherthan themselves. From the responses provided by indirect surveys,the estimates of different variables can be derived using NetworkScale-up Method (NSUM) [ 3,4]. 
As a result of this approach, 1) alarger sub-population may be reached, 2) data collection costs maybe reduced, 3) a computationally efficient method can be used toobtain estimates, and 4) participants will be assured of high levelsof privacy. Indirect surveys have already been implemented forestimating indicators during the COVID-19 pandemic [5, 6].In this work, we use indirect online surveys to capture a snapshotof cases, mortality, vaccination, and hospitalizations due to COVID-19 in China for the period of January 18-26, 2023. To this end, amodified version of the NSUM approach that fixes the number ofpeople in the contact network is used to estimate different epidemicindicators. In essence, this modified version extracts knowledgeabout epidemic indicators without resorting to additional controlquestions that usually are considered to estimate the reach (thenumber of people in the contact network). In addition, a data pre-processing stage is included, which comprises of a set consistencyfilters and a nonlinear outlier detection stage, to improve the reli-ability of the collected data. We validate our approach using datafrom Australia and the United Kingdom (UK) collected on January19, 2023. These metrics are compared with respect to the officialvalues reported by Our World in Data (OWID) and the Office forNational Statistics (ONS) from UK. In addition, we use Cronbach’salpha index [ 7], which is a reliability value to measure the internalconsistency of the questionnaire generated by indirect surveys.2 METHODS2.1 Sampling ParticipantsWe conducted online indirect surveys using the PollFish platform.Specifically, we conducted an online survey in China between Jan-uary 18-26, 2023. This online survey collected information aboutvarious COVID-19 indicators (vaccination, deaths, and number ofcases in the last month, the last 7 days, and the past 24 hours) amongthe 15 closest contacts of 1,000 participants (see SupplementaryInformation section for the English version of the survey questions).Notice that the selected number of closest contacts to respondents(15) is considered the size of the good-friends support group accord-ing to Dunbar’s theory [ 8]. This number provides us a trade-offbetween the size of the subpopulation we aim to cover (reach) andJuan Marcos Ramírez, Sergio Díaz-Aranda, Jose Aguilar, Antonio Fernández Anta and Oluwasegun Ojo, Rosa Elvira Lillothe minimization of undesired effects due to respondents such astransmission and recall errors [ 4]. Additionally, for validation, weconducted online surveys in Australia (100 responses) and the UK(200 responses) on January 19, 2023. Table 3 in Supplementary In-formation shows the characteristics of the survey respondents (theplatform provides information on gender, age group, education,and ethnicity). The respondents of each survey are also stratifiedby region. For instance, Fig. 
1 in Supplementary Information showsa map of China where the intensity corresponds to the number ofquestionnaires completed in each province.2.2 Data AnalysisIn order to obtain a reliable dataset, we performed two subphasesof preprocessing: (1) an inconsistency filter, and (2) a univariateoutlier detection.(1)The inconsistency filter removes participants with inconsistentresponses: less infected contacts than fatalities, less infectedcontacts than hospitalized, less infected contacts in the lastmonth than in the last 7 days, and less infected contacts in thelast month than in the last 24 hours.(2)Since the collected variables exhibit extremely skewed distri-butions, the robust outlier detection method reported in [ 9]is applied. Based on the variable data, this method firstly es-timates the quartiles Q1andQ3, as well as the interquartilerange (IQR). Then, the whiskers QαandQβare set. Finally, thismethod preserves the samples in the interval limited by[Q1−1.5eaMCIQR;Q3+1.5ebMCIQR] (1)whereMCis the medcouple statistic that estimates the degree ofskewness of the data. Samples outside the interval are marked asoutliers and, consequently, are removed. In addition, to estimatethe parameters aandb, we consider the system [9] log23Q1−QαIQR≈aMClog23Qβ−Q3IQR≈bMC .(2)whereQαandQβare theα-th andβ-th quantiles of the distri-bution, with α=0.15andα=0.85.We consider the NSUM approach to estimate the rates of thedifferent COVID-19 indicators. In particular, NSUM is a statisticalframework for estimating hidden populations from indirect surveys.There are three main NSUM approaches: frequentist models thatestimate subpopulation rates, Bayesian models that include priors,and network models that estimate population properties [ 4]. Toestimate cumulative incidences, hospitalization rates, and mortalityrates, we modify an NSUM method belonging to the category offrequentist models based on the maximum likelihood estimation(MLE). In this regard, let cibe the number of contacts of the i-threspondent that have a particular characteristic, e.g., persons whohave been hospitalized. Further, consider rithe number of closecontacts of the i-th respondent (which in this study is fixed at ri=15, as shown in the questions in the Supplementary Information).The requirement of close contacts is introduced to minimize theeffect of the visibility bias [ 10] with respect to the classical method[3]. Hence, we estimate the aggregated rate, p, asÍici/Íiri=Íici/(15n), withnas the number of responses (samples). Theestimator’s variance is√︁p(1−p)/(15n), assuming that the ciareindependent binomial random variables with 15 trials and successprobabilityp.We evaluated the validity of our approach by comparing thedifference between the official values reported on the Our World inData (OWID)1platform and the values estimated by our approachfor Australia and the United Kingdom (see Table 1). In both coun-tries, official data were extracted between December 20, 2022, andJanuary 19, 2023. In order to determine the number of hospitalizedpersons given the hospital occupancy, the length of a hospital stayis fixed at 4 days [12, 13].Additionally, for the UK, we use the data provided by the Officefor National Statistics (ONS)2. In particular, for the number of caseswe use the daily estimates of the infected population obtainedby the Coronavirus (COVID-19) Infection Survey of the ONS. 
Forthe 7 days and the last month’s estimates, in order not to countmultiple times the same cases, the sum of the daily percentages isdivided by 10 days, an estimated average duration of the infectionwith Omicron [ 14]. Hospitalizations are the sum of the weeklyadmission rates with COVID-19 in England from Dec 19, 2022, toJan 22, 2023 (5 weeks). Mortality is the rate of registered deathsinvolving COVID-19 in England from Dec 17, 2022, to Jan 20, 2023.Finally, we use Cronbach’s Alpha coefficient to measure the reli-ability of the results obtained from the indirect surveys. Specifically,it quantifies the reliability of a value of an unobservable variableconstructed from the observed variables. The closer this coefficientis to its maximum value of 1, the greater the reliability of the mea-sure, but in general, it is considered that values greater than 0.7are sufficient to guarantee reliability. In this work, we computeCronbach’s Alpha coefficient based on correlations [15].3 RESULTSTable 1 displays the estimates and the 95% confidence interval forthe surveys conducted in the UK and Australia. In addition, it showsthe statistics provided by official reports. The confidence intervalis computed as p±1.96√︁p(1−p)/(15n). As can be observed, thevaccination estimates are very close to the official values: they areestimated as 76.50% (73.70% - 79.29%) and 78.86% (95% confidenceinterval: 77.00% - 80.72%) in Australia and UK, respectively, whilethe official (OWID) values are 84.95% and 79.71%. In the case ofmortality and hospitalizations in the last month, the official valuesare within the confidence interval of our estimates in the case ofAustralia. Specifically, the mortality rate is 0.34% (0.00% - 0.72%) andthe official is 0.005%, the hospitalization rate is 1.02% (0.36% - 1.68%)and the official is 0.112%. Also, in the case of the UK, the officialvalues of ONS are within the confidence interval of our estimates ofthe number of cases, new cases in the last 7 days, and cases in thelast 24 hours. Cronbach’s alpha coefficient is 0.83 for Australia and0.95 for the UK, which tells us that the reliability of the estimatesis very good. The results of the estimates and Cronbach’s alphacoefficient allow concluding that we can use the indirect surveyapproach to make estimates when official data is not available or1https://ourworldindata.org/, downloaded on July 24th, 2023. Observe that these valueshave changed from those downloaded in February 2023 [11].2https://www.ons.gov.uk/, downloaded on February 3rd, 2023.A Snapshot of COVID-19 Incidence, Hospitalizations, and Mortality from Indirect Survey Data in China in January 2023 (Extended Abstract)Table 1: COVID-19 metrics in % (and 95% CI) obtained from indirect survey data and official reports for Australia and the UK. (1)People aged 12 years and over that have received at least one/two/three doses on Aug 31, 2022. 
(2) England data only, 5 weeks.
Metric | Australia: Indirect Survey | Australia: OWID | UK: Indirect Survey | UK: OWID | UK: ONS
Cases (last month) | 12.43 (10.26 - 14.60) | 1.731 | 8.67 (7.39 - 9.96) | 0.298 | 9.663
Vaccination rate | 76.50 (73.70 - 79.29) | 84.95 | 78.86 (77.00 - 80.72) | 79.71 | 93.6/88.2/70.2 (1)
Mortality (last month) | 0.34 (0.00 - 0.72) | 0.005 | 0.43 (0.13 - 0.73) | 0.006 | 0.005 (2)
Hospitalizations (last month) | 1.02 (0.36 - 1.68) | 0.112 | 0.81 (0.40 - 1.22) | 0.133 | 0.044 (2)
Cases (24 hours) | 2.03 (1.10 - 2.96) | 0.118 | 1.30 (0.78 - 1.82) | 0.037 | 1.458
New cases (7 days) | 2.71 (1.64 - 3.78) | 0.118 | 1.30 (0.78 - 1.82) | 0.023 | 1.116
Cronbach's alpha | 0.83 | | 0.95 | |
Table 2: COVID-19 incidence metrics in % (and 95% CI) obtained from indirect survey data for China.
Region | Samples | Cases (last month) | Vaccination rate | Mortality (last month) | Hosp. (last month) | Cases (24 hours) | Cases (7 days)
China | 469 | 78.57 (77.62 - 79.54) | 91.03 (90.36 - 91.70) | 1.19 (0.94 - 1.45) | 9.30 (8.61 - 9.97) | 2.87 (2.48 - 3.26) | 9.52 (8.83 - 10.21)
Provinces:
Jiangsu | 48 | 75.56 (72.42 - 78.69) | 87.92 (85.54 - 90.30) | 1.67 (0.73 - 2.60) | 7.64 (5.70 - 9.58) | 2.64 (1.47 - 3.81) | 9.44 (7.31 - 11.58)
Guangdong | 45 | 80.00 (76.98 - 83.02) | 86.07 (83.46 - 88.69) | 0.59 (0.01 - 1.17) | 5.33 (3.64 - 7.03) | 3.26 (1.92 - 4.60) | 6.96 (5.04 - 8.88)
Shandong | 27 | 74.81 (70.59 - 79.04) | 95.80 (93.85 - 97.76) | 1.48 (0.30 - 2.66) | 8.40 (5.69 - 11.10) | 2.22 (0.79 - 3.66) | 6.67 (4.24 - 9.10)
Cities:
Shanghai | 9 | 68.89 (61.08 - 76.70) | 88.15 (82.70 - 93.60) | 2.22 (0.00 - 4.71) | 5.93 (1.94 - 9.91) | 0.74 (0.00 - 2.19) | 5.19 (1.44 - 8.93)
Guangzhou | 11 | 81.82 (75.93 - 87.70) | 86.67 (81.48 - 91.85) | 1.82 (0.00 - 3.86) | 9.70 (5.18 - 14.21) | 4.85 (1.57 - 8.13) | 7.27 (3.31 - 11.24)
Chengdu | 8 | 89.17 (83.61 - 94.73) | 88.33 (82.59 - 94.08) | 0.83 (0.00 - 2.46) | 8.33 (3.39 - 13.28) | 0.83 (0.79 - 2.45) | 8.33 (3.39 - 13.28)
Beijing | 8 | 74.17 (66.33 - 82.00) | 91.67 (86.72 - 96.61) | 0.83 (0.00 - 2.45) | 13.33 (7.25 - 19.42) | 5.00 (1.10 - 8.90) | 11.67 (5.92 - 17.41)
reliable, and use them considering a prudential bias when assessing them.
Table 2 shows the estimated results for China for all the questions of the survey. While 1,000 indirect survey responses were collected, the filters specified in Section 2.2 drastically reduced the sample size to 469. Comparing our results with the OWID data for China, the vaccination rate is 91.9%, while we estimate 91.03% (90.36%-91.70%), which is almost a perfect match. The number of deaths reported by OWID is 0.005%, while we estimate 1.19% (0.94%-1.45%), a much higher value. However, OWID warns that "the number of confirmed deaths may not accurately represent the true number of deaths". Therefore, our estimate could serve as a first approximation (that may be biased). Our estimate of the number of cases in the last month is 78.57% (77.62%-79.54%), very far from the 6.182% reported by OWID (which warns that "the number of confirmed cases is lower than the true number of infections"). Note that some areas of China may have had a high incidence, as noted in the report published at [16]: "nearly 90% of Henan's population had been infected by 6 January".
We compute estimates for the provinces and cities with the largest number of samples (see Table 2). The rates of vaccination and of cases in the last month are similar across all of them and similar to the values for China as a whole. The Guangdong province shows the lowest estimates of hospitalizations and deaths, while it has large case estimates among provinces. Among cities, Beijing shows low estimates of monthly cases, but large rates of recent cases and hospitalizations. Unfortunately, the sample size for cities is very small. Finally, we would like to point out that, in general, the sample is relatively small compared to the size of the country.
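The drop from 1,000 collected responses to 469 retained ones comes from the preprocessing stage of Section 2.2. A minimal sketch of that stage follows, assuming the responses sit in a pandas DataFrame with hypothetical column names and that the whisker parameters a and b have already been fitted via Eq. (2); the medcouple statistic is taken from statsmodels:

```python
# Sketch of the preprocessing stage: consistency filter plus medcouple-adjusted
# boxplot interval. Column names are hypothetical, not the survey's real fields.
import numpy as np
import pandas as pd
from statsmodels.stats.stattools import medcouple

def consistency_filter(df: pd.DataFrame) -> pd.DataFrame:
    """Drop respondents whose contact counts are logically inconsistent."""
    ok = (
        (df["cases_month"] >= df["deaths_month"])
        & (df["cases_month"] >= df["hospitalized_month"])
        & (df["cases_month"] >= df["cases_7d"])
        & (df["cases_month"] >= df["cases_24h"])
    )
    return df[ok]

def adjusted_boxplot_filter(x: pd.Series, a: float, b: float) -> pd.Series:
    """Keep samples inside [Q1 - 1.5*exp(a*MC)*IQR, Q3 + 1.5*exp(b*MC)*IQR]."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    mc = medcouple(x.to_numpy())
    lo = q1 - 1.5 * np.exp(a * mc) * iqr
    hi = q3 + 1.5 * np.exp(b * mc) * iqr
    return x[(x >= lo) & (x <= hi)]
```

One plausible policy, not necessarily the authors' exact rule, is to apply the interval to each variable separately and drop a respondent whose answer falls outside it for any variable.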
Additionally, as can be seenin Table 3 in Supplementary Information, the sample is biased byage and education level. These biases are reduced with the use ofindirect questions, but still more studies are needed.4 CONCLUSIONS AND FUTURE WORKThis work aims to estimate a snapshot of COVID-19 incidence,hospitalizations, and mortality from indirect surveys in China inJanuary 2023. To estimate these epidemic indicators, we used amodified version of the NSUM technique that fixes the number ofpeople in the contact network. In addition, a data pre-processingstage is included to extract a reliable set of survey samples. In futurework, we are interested in analyzing multiple data preprocessingtechniques to minimize the number of discarded samples and maxi-mize indirect survey knowledge extraction. Additional results anda more extended discussion can be found in the full version of thearticle [11].5 RESEARCH ETHICS APPROVALTo carry out this, a request was previously made before the ethicscommittee of IMDEA Network Institute, who approved it in theJuan Marcos Ramírez, Sergio Díaz-Aranda, Jose Aguilar, Antonio Fernández Anta and Oluwasegun Ojo, Rosa Elvira Lillolast quarter of 2022. Basically, the ethics committee approved thatthe study could be carried out keeping the anonymity of the re-spondents. On the other hand, the platform used for the collectionof survey information guarantees that the participants (belong tothat platform) give their consent to participate in them.6 CONFLICT OF INTEREST DISCLOSURESNone reported.7 FUNDING/SUPPORTThis work was partially supported by grants COMODIN-CM andPredCov-CM, funded by Comunidad de Madrid and the EuropeanUnion through the European Regional Development Fund (ERDF),and grants TED2021-131264B-I00 (SocialProbing) and PID2019-104901RB-I00, funded by Ministry of Science and Innovation - StateResearch Agency, Spain MCIN/AEI/10.13039/ 501100011033 andthe European Union “NextGenerationEU”/PRTR.8 DATA SHARING STATEMENT:The data collected in the indirect surveys is publicly available athttps://github.com/GCGImdea/coronasurveys/tree/master/papers/2023-COVID-19-China-January.9 ACKNOWLEDGMENT:We want to thank Lin Wang for his help with the Chinese versionof the survey.REFERENCES[1] Astley Christina M, Tuli Gaurav, Mc Cord Kimberly A, et al. Global monitoring ofthe impact of the COVID-19 pandemic through online surveys sampled from theFacebook user base Proceedings of the National Academy of Sciences. 2021;118.[2] Oliver Nuria, Barber Xavier, Roomp Kirsten, Roomp Kristof. Assessing the Im-pact of the COVID-19 Pandemic in Spain: Large-Scale, Online, Self-ReportedPopulation Survey Journal of Medical Internet Research. 2020;22:e21319.[3] Killworth Peter D, McCarty Christopher, Bernard H Russell, Shelley Gene Ann,Johnsen Eugene C. Estimation of seroprevalence, rape, and homelessness in theUnited States using a social network approach Evaluation review. 1998;22:289–308.[4]Laga Ian, Bao Le, Niu Xiaoyue. Thirty years of the network scale-up methodJournal of the American Statistical Association. 2021;116:1548–1559.[5]Garcia-Agundez Augusto, Ojo Oluwasegun, Hernández-Roig Harold A, et al.Estimating the covid-19 prevalence in spain with indirect reporting via opensurveys Frontiers in Public Health. 2021;9:658544.[6] Srivastava Ajitesh, Ramirez Juan Marcos, Diaz Sergio, et al. Estimating TemporalTrends using Indirect Surveys arXiv preprint arXiv:2307.06643. 2023.[7] L. Cronbach. Coefficient alpha and the internal structure of tests Psychometrika.1951;16:297–334.[8]Dunbar Robin. 
How many friends does one person need? Dunbar’s number andother evolutionary quirks . Harvard University Press 2010.[9] Hubert Mia, Vandervieren Ellen. An adjusted boxplot for skewed distributionsComputational statistics & data analysis. 2008;52:5186–5201.[10] Killworth Peter D, McCarty Christopher, Johnsen Eugene C, Bernard H Russell,Shelley Gene A. Investigating the variation of personal network size underunknown error conditions Sociological Methods & Research. 2006;35:84–112.[11] Ramirez Juan Marcos, Diaz-Aranda Sergio, Aguilar Jose, Ojo Oluwasegun, LilloRosa Elvira, Fernandez Anta Antonio. A Snapshot of COVID-19 Incidence, Hos-pitalizations, and Mortality from Indirect Survey Data in China in January 2023medRxiv. 2023:2023–02.[12] Abdullah F, Myers J, Basu D, et al. Decreased severity of disease during the firstglobal omicron variant covid-19 outbreak in a large hospital in tshwane, southafrica International Journal of Infectious Diseases. 2022;116:38–42.[13] Peralta-Santos André, Rodrigues Eduardo Freire, Moreno Joana, et al. Omicron(BA. 1) SARS-CoV-2 variant is associated with reduced risk of hospitalizationand length of stay compared with Delta (B. 1.617. 2) MedRxiv. 2022:2022–01.[14] Ries Julia. Omicron Infection Timeline: When Symptoms Start and How LongThey Last Health. November 18, 2022.[15] Nunnally Jum C, Bernstein Ira H. Psychometric Theory . 1994.[16] Lewis Dyani. China’s COVID wave has probably peaked, model suggests Nature.2023;613:424–425.Figure 1: Number of completed questionnaires for the surveydeployed in ChinaTable 3: Characteristics of the survey respondents for Aus-tralia, the United Kingdom, and China.Characteristic Australia United Kingdom China1.Number of participants 100 200 10002.Gender, (%)(a)Female 56.00 58.00 46.90(b)Male 44.00 42.00 53.103.Age groups, (%)(a)18-24 13.00 9.50 18.70(b)25-34 27.00 26.00 44.30(c)35-44 29.00 24.50 27.40(d)45-54 17.00 22.50 8.40(e)>54 14.00 17.50 1.204.Education, (%)(a)Middle school 2.00 5.00 1.50(b)High school 33.00 22.00 7.90(c)Technical college 14.00 35.00 8.30(d)University 43.00 25.00 63.30(e)Post-graduate 7.00 11.50 18.905.Ethnicity, (%)(a)Arab 0.00 0.00 0.20(b)Asian 8.00 7.50 94.60(c)Black 0.00 2.50 0.20(d)Hispanic 0.00 1.00 0.00(e)Latino 0.00 0.00 0.20(f)White 83.00 74.00 1.00(g)Multiracial 3.00 1.00 0.20(h)Other 6.00 14.00 1.60SUPPLEMENTARY INFORMATIONQuestions of the Indirect SurveyQuestions in English. Think of your 15 closest contacts in the lastmonth. The rest of the questions below are with respect to thisgroup of people. These contacts can be family, friends, or colleagueswhose health status you know.(1)From the above 15 closest contacts in the last month, howmany have had COVID-19 in the last month?(2)From the above 15 closest contacts in the last month, howmany have been hospitalized for COVID-19 in the lastmonth?(3)From the above 15 closest contacts in the last month, howmany died from COVID-19 in the last month?(4)From the above 15 closest contacts in the last month, howmany have COVID-19 today?(5)From the above 15 closest contacts in the last month, howmany started with COVID-19 in the latest 7 days?(6)From the above 15 closest contacts in the last month, howmany have (ever) been vaccinated for COVID-19? |
qkDCSV-RMt | Spectral Clustering Identifies High-risk Opioid TaperingTrajectories Associated with Adverse EventsMonika RayGeneral Internal MedicineCenter for Healthcare Policy andResearchUniversity of California Davis HealthUnited States of Americamray@ucdavis.eduJoshua J. FentonDepartment of Family andCommunity MedicineCenter for Healthcare Policy andResearchUniversity of California Davis HealthUnited States of Americajjfenton@ucdavis.eduPatrick S. RomanoGeneral Internal MedicineCenter for Healthcare Policy andResearchUniversity of California Davis HealthUnited States of Americapsromano@ucdavis.eduABSTRACTNational opioid prescribing guidelines and related quality measureshave stimulated changes in opioid prescribing. Studies have shownthat rapid dose tapering may be associated with increased opioid-related and mental health events in some patient groups. However,there isn’t enough research on trajectories of dose tapering imple-mented in clinical practice, and how heterogeneous populations ofpatients respond to different treatments. Our aim was to examineprescribed opioid doses in a large, longitudinal, clinically diverse,national population of opioid-dependent patients with either Medi-care or commercial insurance. We performed phenotype clusteringto identify unsuspected, novel patterns in the data. In a longitu-dinal cohort (2008-2018) of 113,618 patients from the OptumLabsData Warehouse with 12 consecutive months at a high, stable meanopioid dose (≥50 morphine milligram equivalents), we identified30,932 patients with one dose tapering phase that began at the first60-day period with ≥15% reduction in average daily dose acrossoverlapping 60-day windows through seven months of follow-up.We applied spectral clustering as we preferred an assumption-freeapproach with no apriori information being imposed. Spectral clus-tering identified several cluster-cohorts, with three that includedover 98% of the sample. These three clusters were similar in baselinecharacteristics, but differed markedly in the magnitude, velocity, du-ration, and endpoint of tapering. The cluster-cohort characterisedby moderately rapid, steady tapering, most often to an end opioiddose of zero, had excess drug-related events, mental health events,and deaths, compared with a cluster characterised by very slow,steady tapering with long-term opioid maintenance. Moderatelyrapid tapering to discontinuation may be associated with higherrisk than slow tapering with longer-term maintenance of opioidanalgesia. Furthermore, several clusters highlighted a cohort thathad complete taper reversals indicating a treatment failure as thetapering was not maintained. Our findings suggest that identify-ing subtle yet clinically meaningful patterns in opioid prescribingdata, such as patterns within the dose trajectories, can highlightthe distinct characteristics separating subpopulations.Permission to make digital or hard copies of part or all of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. 
Copyrights for third-party components of this work must be honored.For all other uses, contact the owner/author(s).epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA©2023 Copyright held by the owner/author(s).CCS CONCEPTS•Applied computing →Health informatics ;Physical sciencesand engineering .KEYWORDShigh dose opioids, spectral clustering, patient subpopulations, phe-notype clustering, opioid crisisACM Reference Format:Monika Ray, Joshua J. Fenton, and Patrick S. Romano. 2023. Spectral Clus-tering Identifies High-risk Opioid Tapering Trajectories Associated with Ad-verse Events. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDD InternationalWorkshop on Epidemiology meets Data Mining and Knowledge Discovery,August 7, 2023, Long Beach, CA, USA. ACM, New York, NY, USA, 9 pages.1 INTRODUCTIONNational prescribing guidelines by the Centers for Disease Controland Prevention (CDC) and the current opioid overdose crisis haveled to substantial dose tapering among patients on long-term opioidtherapy for chronic pain, especially since 2016 [ 10,16,30]. A qualitymetric endorsed by the National Quality Forum (NQF) encouragesprescribers to reduce opioid doses below 90 morphine milligramequivalents (MME) per day [ 33]. In the setting of long-term opi-oid therapy for chronic pain, several studies have shown worseoutcomes associated with rapid dose reduction [ 1,13,17,41] anddose tapering has emerged as a complex issue for both physiciansand patients. To better inform evidence-based clinical practices,health system policies, and public programmes, it is necessary tocharacterise population heterogeneity (phenotype clustering) andto understand which patients are appropriate candidates for dif-ferent tapering approaches. This type of research requires a betterunderstanding of the variety of tapering trajectories that cliniciansimplement in diverse populations to enable comparisons of the risksand benefits of alternative approaches in relevant subpopulations.Large healthcare data warehouses that accumulate longitudinalrecords from multiple sources offer great opportunities for im-proved understanding of population heterogeneity in opioid dosemanagement.To undertake this research, we used retrospective data from theOptumLabs Data Warehouse (OLDW), which includes longitudinalhealth information for over 109 million commercial enrollees and12.5 million Medicare Advantage enrollees. We leveraged the ret-rospective cohort previously created by Agnoli and colleagues [ 1],whose prior research suggested that the peak tapering velocity hasepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Monika Ray, Joshua J. Fenton, and Patrick S. Romanoa significant mean effect on adverse outcomes. However, opioid-dependent patients with chronic pain often resist any dose reduc-tion, while pharmacies and regulators encourage dose reduction forevery eligible patient. To inform better clinical practice and policies,we need to understand how the peak tapering velocity fits into over-all patterns of opioid dose management over time, and then explorethe characteristics of higher- and lower-risk subpopulations of pa-tients undergoing dose tapering. For this purpose, we used spectralclustering to describe clinically meaningful subpopulations. Specif-ically, we wanted to examine similarities among patients withina cluster and differences among patients across clusters. 
Spectralclustering has been applied to speech processing, computer visionand exploratory data mining in biology [ 3,6,11,21,38,42], butopioid dosing is a novel and highly topical application in the currentera of increasing opioid-related overdose death rates [15].This work deviates from the popular hypothesis-driven approacheswhere the functional form of the models are independent predic-tors and dependent outcomes. In this data-driven approach theaim is to first cluster phenotypes, without classifying features asindependent or dependent variables, and then identify meaningfulsignatures within these clusters [ 25]. These signatures can then beused in predictive models as either predictors or outcomes. Themain purpose of phenotype clustering is to uncover hidden pat-terns. The primary focus of our exploratory work is see (1) how thepatients cluster based on their phenotypes (grouping patterns orphenotypes) and (2) whether these clusters have any remarkabledifferences (i.e., identify signatures that can be used in predictiveanalytics).1.1 Data Cohort and Adverse EventsWe obtained data from 2008-2018 for adults from the OptumLabsData Warehouse (OLDW) which contains de-identified adminis-trative claims data, including medical and pharmacy claims andeligibility information for commercial and Medicare Advantage en-rollees, representing a mixture of ages and regions across the UnitedStates. The entire cohort, which we received from Agnoli and col-leagues [ 1], had a stable baseline period of 12 consecutive monthsat a high opioid dose ≥50 MME, resulting in 113,618 patients. Thetapered cohort was defined as the subset of patients who had a dosetapering phase, which began on the first 60-day period with ≥15%reduction in average daily dose across overlapping 60-day windowsthrough the initial seven months of follow-up. Patients who had≥15% reduction in average daily dose over a longer time frame werenot included due to uncertainty about the intent of slight MMEdose reductions (which could be driven by delays in picking upprescriptions). To facilitate interpretation we selected a populationof patients who had only one period of tapering. Mortality in thetapered cohort was determined by analysing the time after taperinitiation and matching against the records in the OLDW mortalitytable.Adverse events included emergency department (ED) visits orhospitalisations for (1) drug or alcohol overdose or withdrawal(drug-related events); and (2) depression, anxiety, or suicide at-tempts (mental health events). Drug-related and mental healthevents were identified using International Classification of Diseases,Tenth Revision, Clinical Modification (ICD-10-CM) diagnosis codesfor claims from October 2015 through 2019 and ICD-9-CM diagnosiscodes for claims from 2008 through September 2015. Comorbiditieswere identified for all patients using the available software (AHRQ"Elixhauser" Comorbidity Software) in the OLDW [ 12,29]. Thisproject was determined by the University of California Office of thePresident to be exempt from human subjects review, as the OLDWuses completely de-identified, anonymised data.1.2 Analytic MethodsWe considered several methods to identify subpopulations and theircharacteristics such as K−Means clustering and latent class analy-sis (LCA).K−Means clustering is a popular clustering algorithmbut it is based on many restrictive assumptions, which most real-world datasets violate [ 20,35]. 
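As a side illustration of that point, the snippet below contrasts K-means with an off-the-shelf spectral clustering on a non-convex toy dataset; this uses scikit-learn as a generic stand-in and is not the Spectrum package applied in this study:

```python
# Illustration (not the authors' Spectrum pipeline): K-means struggles on
# non-convex clusters that a graph-based spectral method can recover.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.metrics import adjusted_rand_score

X, y_true = make_moons(n_samples=600, noise=0.06, random_state=0)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                        n_neighbors=10, assign_labels="kmeans",
                        random_state=0).fit(X)

print("ARI, K-means:  ", adjusted_rand_score(y_true, km.labels_))
print("ARI, spectral: ", adjusted_rand_score(y_true, sc.labels_))
```

On data like this, K-means typically cuts each crescent in half, while the graph-based method recovers the two shapes, which mirrors the motivation for a spectral approach here.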
The algorithm operates on the inputdata matrix and, hence, is sensitive to the size of the data ( N) as wellas number of features. LCA [ 23,43], a type of finite mixture model,may be suitable for describing dose trajectories, but it requiresan outcome to be specified. By comparison, spectral clustering ispurely unsupervised and does not require outcome variables. Forour analyses, we used a novel spectral clustering algorithm (Spec-trum) developed by John and colleagues [ 21]. Spectral graph theoryassociates the spectrum of a matrix, i.e. eigenvalues of a matrix,to the properties of a graph via the Laplacian matrix [ 7,8,37]. Itoperates on graphs that are constructed between neighbouringnodes that represent data points (i.e., patients). It identifies arbitrar-ily shaped clusters (with convex or non-convex boundaries) usingthe eigenvectors in the Laplacian similarity matrix [ 7,9,26,46].A Laplacian similarity matrix models the local neighborhood rela-tionships between data points as an undirected graph [ 4,37,40].Spectral clustering is robust to the geometry of the clusters andoutliers, and does not require the user to specify the number ofclusters [ 2,24,46]. It identifies the number of clusters by comput-ing the differences between the consecutive ordered eigenvaluesof the graph Laplacian and identifying the first pair of consecutiveeigenvalues with the maximum difference in their values.The steps of spectral clustering include - (1) creation of the sim-ilarity matrix, then (2) the creation of the Laplacian matrix, andfinally (3) creation of clusters [ 32,44]. Variations of spectral clus-tering algorithms address issues related to creation of the similaritymatrix, graph-partitioning and speed on massive datasets. Sincespectral clustering operates on the Laplacian similarity matrix,which is an NxNmatrix ofNdata points, it is sensitive to the sizeof the data. The Spectrum algorithm developed by John et al., isnovel in the way it combines the following features - (1) combinedZelnik-Manor self-tuning [ 49], and the Zhang density-aware [ 50]kernels to create the similarity matrix, (2) Ng spectral clusteringmethod to estimate the optimal number of clusters [ 31], and Gauss-ian mixture modelling (GMM) [ 47] to finally cluster the data, and (3)a fast approximate spectral clustering (FASP) method [ 48] to allowfor fast clustering of massive data on regular desktop machines.The self-tuning component of the kernel adjusts to the scale ofthe data, while the density-aware component adapts to the localdensity of the data creating more or fewer connections dependingon the density of the regions. Spectrum uses the diffusion of tensorproduct graphs (TPG) to capture higher order information in thedata and highlight underlying patterns in the data [ 39]. The finalSpectral Clustering Identifies High-risk Opioid Tapering Trajectories epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USAclusters are plotted using the first two principal components, PC1and PC2. We did not use the eigen gap-statistic to determine thenumber of clusters as it was not essential for us to constrain thenumber of clusters nor were we against identifying small cohortsif the cohort had important patterns to investigate further. In ourwork, we were searching for anomalies or ‘interesting patterns’that could explain the underlying population heterogeneity. 
Theeigen gap heuristic works well if there are well-defined clustersbut not of much help when there are noisy or overlapping clusters,which is likely to be the case in this data.The variables in the input space of the spectral clustering algo-rithm were age, gender, monthly average opioid dose (MME), meanbaseline dose, count of drug-related events in the pre-taper and aftertapering initiation phases, the number of mental health events inthe pre-taper and after tapering initiation phases, benzodiazepinesco-prescription at baseline and at 30 days, 31 Elixhauser comor-bidity flags, and the change in dose across consecutive months for12 months. The number of drug-related and mental health eventswere identified for each patient before taper and after taper initi-ation as these were the adverse events of interest. We reviewedeach cluster to identify the prevalence of different adverse eventsas well as the number of deaths after taper initiation. We report thedistinguishing characteristics across the cluster subpopulations. Forcounterfactual inference, we identified the number and proportionof drug-related and mental health events in each cluster, and thencomputed the excess number of those events relative to the nullassumption of equal event risk across all clusters. The counterfac-tual calculation for each adverse event is given by - ExcessEvents =(NumEventsCluster )−(NumPatientsCluster ∗(TotalEventsTotalPatients)),where, for each adverse event, i.e., mortality, drug-related events ormental health events, ExcessEvents is the number of excess eventsin the cluster, NumEventsCluster is the number of observed eventswithin the cluster, NumPatientsCluster is the number of patients inthe cluster, TotalEvents is the total number of adverse events in theentire data and TotalPatients is the total number of patients in theanalysis.2 RESULTSAmong the 113,618 patients in the entire cohort 33,628 had one ormore phases of opioid dose tapering (29.5%) based on the taperingdefinition of≥15% reduction in average daily dose in 7-months offollow-up [ 1]. Fig. 1 shows the analytical pipeline and the resultantplot of the 10 clusters identified. We could not show all the tenclusters clearly in a 2-D plot. Since spectral clustering plots theclusters by collapsing them onto the first two principal components,the multi-dimensional aspect of the clusters is not visible. However,Fig. 1 shows that the clusters are not spherical and the data hasoutliers. Table 1 shows the characteristics of patients who tapered;the sample was 54% female and 92% had only one tapering periodavailable for analysis.Spectral clustering of 30,932 patients who underwent single ta-pers resulted in 10 clusters (groups of patients or subpopulations)with relatively similar baseline characteristics. All clusters hadpatients with high mean baseline doses of 140-237 MME/day. Ofparticular interest were the three large clusters and their baselinecharacteristics shown in Table 2. The other seven clusters’ charac-teristics are discussed below but not shown due to small cell sizepolicy. The three large clusters (1, 2, and 10) were very similar de-mographically, with mean ages of 58.7, 57.0, and 58.4 years, and 56%,53%, and 50% female composition, respectively. 
They were also sim-ilar on baseline co-prescribing of benzodiazepines (29%, 30%, and30%, respectively) and comorbid diagnoses during the baseline year,such as alcohol abuse and dependence (2%, 3%, and 2%, respectively),drug abuse and dependence (17%, 17%, and 15%, respectively), anddepression (32%, 31%, and 30%, respectively). Furthermore, theyhad similar medical experiences during their pre-taper period ofstable opioid dosing, with relatively few drug-related events (mean0.042, 0.053, and 0.043, respectively) and more mental health events(mean 3.81, 4.03, and 3.66, respectively).Fig. 2 compares the tapering trajectories across clusters. Eachtrajectory is plotted as the average monthly dose of the patients inthe cluster. The three largest clusters had markedly different opioiddose tapering trajectories and associated adverse events as shownin Table 3. The number of excess events represents the differencebetween the number of observed events and the number of eventsthat would have occurred if all the clusters had the same event rate.About 55% of patients were in cluster 1, characterised by very slowand steady tapering to a final dose about two-thirds of baseline,with low event rates and no reversal to pre-taper baseline dose.While clusters 2 and 10 looked quite similar in their baseline char-acteristics, they had very different taper trajectories. Cluster 2 wascharacterised by relatively rapid tapering to zero or very low doses,while cluster 10 was characterised by somewhat slower taperingfrom lower baseline doses to higher end doses. Both these clustershad slightly higher event rates than other clusters. Clusters 2 and10 also had more drug-related events than cluster 1 (mean 0.116and 0.128 versus 0.074), more mental health events (mean 0.089 and0.075 versus 0.058), and more deaths (mean 0.079 and 0.098 versus0.036) during the tapering year. However, compared to cluster 10,cluster 2 had higher baseline mean and median doses (192.3 and137.0 MME versus 140.3 and 104.0 MME), and a lower mean enddose (12.9 versus 37.6 MME). The slow trajectory for cluster 1, andthe very low or zero doses in clusters 2 and 10, continued intothe 15th month, although those months were not included in thespectral clustering analyses.The characteristics of the taper trajectories for all the clusters aredetailed in Table 4. The left panel in Fig. 3 shows the proportion ofpatients with 0 MME dose of opioids across the three clusters eachmonth, while the right panel shows the taper trajectory. Table 5shows the relative change in the proportion of patients who wereprescribed 0 MME opioids at each time point in the three clusters.Cluster 2 had the highest proportion of patients (73%) who werecompletely tapered off opioids at the end of 12 months, comparedto cluster 10 (66%) and cluster 1 (2%). Since cluster 1 demonstratedthe safest outcomes, we compared clusters 2 and 10 to cluster 1.The graph in the left panel in Fig. 3 shows that cluster 2 had a steepyet steady upward trend in the proportion of patients who weretaken off opioids, whereas patients in cluster 1 almost uniformlystayed on opioids, and cluster 10 demonstrated a pattern of delayeddiscontinuation.The remaining 1.3% of patients sorted into seven smaller clusters,all of which had patients who were tapered to or close to 0 MMEepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Monika Ray, Joshua J. Fenton, and Patrick S. 
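The excess event counts quoted above follow the counterfactual formula from the Methods, i.e., observed events minus the events expected if every cluster shared the overall event rate. A minimal sketch with illustrative tallies, not the study's actual counts:

```python
# Counterfactual excess events per cluster: observed events minus the events
# expected under a uniform event rate across all clusters.
def excess_events(num_events_cluster, num_patients_cluster,
                  total_events, total_patients):
    expected = num_patients_cluster * (total_events / total_patients)
    return num_events_cluster - expected

# Hypothetical tallies for three clusters (cluster name: (events, patients)):
clusters = {"A": (150, 17000), "B": (130, 13000), "C": (10, 500)}
total_events = sum(e for e, _ in clusters.values())
total_patients = sum(n for _, n in clusters.values())
for name, (events, patients) in clusters.items():
    print(name, round(excess_events(events, patients,
                                    total_events, total_patients), 1))
```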
RomanoFigure 1: Analysis FlowchartTable 1: Characteristics of the patients who taperedVariables Categories nGender Female 18,197Male 15,431Age Mean±Std. 58.0±11.6Number of Tapers 1 30,9322 2,462>=3 234Number of drug-related events before tapering 0 32,2381 1,182>=2 208Number of drug-related events after tapering 0 31,2101 1,8882 356>=3 174Number of mental health events before tapering 0 14,7881 3,9842 2,9493 2,0404 1,6655 1,2236 1,034>=7 5,945Number of mental health events after tapering 0 32,0411 1,0962 300>=3 191Spectral Clustering Identifies High-risk Opioid Tapering Trajectories epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USATable 2: Characteristics of Clusters 1, 2 and 10 in the pre-taper periodCluster No. patients Age Female benzodiazepines Alcohol Depression Drug Drug-related Mental Health Base dose(Mean) (% ) Rx (%) abuse (% ) (% ) abuse (% ) event event (Mean MME)counts (Mean) counts(Mean)1 16,965 58.74 55.7 28.9 2.4 31.7 16.6 0.04 3.81 189.822 13,025 56.96 53.1 30.1 3.0 31.4 16.5 0.05 4.03 192.3110 531 58.36 49.5 29.7 3.4 30.3 15.1 0.04 3.66 140.33Table 3: Adverse events after taper initiation in clusters 1, 2 and 10Cluster No. patients Drug-related No. Excess drug- Mental Health No. Excess Mental Deaths/1000 No. Excess(%) events/1000 related events events/1000 Health events Deaths1 16,965 (55%) 74.0 -320.2 58.4 -240.2 36.1 -329.82 13,025 (42%) 116.2 303.6 89.4 220.5 79.1 306.210 531 (< 2%) 128.1 18.7 75.3 1.5 97.9 22.5Table 4: Average monthly dose for 12 months from taper initiation - Taper TrajectoriesCluster BaseDose Mon1 Mon2 Mon3 Mon4 Mon5 Mon6 Mon7 Mon8 Mon9 Mon10 Mon11 Mon12 Taper Trajectory1 189.82 174.53 170.27 165.64 161.23 157.28 154.15 155.05 155.53 155.25 154.05 151.68 144.01 Very slow, no reversal2 192.31 175.19 157.04 139.42 119.01 96.06 75.19 59.71 45.49 33.53 23.35 15.18 12.90 Rapid, no reversal3 236.81 213.18 121.69 1.38 193.46 204.26 206.02 191.60 163.58 150.98 141.49 129.90 114.59 Very Rapid, complete reversal4 192.57 179.16 0.44 185.31 194.26 194.64 176.29 167.38 160.98 150.52 143.25 134.76 133.31 Very Rapid, complete reversal5 196.99 183.05 147.09 92.71 0.33 172.22 176.60 158.29 145.41 139.10 135.23 119.75 113.12 Very Rapid, complete reversal6 212.81 205.10 182.34 153.96 106.37 77.02 5.26 0.00 168.49 169.27 152.98 120.84 115.09 Very Rapid, complete reversal7 227.55 217.24 171.99 152.88 122.05 101.76 57.73 31.72 22.56 0.00 148.42 147.73 135.03 Rapid, partial reversal8 217.07 205.71 177.62 161.43 145.93 102.60 78.04 64.87 51.06 33.13 0.00 157.58 166.52 Rapid, partial reversal9 220.37 203.30 160.72 117.39 85.31 63.20 59.18 48.60 36.30 29.20 18.94 0.00 143.26 Rapid, partial reversal10 140.33 124.30 114.04 111.72 109.34 101.91 92.57 85.40 80.46 100.04 101.61 81.17 37.57 Erratic, no reversalFigure 2: The average monthly dose in MME for all the patients within each cluster.(not shown due to small cell size policy). In clusters 3, 4, and 5, dosetapering to near zero occurred very rapidly within 4 months afterinitiation, but the pre-taper dose was quickly restored and slowtapering was initiated instead. On the other hand, in clusters 6, 7, 8,and 9, rapid tapering occurred over a longer period of 6-11 months,but the taper was largely reversed and the subsequent trajectorywas truncated due to the cohort design. 
Drug-related event ratesand mental health event rates were quite variable across these smallclusters (data not shown), but in aggregate, the mental health eventrate of patients in these seven clusters was over twice that of cluster1 (mean 0.117 versus 0.058).epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Monika Ray, Joshua J. Fenton, and Patrick S. RomanoFigure 3: The proportion of patients without opioids, i.e., with an average monthly dose of 0 MME, in the three clusters ofinterest and their corresponding tapering trajectories.Table 5: Relative change in the proportion of patients who were prescribed 0 MME opioids by monthMonth C1 Prop. C1 Relative C2 Prop. C2 Relative Diff.Relative C10 Prop. C10 Relative Diff. RelativePatients change Patients change changes C1 - C2 Patients change changes C1 - C102nd 0.007 0.058 0.0243rd 0.010 0.046 0.112 0.95 -0.49 0.038 0.54 -0.084th 0.013 -0.99 0.187 0.66 -1.65 0.056 0.50 -1.495th 0.015 0.13 0.287 0.54 -0.41 0.090 0.60 -0.476th 0.016 -0.98 0.378 0.32 -1.30 0.109 0.21 -1.197th 0.009 -0.46 0.454 0.20 -0.66 0.154 0.41 -0.878th 0.010 -0.99 0.530 0.17 -1.16 0.196 0.27 -1.269th 0.008 -0.21 0.597 0.13 -0.34 0.102 -0.48 0.2710th 0.008 -0.99 0.659 0.10 -1.10 0.098 -0.04 -0.9511th 0.007 -0.15 0.707 0.07 -0.22 0.358 2.65 -2.8012th 0.024 -0.98 0.733 0.04 -1.01 0.663 0.85 -1.83Relative change refers to the difference in the proportion of patients within the cluster between the current and the previous month.Negative value indicates that fewer patients were prescribed 0 MME opioid in the current month compared to the previous month. C1-Cluster 1; C2- Cluster 2; C10- Cluster 10.3 DISCUSSIONIn this large longitudinal cohort of patients with chronic pain receiv-ing high dose opioids at stable dosing for at least one year, spectralclustering analysis suggested wide variability in dose tapering pat-terns over the first year of tapering. These trajectories show notablevariation in the velocity and duration of tapering, post-taperingminimum doses and subsequent re-initiation (taper reversal) ofmoderate-to-high opioid doses, which was an unexpected finding.While the specific number of clusters is not important, the cohortsidentified were interesting and are discussed here. The largest clus-ter (cluster 1 with 55% of patients) was characterised by very slow,gradual tapering from a mean baseline dose of 190 MME to 144MME at 12 months, whereas the second largest cluster (cluster 2with 42% of patients) was characterised by quicker and steep taper-ing from a mean baseline dose of 192 MME to only 12.9 MME (with73% of patients discontinued). The latter cluster, unlike other clus-ters, had a substantial excess of both drug-related and mental healthevents after the initiation of tapering, suggesting that tapering pa-tients accustomed to high-dose prescription opioids to zero maybe associated with important health risks. Our results suggest thatthere is a significant subpopulation of patients receiving high-doseopioids for chronic pain who may not tolerate tapering to very lowdoses. Many of these patients may have had opioid use disorders;previous research in the OLDW has shown that such patients havebetter outcomes if treated with buprenorphine or methadone [ 45].There wasn’t any strong rationale to specify the number of clus-ters as we were looking for ‘interesting patterns’ which could seemSpectral Clustering Identifies High-risk Opioid Tapering Trajectories epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USAlike outliers compared to the rest of the data. 
Notably, spectral clus-tering identified previously unsuspected and unusual patterns inthe opioid dose management data. In particular, two small clusterswere characterised by rapid tapering to negligible or zero doses,followed by re-initiation of prescription opioids at moderately highdoses. These patterns merit further exploration as they stronglysuggest that reversal of tapering may be a marker of an unsuccess-ful tapering strategy and that clinicians can safely resume prioropioid doses for some of these patients. These patients with unsuc-cessful tapers need to be separated and studied alongside the groupof successful tapers rather than be combined as was done whenthis cohort was selected for analysis (See Data Cohort and AdverseEvents section). This suggests that the definition of a tapered cohortneeds to be re-visited and taper reversals be counted as an adverseevent. Our findings highlight the importance of considering the ve-locity of tapering, as suggested by Agnoli and colleagues’ research,along with the taper duration and post-tapering final dose as clin-icians attempt to devise safer dose tapering strategies to addressthe current opioid overdose epidemic in the US. Unsupervised datamining methods are powerful tools when the aim is to understandthe data better and see what may have been previously missed inhypothesis-driven studies. Lastly, unsupervised knowledge discov-ery research helps in extracting novel, unsuspected phenomenathat can be investigated using supervised methods. These methodsmay also challenge what was previously thought to be true; for ex-ample, by identifying previously unrecognised patterns of taperingreversal shown in Fig. 2.During the writing of this manuscript, another report was pub-lished that analysed trajectories in patients receiving long-termopioid therapy using based trajectory modeling (GBTM) [ 5]. Bin-swanger’s analysis identified five trajectories. From the clinicalperspective, this is interesting but is an oversimplification as itputs all tapering patients into two groups – one slightly decreas-ing (which they reassigned to the stable group) and one decreasing(which they compared with the stable group) but they did not clearlyidentify taper reversals, suggesting that all tapers are maintainedover time. We selected our cohort based on whether they taperedat some point but did not filter to select those with decreasing tra-jectories based on different velocities. Hence, it is quite plausibleto expect multiple groups. In addition to being fully exploratory,with no assumptions on what kind of trajectories to expect, ouranalysis focused on patients for whom a taper was pre-determinedto understand the different types and speeds of tapering. Therefore,our results support and facilitate future analyses comparing the out-comes of these different tapering approaches with the alternative ofnot tapering at all (a control group of non-tapers), which is a viableapproach but was not represented in our sample. Other notabledifference from Binswanger’s work is that we did not assume anydata properties such as distributions, number of anticipated clusters,etc. to run spectral clustering and our dataset is many times largerand representative of the entire population in the US. As we weresearching for subtle differences in a population that consists oftapering patients, in order to receive an amplified signal, we need alarge cohort and use methods that do not impose any assumptionson the input data or the results. 
This is exactly what knowledgediscovery is, i.e., where the scholar keeps an open mind about thekind of patterns/information that will emerge. Unlike Binswanger’sreport, we did not impose any restriction on the spectral cluster-ing algorithm. It was during the analysis of clusters to understandwhy the patients segregated as such, did we notice that the patternof the trajectories were the point of subtle difference and discussedthis in detail. This is work in progress as we will need to furtheranalyse these patterns using parametric methods and also studyother potential outcomes of such tapering patterns. For the purposeof knowledge discovery with no apriori information, we preferredan assumption-free approach with no apriori information beingimposed in any phase of the analysis. Furthermore, as we did nothave any prior knowledge of the underlying distribution patternsin this cohort, GBTM could have led us to incorrect results [ 28].GBTM relies heavily on prior information which, in essence, is adifferent approach than the one here which was to identify pat-terns that automatically emerge and would correlate with nuanceddifferences in an already tapering population.We acknowledge some limitations in our analyses such as un-known intent of the prescribing provider. For example, the physi-cian’s choice of a rapid or slow taper may be driven by unobservedcharacteristics of patients or their medical histories, which mayindependently contribute to the resulting outcomes. We were alsounable to distinguish patient-supported tapering from physician-demanded tapering and what may have triggered taper reversals.Finally, the current data do not capture illicit opioid use, sharingof opioids prescribed for other patients, or methadone adminis-tered in certified treatment programmes. Nevertheless, our studyis relevant to the research and clinical communities grapplingwith the opioid crisis. There is substantial interest in understand-ing factors contributing to the current epidemic of opioid-relatedoverdose deaths [ 15], reflected in several recent economic analy-ses on physician prescribing patterns and opioid abuse [ 18,22],statewide surveys and reports on prescribing practices and patientoutcomes [ 14,27,34], and studies of physician prescribing patternsand outcomes [ 19,36]. Previous studies of opioid dose tapering ei-ther used smaller, less nationally representative cohorts or relied onsupervised analytic methods, where an outcome is always defined,to identify patient characteristics that are associated with adverseoutcomes.4 CONCLUSIONOur objective was knowledge discovery, which was to identify hid-den, unsuspected patterns in claims data for patients with chronicpain. Since our analysis was performed using a large dataset that isrepresentative of the population of the United States these resultsare generalisable. The insights from this work will be used to extendthis work and guide predictive analysis. Our study also highlightsthe need for more detailed investigations to identify what patientfactors should be considered while suggesting a dose tapering regi-men. Dose tapering to discontinuation may plausibly increase therisk of subsequent opioid overdose if these opioid-dependent pa-tients seek alternative opioids from illicit sources or mix opioidswith other sedating drugs such as benzodiazepines, thereby negat-ing the purpose of dose tapering. 
We find these results, obtainedusing a data driven approach, to be compelling enough to warrantfurther investigations into dose tapering patterns to inform futurenational prescribing policies and clinical practice.epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Monika Ray, Joshua J. Fenton, and Patrick S. RomanoACKNOWLEDGMENTSThe authors extend their sincere gratitude to Guibo Xing, ElizabethMagnan, Alicia Agnoli and Daniel Tancredi for data sharing aswell as members of the OptumLabs OLDW team for their valuableguidance.REFERENCES[1]Alicia Agnoli, Guibo Xing, Daniel J. Tancredi, Elizabeth Magnan, Anthony Jerant,and Joshua J. Fenton. 2021. Association of Dose Tapering With Overdose orMental Health Crisis Among Patients Prescribed Long-term Opioids. JAMA 326,5 (08 2021), 411–419.[2]G. Chen Arias-Castro and G. Lerman. 2011. Spectral clustering based on locallinear approximations. Electronic Journal of Statistics 5 (2011), 1537–1587.[3]Francis R. Bach and Michael I. Jordan. 2006. Learning Spectral Clustering, WithApplication To Speech Separation. Journal of Machine Learning Research 7, 71(2006), 1963–2001.[4]Mikhail Belkin and Partha Niyogi. 2003. Laplacian Eigenmaps for DimensionalityReduction and Data Representation. Neural Computation 15, 6 (2003), 1373–1396.[5]Ingrid A. Binswanger, Susan M. Shetterly, Stanley Xu, Komal J. Narwaney, David L.McClure, Deborah J. Rinehart, Anh P. Nguyen, and Jason M. Glanz. 2022. OpioidDose Trajectories and Associations With Mortality, Opioid Use Disorder, Contin-ued Opioid Therapy, and Health Plan Disenrollment. JAMA Network Open 5, 10(2022), e2234671–e2234671.[6]Xiao Cai, Feiping Nie, Heng Huang, and Farhad Kamangar. 2011. Heterogeneousimage feature integration via multi-modal spectral clustering. In CVPR 2011 .1977–1984.[7]S Chaiken and D.J Kleitman. 1978. Matrix Tree Theorems. Journal of Combinato-rial Theory, Series A 24, 3 (1978), 377–381.[8]Fan R. K. Chung. 1997. Spectral Graph Theory (second ed.). CBMS RegionalConference Series in Mathematics, American Mathematical Society.[9]Nello Cristianini, John Shawe-Taylor, and Jaz Kandola. 2001. Spectral KernelMethods for Clustering. In Advances in Neural Information Processing Systems ,T. Dietterich, S. Becker, and Z. Ghahramani (Eds.), Vol. 14.[10] D. Dowell, T.M. Haegerich, and R. Chou. 2016. CDC Guideline for PrescribingOpioids for Chronic Pain–United States. JAMA 315, 15 (2016), 1624–1645.[11] Scott Doyle, Shannon Agner, Anant Madabhushi, Michael Feldman, and JohnTomaszewski. 2008. Automated grading of breast cancer histopathology usingspectral clustering with textural and architectural image features. In 2008 5th IEEEInternational Symposium on Biomedical Imaging: From Nano to Macro . 496–499.[12] A. Elixhauser, C. Steiner, D.R. Harris, and R.M. Coffey. 1998. Comorbidity mea-sures for use with administrative data. Medical care 36, 1 (1998), 8–27.[13] Joshua J. Fenton, Alicia L. Agnoli, Guibo Xing, Lillian Hang, Aylin E. Altan,Daniel J. Tancredi, Anthony Jerant, and Elizabeth Magnan. 2019. Trends andRapidity of Dose Tapering Among Patients Prescribed Long-term Opioid Therapy,2008-2017. JAMA Network Open 2, 11 (2019), e1916271–e1916271.[14] Patrick Fink, Richard Deyo, Sara Hallvik, and Christi Hildebran. 2018. OpioidPrescribing Patterns and Patient Outcomes by Prescriber Type in the OregonPrescription Drug Monitoring Program. Pain medicine 19, 12 (2018), 2481–2486.[15] Centers for Disease Control and CDC Prevention. 2022. Drug Overdose Deathsin the United States, 2001–2021, NCHS Data Brief No. 
|
N0qlvDjnEv | Risk-Based Ring Vaccination: A Strategy for PandemicControl and Vaccine AllocationDinh Song An NguyenThe Ohio State UniversityColumbus, Ohio, USAnguyen.2687@osu.eduMarie CharpignonMITCambridge, Massachusetts, USAmcharpig@mit.eduKathryn L SchaberBoston’s Children Hospital, HarvardMedical SchoolBoston, Massachusetts, USAkathryn.schaber@childrens.harvard.eduMaimuna Shahnaz Majumder∗Boston’s Children Hospital, HarvardMedical SchoolBoston, Massachusetts, USAmaimuna.majumder@childrens.harvard.eduAndrew Perrault∗The Ohio State UniversityColumbus, Ohio, USAperrault.17@osu.eduAbstractThroughout an infectious disease crisis, resources that canbe used to slow and prevent spread are often scarce or expen-sive. Designing control policies to optimally allocate theseresources to maximize objectives is challenging. Here, westudy the case of ring vaccination, a strategy that is used tocontrol the spread of infection by vaccinating the contacts ofidentified infected individuals and their contacts of contacts.Using agent-based modeling to simulate an Ebola outbreak,we introduce a risk-based ring vaccination strategy in whichindividuals in a ring are prioritized based on their relativeinfection risks. Assuming the risk of transmission by con-tact type is known and a fixed supply of vaccine doses isavailable on each day, we compared this strategy to ring vac-cination without prioritization and randomized vaccination.We find that risk-based ring vaccination offers a substantialadvantage over standard ring vaccination when the numberof doses are limited, including reducing the daily infectedcount and death count, and shifting the pandemic peak by aconsiderable amount of time. We believe that control policiesbased on estimated risk can often offer significant benefitswithout increasing the burden of administering the policyby an unacceptable amount.Keywords: agent-based modeling, ring vaccination, Ebola,public health∗These authors co-supervised this research.Permission to make digital or hard copies of part or all of this work forpersonal or classroom use is granted without fee provided that copies arenot made or distributed for profit or commercial advantage and that copiesbear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contactthe owner/author(s).epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA©2023 Copyright held by the owner/author(s).ACM Reference Format:Dinh Song An Nguyen, Marie Charpignon, Kathryn L Schaber,Maimuna Shahnaz Majumder, and Andrew Perrault. 2023. Risk-Based Ring Vaccination: A Strategy for Pandemic Control and Vac-cine Allocation. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDDInternational Workshop on Epidemiology meets Data Mining andKnowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM,New York, NY, USA, 6 pages.1 IntroductionDesigning control policies for infectious disease outbreakscan be challenging for several reasons, including scientificuncertainty surrounding newly emerging diseases, manyobjectives that can be in tension with each other, and limitedaccess to labor and other critical resources. In this paper,we consider the case of ring vaccination , a vaccination deliv-ery strategy that is employed when the supply of vaccinesand the labor required to administer them is limited. Ringvaccination vaccinates individuals within a ring, contactsand contacts of contacts of an infected case. 
Given a vaccinewith appropriate properties, especially the ability to safelyinoculate an individual who has been recently exposed, ringvaccination can be highly effective. It has been used as a keytool in several Ebola and smallpox outbreaks [2, 6, 7].Ring vaccination functions by targeting individuals whowould be at a higher level of risk of developing the infec-tion, relative to the general population. For example, in the(early/late) stages of Ebola outbreak of Gulu district, Ugandain 2000, the attack rate across the population was roughly0.126% [12]. However, the secondary attack rate (SAR), de-fined as the probability that an infection occurs among sus-ceptible people within a specific set of contacts, can betterreflect the relation between social interactions and transmis-sion risk [ 10]. Yang et al . [15] estimate its value at 2.5%—thus,a vaccine administered immediately after exposure wouldbe about 20 times more effective compared to a randomlydelivered vaccination.epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Dinh Song An Nguyen, Marie Charpignon, Kathryn Schaber, Maimuna Majumder, Andrew PerraultHowever, not all individuals in a ring have the same in-fection risk. For instance, contacts of contacts are less likely,on average, to become infected because transmission mustoccur twice. Many observable and unobservable factors maycontribute to this risk, including the type and duration ofcontact between individuals, biological differences that makesome people more effective transmitters, multiple exposurepaths, and behavioral differences that are caused by the pres-ence or absence of public health monitoring (i.e., immediateself isolation at symptom onset).Like other control policies that target individuals withelevated risk such as contact tracing, ring vaccination facesa fundamental challenge that the number of such individu-als is roughly linear in the number of infected individuals,which varies by orders of magnitude throughout a crisis,but the amount of supplies and labor available per day isroughly fixed. We argue that control policies can leverageestimated risk to prioritize vaccine dose allocation, yieldingbetter performance when supplies are scarce. To that end, wepropose a risk-based ring vaccination strategy that leveragesthe differing risks associated with different contact types,information that can be easily elicited as part of contacttracing.We evaluate the risk-based ring strategy in an agent-basedmodel (ABM) and consider Ebola as the case study becauseof its unique transmission intensity bases on type of contact.We show that, when doses are highly restricted, risk-basedring vaccination yields significant benefits over standardring vaccination and randomized vaccination by not onlyreducing overall transmissions and deaths but also shiftingthe pandemic peak. We find that the extra risk associatedwith ring membership is quickly diluted as there are manymore contacts of contacts than contacts, and most contactshave little transmission chance associated with them.2 Agent-based modelWe develop an ABM for Ebola Virus Disease (EVD) withN=14652 agents (Table 1). We model two agent characteris-tics that influence spread and mortality: age and householdmembership. We replicate the household structure and agedistributions from Dodd et al . [5], who collected data in Zam-bia and South Africa in 2005-2006, and again in Zambia in2011. 
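A minimal sketch of this kind of population initialization is below; the household-size probabilities and age-group shares are illustrative placeholders, not the empirical distributions from Dodd et al. [5], and the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_population(n_agents=14652):
    """Assign agents to households and coarse age groups.

    The household-size probabilities and age-group shares below are
    illustrative placeholders, not the survey distributions used in the paper.
    """
    agents, household_id = [], 0
    size_probs = [0.10, 0.18, 0.20, 0.20, 0.14, 0.10, 0.05, 0.03]  # household sizes 1..8
    while len(agents) < n_agents:
        size = int(rng.choice(np.arange(1, 9), p=size_probs))
        size = min(size, n_agents - len(agents))
        for _ in range(size):
            age = rng.choice(["child", "adolescent", "adult"], p=[0.35, 0.20, 0.45])
            agents.append({"age_group": age, "household": household_id})
        household_id += 1
    return agents

population = build_population()  # 14,652 agents grouped into households
```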
Each agent is in one of the six following discrete stateson each day: Susceptible (S), Incubating(IC), Infectious(I),Vaccinated but not yet immune (V), Deceased(D), and Re-moved (immune or recovered) (R). StateScomprises agentswho have not yet received a vaccine or become immune.StateIcomprises agents who are capable of transmittingEVD to their contacts who are currently in S. At the endof their infectious period, agents in state Itransition intostateDor stateR, depending on Pr(D|age). We estimate theage-specific probability of death using previously reportedcase fatality rates (CFR) of EVD for different age groups [ 14].Contacts are sampled daily. We sample household andnon-household contacts separately. We assume that contactsbetween each pair of individuals within a household occursevery day. Non-household contacts are sampled from thepopulation according to the inter-household contact matrixfrom Ozella et al . [13] , collected in a village in rural Malawi,accounting for the age of the person. We assume that thenumber of contacts follows an independent Poisson distri-bution for each age-age contact pair.Each contact has an associated exposure type. For house-hold contacts, we use and sample the exposure types andtheir distributions observed by Bower et al . [1], which in-clude handling fluids, direct and indirect wet and dry con-tacts, and minimal to no contact. Direct contact refers tosituation in which individuals come into direct contact, suchas touching and caring for a patient diagnosed with EVD,whereas an indirect contact refers to situations such as wash-ing clothes or sharing the same bed with an EVD positivepatient. In addition, wet contact refers to contact with anEVD patient that is symptomatic (e.g. vomiting, bleeding,etc.) while dry contact refers to contact with patients with-out any symptoms. Each type of contact associates with adifferent risk level. For example, a direct contact with fluidsis associated with a higher risk of transmission than a dry,physical contact. We let Wx,y,t represent the risk ratio ofthe contact between agents xandy. For household contacts,it is the age-adjusted risk ratio from Bower et al . [1]. Fornon-household contacts, we assign the same type to each,with a risk ratio we set to match with the non-householdSAR reported in Dixon et al . [4] (see Inferred parameters).Wx,y,t=0if no contact occurred.We define the probability of transmission from agent xtoagentyon daytasPr(base)·Wx,y,twherePr(base)is an inferred baseline probability of infec-tion. The process for inferring this parameter is described inthe next section.Vaccination. The 2017 Guinea ring vaccination trial demon-strates that the vaccine we considered in our simulations(rVSV-ZEBOV) is safe to administer to individuals who areincubating, but do not yet show symptoms [ 6]. Moreover,rVSV-ZEBOV has 100% effectiveness if administered afterexposure. Therefore, we assume that agents in state ICandSare eligible for vaccination. After vaccination, they transi-tion to state V, and nine days later, they transition to stateR, where agents are considered immune.Inferred parameters. We need to infer the parametersPr(base)andRR(non-household), the non-household riskratio, from data. Pr(base)can be interpreted as the probabil-ity of transmission for a household contact of the minimalcontact type. 
We set this value in order to match the sec-ondary attack rate (SAR) of the ABM to the SAR that wasRisk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USATable 1. Parameters for the ABM.Parameters Values ReferencesEbola dynamicsIncubation period Lognormal: μ=2.446days,σ=0.284 Legrand et al. [9]Infectious period Lognormal: μ=2.2915 days,σ=0.1332 Legrand et al. [9]Case fatality rate Ages < 15: 77.8% Qin et al. [14]Ages 15 - 59: 85.87%Ages > 59: 95.7%Time from vaccination to immunity 9days Kucharski et al. [8]Household secondary attack rate 12.3% Dixon et al. [4]Non-household secondary attack rate 4.8% Dixon et al. [4]Non-household contact matrix Adults-Children: Poisson, λ=1.2 Ozella et al. [13]Adults-Adolescents: Poisson, λ=1.5Adults-Adults: Poisson, λ=5.3Adolescents-Children: Poisson, λ=2.0Adolescents-Adolescents: Poisson, λ=3.6Children-Children: Poisson, λ=0.2Inferred model parametersBase probability of transmission 0.01962 Inferred from Bower et al. [1]Contact type distribution (household) Handled fluids: 16.3%,RR: 9.7 Bower et al. [1]and risk ratios (RR) Direct wet contact: 40.3%,RR: 8.3Direct dry contact: 17%,RR: 5.6Indirect wet contact: 2.6%,RR: 4.9Indirect dry contact: 10%,RR: 1.3Minimal contact: 13.8%,RR: 1Risk ratio for non-household 2.45 Inferred from Equation 2previously reported for Ebola. Specifically, we solve the fol-lowing equation for Pr(base)SARhh=Pr(base)∑︁iPr(i|household contact)RR(i),(1)wherePr(i)is the probability of a contact having type i,RR(i)is the risk ratio associated with contact type i. Thisresults inPr(base)=0.01962 . WithPr(base)identified, wecan solve for RR(non-household):SAR non-hh=Pr(base)RR(non-household), (2)resulting in RR(non-household)=2.45, an intensity be-tween indirect wet and indirect dry contact.3 Risk-based ring vaccinationIn the risk-based ring vaccination strategy, we prioritizethe limited vaccine doses to agents within a ring with thehighest estimated risks. The estimation strategy for risksneeds to be simple and only use information that is easy toobserve. Specifically, we propose estimating risks based oncontact type and household membership and doing so onlywithin a ring—thus, there are at most two contact eventsthat contribute to any estimated risk. We assume that risksare estimated separately for each ring and that there is nocoordination between rings. Risks are updated for each indi-vidual at most once—we update them for contacts of contactsif the contact becomes infected.We define a ring as the contacts and contacts of contacts ofthe infected agent. Let xdenote the seed case for the ring, ydenote a contact of x, andzdenote a contact of y. We definethe risk foryasR(y)=Pr(base)·Wx,y, (3)whereWx,yis the risk ratio associated with the highest inten-sity contact between xandyafterxdeveloped symptoms,i.e.,maxtWx,y,t withtinx’s infectious period. 
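The parameter inference of Equations (1)–(2) above reduces to two lines of arithmetic once the contact-type shares and risk ratios from Table 1 are plugged in; a minimal sketch (variable names are ours, values transcribed from Table 1):

```python
# Contact-type distribution and risk ratios for household contacts (Table 1).
household_types = {
    "handled_fluids": (0.163, 9.7),
    "direct_wet":     (0.403, 8.3),
    "direct_dry":     (0.170, 5.6),
    "indirect_wet":   (0.026, 4.9),
    "indirect_dry":   (0.100, 1.3),
    "minimal":        (0.138, 1.0),
}
SAR_HH, SAR_NON_HH = 0.123, 0.048  # household / non-household SAR (Dixon et al.)

# Equation (1): SAR_hh = Pr(base) * sum_i Pr(i) * RR(i)  ->  solve for Pr(base).
expected_rr = sum(p * rr for p, rr in household_types.values())
pr_base = SAR_HH / expected_rr            # ~0.0196, matching Table 1 up to rounding

# Equation (2): SAR_non-hh = Pr(base) * RR(non-household).
rr_non_household = SAR_NON_HH / pr_base   # ~2.45

print(round(pr_base, 5), round(rr_non_household, 2))
```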
For z, wedefine the risk asR(z|yis not infected)=Pr(base)·Wx,y·Pr(base)·Wy,z(4)R(z|yis infected)=Pr(base)·Wy,z, (5)using equation 4 if yis not known to be infected and updatingto use equation 5 if ybecomes infected.Individuals in the ring are then vaccinated in order of theirrisk ranking, i.e., each day the Uunvaccinated individualswho do not have symptoms with highest risk are vaccinated.If there are still some vaccines left after everyone in the ringhas been vaccinated, which can happen when individuals areepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Dinh Song An Nguyen, Marie Charpignon, Kathryn Schaber, Maimuna Majumder, Andrew Perraultunreachable during the vaccination process or in the laterstage of the outbreak, then the remaining vaccines will berandomly distributed to the susceptible agents that are notin the identified clusters.4 Preliminary resultsWe compare the risk-based ring vaccination approach tothree baselines: random vaccination, full ring vaccination,and no prioritization ring vaccination. All baselines vacci-nate only individuals that have no symptoms and are un-vaccinated (i.e., individuals in states SandIC). In randomvaccination ,Uindividuals are vaccinated at random eachday. In no prioritization ring ,Uindividuals that are in a ringare vaccinated and any leftover vaccines are randomly dis-tributed. In full ring ,allindividuals in a ring are vaccinated,relaxing the constraint of Uvaccines per day. In all cases,each individual has a 30% to be unreachable (as in [ 8]). Thedose that would go to that individual instead goes to thenext eligible agent (i.e., the next highest risk in risk-basedor another agent in the ring in no prioritization ring). Wesimulate the ABM with 10 seed cases selected uniformly atrandom from the population.By ranking individuals who are at most at risk, risk-basedring vaccination substantially reduces the infected numberof infections and deaths (Fig. 1 and Tab. 2). However, theimpact of risk-based prioritization varies significantly acrossdose limits. In all dose limits, we see a statistically significantdifference between risk-based prioritization and standardring vaccination. This difference is most salient for moderatedose limits—for 100 daily doses, risk-based reduces deathsby roughly 2times that of randomized vaccination and 1.8times for no prioritization ring. With 200 doses available,both risk-based and no-prioritization ring differ substantiallyfrom randomized vaccination, whereas in 50 and 100 doses,no prioritization ring and random achieve relatively similarperformance. In the case of 50 daily doses, risk-based ring hasa smaller impact on the number of infections and deaths ( <9%relative to random). However, we see substantial shiftingof the infection curve in this setting, delaying the peak byabout 20 days.The full ring strategy (without dose limit) results in fewdeaths as the vaccine for EVD is highly effective even whenadministered after exposure, even when 30% of contacts areunreachable at the time of vaccination. However, the costof this performance is the need for a surge of vaccination inthe first month of 321±179doses per day. This approachachieves control early resulting in an average of 111±152daily doses across the whole period.5 Discussion and Future WorkCreating control policies during an outbreak is challengingdue to resource constraints such as limited healthcare per-sonnel and medical supplies. 
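The allocation step described in Section 3 (Equations (3)–(5) followed by top-U selection) amounts to only a few lines; a minimal sketch, where `pr_base` and the per-pair risk ratios are the quantities defined earlier, the helper names are ours, and combining multiple exposure paths by taking the maximum is our assumption (the paper does not specify this case):

```python
def ring_risks(pr_base, W_seed_contact, W_contact_cc, infected_contacts):
    """Risk scores for one ring.

    W_seed_contact[y]  : max risk ratio of contact y with the seed case
    W_contact_cc[y][z] : max risk ratio between contact y and contact-of-contact z
    infected_contacts  : set of contacts y already known to be infected
    """
    risks = {}
    for y, w_xy in W_seed_contact.items():
        risks[y] = pr_base * w_xy                        # Eq. (3)
        for z, w_yz in W_contact_cc.get(y, {}).items():
            if y in infected_contacts:
                r = pr_base * w_yz                       # Eq. (5)
            else:
                r = pr_base * w_xy * pr_base * w_yz      # Eq. (4)
            risks[z] = max(r, risks.get(z, 0.0))         # assumption: keep highest-risk path
    return risks

def allocate_doses(risks, eligible, doses_per_day):
    """Vaccinate the top-U eligible (unvaccinated, symptom-free) ring members.

    Leftover doses going at random to susceptibles outside the ring are omitted here.
    """
    ranked = sorted((i for i in risks if i in eligible), key=risks.get, reverse=True)
    return ranked[:doses_per_day]
```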
Using an ABM, we study theimpact of ring vaccination strategies under a daily dose limit,and consider EVD as the case study, specifically. We find that,even with vaccination-infection combination that is highlysuited to ring vaccination, ring vaccination has limited im-pact on new infections relative to random vaccination untilthe number of doses available is sufficiently high. Moreover,the implementation of risk-based ring vaccination we con-sider only requires slightly more information (contact types),but has an impact even at much lower numbers of delivereddoses.It is expected to observe phase transitions in vaccinationprograms due to the exponential dynamics involved in in-fections: when the number of daily vaccine doses passes athreshold, infections will decay exponentially, and the out-break can be contained. However, this intuition does notapply directly to ring vaccination. Despite the ability of ringvaccination to identify individuals who have a higher riskof infection than the broader population, the impact on newinfections is relatively modest. A small modification of stan-dard ring vaccination—involving risk-based prioritizationamong documented contacts—induces dramatically differentbehavior. Specifically, for a small number of doses (Fig. 1), arisk-based approach yields a shift in the time at which thepeak in new infections is reached, thus postponing a surgemore efficiently than standard ring vaccination and random-ized vaccination. Moreover, above a certain threshold, lyingbetween 50 and 100 daily doses in our model, benefits of therisk-based approach compound and the shift in the timingof the peak is coupled with a significant reduction in themaximum number of new infections. These two distinct ef-fects and their potential coupling are not well understoodand merit further study.A key question is whether more sophisticated vaccinationstrategies such as ring vaccination are worth the additionaloverhead cost of reliably identifying and contact tracingcases. The answer to this question is multi-faceted and willdepend on the interplay among outbreak stage, vaccine avail-ability, and the combination of vaccination and infectionproperties. More effort is needed to understand these inter-actions: during an infectious disease emergency, resourcesare scarce and need to be allocated towards the geographicalareas or subpopulations that result in the highest impacts,i.e., the largest reduction in the maximum number of newinfections and the greatest delay in the timing of the peak.Our study has several limitations. Our current ABM doesnot incorporate realistic superspreading dynamics. Yet manyinfectious diseases demonstrate a high degree of transmis-sion heterogeneity, i.e., relatively few seed cases cause manysecondary infections [ 11]. While not well captured in ourmodel, this aspect has substantial consequences for ring vac-cination because the variance of the strategy’s outcome isincreased, i.e., a single missed secondary case can have aRisk-Based Ring Vaccination: A Strategy for Pandemic Control and Vaccine Allocation epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA(a) 50 doses (b) 100 doses (c) 200 dosesFigure 1. The daily mean count ( ±standard deviation) of infected under different vaccination strategies. We simulate outbreakswith 10 seed cases for each policy given different numbers of vaccine availability. The shaded region indicates the standarddeviation for each vaccination strategy.Table 2. 
Mean (95% CI) count of deceased for each strategy and dose limit.Strategy 50 doses 100 doses 200 dosesRisk-based ring 8465.77 3268.67 175.77(8370.63–8560.91) (1399.83–5137.50) (144.14–207.4)No prioritization ring 9184 6091.50 784.7(9101.12–9266.88) (5915.62–6267.38) (663.08–906.32)Random 9272.33 6488.57 2044.4(9164.44.35–9380.22) (6425.06–6552.09) (1627.39–2461.41)Full ring 27.33(no dose limit) (10.79–43.87)No vaccination 12189.80(12156.43–12223.17)much larger impact on the timing of the peak in new in-fections and its magnitude than in the absence of transmis-sion heterogeneity. We suspect that accounting for super-spreading events would further reduce the benefits of ringvaccination. However, in some circumstances, pronouncedsuperspreading can make risk-based targeting more effectiveas observations from a given ring can be used to infer thetransmission potential of the seed case.Furthermore, it is already a hard task to gather contactsand contacts of contacts to form a ring for vaccination. Ob-taining information regarding exposure types between in-fected individuals and their contacts is even more time andresource intensive. Although risk-based ring vaccination ismore effective in our results, it is important to consider ad-ditional factors like timing and human resources in order tobetter evaluate the efficacy of our method.By design, ring vaccination targets individuals with ahigher number of contacts or more centrally located in anetwork. These individuals tend to get infected earlier thantheir counterparts with an average number of contacts andcentrality [ 3].Risk-based ring vaccination, by prioritizingindividuals with contacts at higher risk, will additionally tar-get individuals in larger households. This additional featureoperates independently from the “encirclement” aspect ofstandard ring vaccination; more work is needed to quantifytheir respective contributions (e.g., by comparing risk-basedvaccination to strategies that prioritize individuals based onhousehold size).AcknowledgmentsKS was supported in part by grant SES2200228 from theNational Science Foundation. MSM was supported in part bygrant R35GM146974 from the National Institute of GeneralMedical Sciences, National Institutes of Health. The fundershad no role in study design, data collection and analysis,decision to publish, or preparation of the manuscript.References[1]Hilary Bower, Sembia Johnson, Mohamed S Bangura, Alie JoshuaKamara, Osman Kamara, Saidu H Mansaray, Daniel Sesay, CeciliaTuray, Francesco Checchi, and Judith R Glynn. 2016. Exposure-specificand age-specific attack rates for Ebola virus disease in Ebola-affectedhouseholds, Sierra Leone. Emerging infectious diseases 22, 8 (2016),1403.[2]Ebola ça Suffit Ring Vaccination Trial Consortium. 2015. The ringvaccination trial: a novel cluster randomised controlled trial designto evaluate vaccine efficacy and effectiveness during outbreaks, withspecial reference to Ebola. BMJ: British Medical Journal 351 (2015),h3740.epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Dinh Song An Nguyen, Marie Charpignon, Kathryn Schaber, Maimuna Majumder, Andrew Perrault[3]Nicholas A Christakis and James H Fowler. 2010. Social networksensors for early detection of contagious outbreaks. PloS one 5, 9(2010), e12948.[4]Meredith G Dixon, Melanie M Taylor, Jacob Dee, Avi Hakim, PaulCantey, Travis Lim, Hawa Bah, Sékou Mohamed Camara, Clement BNdongmo, Mory Togba, et al .2015. 
Contact tracing activities duringthe Ebola virus disease epidemic in Kindia and Faranah, Guinea, 2014.Emerging infectious diseases 21, 11 (2015), 2022.[5]Peter J Dodd, Clare Looker, Ian D Plumb, Virginia Bond, Ab Schaap,Kwame Shanaube, Monde Muyoyeta, Emilia Vynnycky, Peter Godfrey-Faussett, Elizabeth L Corbett, et al .2016. Age-and sex-specific socialcontact patterns and incidence of Mycobacterium tuberculosis infec-tion. American journal of epidemiology 183, 2 (2016), 156–166.[6]Ana Maria Henao-Restrepo, Anton Camacho, Ira M Longini, Conall HWatson, W John Edmunds, Matthias Egger, Miles W Carroll, Natalie EDean, Ibrahima Diatta, Moussa Doumbia, et al .2017. Efficacy andeffectiveness of an rVSV-vectored vaccine in preventing Ebola virusdisease: final results from the Guinea ring vaccination, open-label,cluster-randomised trial (Ebola Ça Suffit!). The Lancet 389, 10068(2017), 505–518.[7]Mirjam Kretzschmar, Susan Van den Hof, Jacco Wallinga, and JanVan Wijngaarden. 2004. Ring vaccination and smallpox control. Emerg-ing infectious diseases 10, 5 (2004), 832.[8]Adam J Kucharski, Rosalind M Eggo, Conall H Watson, Anton Cama-cho, Sebastian Funk, and W John Edmunds. 2016. Effectiveness ofring vaccination as control strategy for Ebola virus disease. Emerginginfectious diseases 22, 1 (2016), 105.[9]Judith Legrand, Rebecca Freeman Grais, Pierre-Yves Boelle, Alain-Jacques Valleron, and Antoine Flahault. 2007. Understanding thedynamics of Ebola epidemics. Epidemiology & Infection 135, 4 (2007),610–621.[10] Yang Liu, Rosalind M Eggo, and Adam J Kucharski. 2020. Secondaryattack rate and superspreading events for SARS-CoV-2. The Lancet395, 10227 (2020), e47.[11] James O Lloyd-Smith, Sebastian J Schreiber, P Ekkehard Kopp, andWayne M Getz. 2005. Superspreading and the effect of individualvariation on disease emergence. Nature 438, 7066 (2005), 355–359.[12] SI Okware, FG Omaswa, S Zaramba, A Opio, JJ Lutwama, J Kamugisha,EB Rwaguma, P Kagwa, and M Lamunu. 2002. An outbreak of Ebolain Uganda. Tropical Medicine & International Health 7, 12 (2002), 1068–1075.[13] Laura Ozella, Daniela Paolotti, Guilherme Lichand, Jorge P Rodríguez,Simon Haenni, John Phuka, Onicio B Leal-Neto, and Ciro Cattuto.2021. Using wearable proximity sensors to characterize social contactpatterns in a village of rural Malawi. EPJ Data Science 10, 1 (2021), 46.[14] Enqiang Qin, Jingfeng Bi, Min Zhao, Ye Wang, Tongsheng Guo, TaoYan, Zhiwei Li, Juan Sun, Jieli Zhang, Suhong Chen, et al .2015. Clinicalfeatures of patients with Ebola virus disease in Sierra Leone. Clinicalinfectious diseases 61, 4 (2015), 491–495.[15] Yingrui Yang, Ashley McKhann, Sixing Chen, Guy Harling, and Jukka-Pekka Onnela. 2019. Efficient vaccination strategies for epidemiccontrol using network information. Epidemics 27 (2019), 115–122. |
Ql4CuaB3-D | Using Reinforcement Learning for Multi-Objective Cluster-LevelNPI OptimizationXueqiao Pengpeng.969@osu.eduThe Ohio State UniversityColumbus, Ohio, USAJiaqi Xuxu.4015@osu.eduThe Ohio State UniversityColumbus, Ohio, USAXi Chenchen.10183@osu.eduThe Ohio State UniversityColumbus, Ohio, USADinh Song An Nguyennguyen.2687@osu.eduThe Ohio State UniversityColumbus, Ohio, USAAndrew Perraultperrault.17@osu.eduThe Ohio State UniversityColumbus, Ohio, USAABSTRACTNon-pharmaceutical interventions (NPIs) play a critical role in thedefense against emerging pathogens. Among these interventions,familiar measures such as travel bans, event cancellations, socialdistancing, curfews, and lockdowns have become integral compo-nents of our response strategy. Contact tracing is especially widelyadopted. However, the optimization of contact tracing involvesnavigating various trade-offs, including the simultaneous goals ofminimizing virus transmission and reducing costs. Reinforcementlearning (RL) techniques provides a promising avenue to model in-tricate decision-making processes and optimize policies to achievespecific objectives, but even modern deep RL techniques strug-gle in the high dimensional partially observable problem settingpresented by contact tracing. We propose a novel RL approach tooptimize a multi-objective infectious disease control policy thatcombines supervised learning with RL, allowing us to capitalize onthe strengths of both techniques. Through extensive experimenta-tion and evaluation, we show that our optimized policy surpassesthe performance of five benchmark policies.KEYWORDSreinforcement learning, machine learning, contact tracing, publichealthACM Reference Format:Xueqiao Peng, Jiaqi Xu, Xi Chen, Dinh Song An Nguyen, and AndrewPerrault. 2023. Using Reinforcement Learning for Multi-Objective Cluster-Level NPI Optimization. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDDInternational Workshop on Epidemiology meets Data Mining and KnowledgeDiscovery, August 7, 2023, Long Beach, CA, USA. , 7 pages.1 INTRODUCTIONThe COVID-19 pandemic has highlighted the crucial role of non-pharmaceutical interventions (NPIs) in effectively managing thespread of infectious diseases. The implementation of NPIs requiresPermission to make digital or hard copies of part or all of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for third-party components of this work must be honored.For all other uses, contact the owner/author(s).epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA©2023 Copyright held by the owner/author(s).careful consideration of multiple objectives, including the preven-tion of viral transmission and the reduction of costs associatedwith quarantine measures. Contact tracing has emerged as a widelyadopted policy within the realm of NPIs and has been extensivelystudied in the context of COVID-19 [7, 8, 11, 21].Nevertheless, optimizing NPIs remains a challenging open prob-lem in many settings for several reasons. First, the objective is in-herently multi-objective—intensified control efforts lead to highercosts. In addition, sensing actions, such as testing, may be includedin all but the earliest stages of an infectious disease crisis. Thesehave their own costs and constraints associated with them. 
Sec-ondly, inferring the probability that an individual is difficult forinfections that do substantial transmission asymptomatically, suchas SARS-CoV-2. This inference problem is perhaps surprisingly highdimensional, as we show it is dependent on the symptom statusand test results of all individuals in the same cluster due to thetransmission heterogeneity.Cluster Symptom StatusTest InformationCNNTestInformationInfectionProbabilityIndividual Symptom StatusQuarantineTestIndividual StateActionsSimulatorPPO LearningRewardFigure 1: Illustration of our approach. We combine a infec-tion probability decoder that uses supervised learning witha reinforcement learning-based policy.In this work, our goal is to develop a generic approach for cluster-level optimization of NPIs. To tackle this challenge, we proposea novel approach that integrates convolutional neural networks(CNN) and reinforcement learning (RL) model[ 5,20] (Fig. 1). TheCNN is used to solve the high dimensional infection inferenceproblem and uses a novel representation of the symptom and teststate of the entire cluster as input, allowing a single CNN to betrained for all cluster sizes. The RL agent takes the CNN output andother features as its state and selects an action for each individual(including quarantine and testing) and aims to maximize a multi-objective reward function. This reward function includes a penaltyepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Xueqiao Peng, Jiaqi Xu, Xi Chen, Dinh Song An Nguyen, and Andrew Perraultfor days where an individual is infectious but not isolated, a penaltyfor days where they are quarantined but not infectious, as well as acost for any control action that is taken (e.g., test cost). As a casestudy, we have developed a branching process-based SARS-CoV-2virus simulator, where we evaluate the effectiveness of our method.In this work, we focus on optimization only—in the longer term,we aim to use the results of optimization to automatically discoversimple, implementable policies.This paper makes the following contributions:•We propose a novel RL approach for finding optimal con-tact tracing policies. Our approach combines a supervisedlearning model with an RL model, leveraging the strengthsof both techniques to optimize the desired objectives. Theresulting agent can be trained and deployed simultaneouslyacross all cluster sizes.•We show the existence of a theoretically simple, yet optimal,threshold type policy for contact tracing in the setting whereno sensing actions are available. Running this policy requiressupervised learning only.•We develop a simple branching process-based model forSARS-CoV-2 and compare our policies with baselines. Weshow that we achieve better rewards across a range of ob-jective parameters.Related work. We identify two main thrusts of work that optimizecontact tracing and NPIs: network and branching process. Networkmodels represent connections between individuals as edges in apossibly dynamic contact graph [ 4,9,12,15,16]. These approachescan leverage network structure in their decisions but make thestrong assumption that the entire contact network is known. Theclosest existing approach to ours is RLGN [ 12], which formulatesthe problem as a sequential decision-making task within a tempo-ral graph process. These approaches often consider a fixed budgetof interventions rather than a multi-objective reward function. Incontrast, branching processes are used, resulting in a cluster-based,tree-structured view of contagion [ 10,13,17]. 
These approacheshave the advantage of aligning more closely with the informationavailable to public health decision-makers in many practical set-tings (but allow for less expressive policies). All of these modelsare agent-based in the sense that they model individuals ratherthan subpopulations—because contact tracing decisions depend onthe specific time that certain events happen for individuals (e.g.,exposure, symptoms), the additional detail that agent-based modelsprovide is valuable for modeling and optimization.2 BRANCHING PROCESS ENVIRONMENTWe take a branching process-based view of an infectious diseasecrisis (Fig. 2). We track two generations of potential individuals:the seed case and their contacts. We assume that interventionsbegin after a reporting and tracing delay. At that point, day tstart(tstart=3in Fig. 2), we observe the symptom history for each agentup to daytand must decide which action to take for each agent(e.g., quarantine, test). On day t, we observe the symptom state ofeach agent plus the results of any sensing actions (defined below)we have taken up to day tand must decide what action to take foreach agent on day t. The simulation proceeds for a fixed period oftime untilT.Close ContactsTime(days)Seed CaseInfectiousWithoutSymptomsWithSymptomsExposedQuarantinedIsolationFigure 2: An agent-based branching process model. The dia-gram depicts standard contact tracing for an example seedcase with six contacts.In Fig. 2, we present an application of contact tracing policy in thebranching process framework. The seed case remains infectious fortwo days without exhibiting symptoms, followed by one day withsymptoms, before entering isolation. In this example, all six contactswere exposed on the same day. Contacts 1 and 4 are infected andshow symptoms on day 2 and day 3, respectively. All contacts areasked for quarantine if their infection probability is higher thana threshold. Contact 3 and contact 5 serve quarantine on day 3.Contact 2 and contact 6 start quarantining on day 4.In an infectious disease crisis, we can use whatever data is avail-able to construct such a branching process model. Many of therequired components are distributions that are often estimated byepidemiologists in the early stages of an outbreak. We describedistributions we used to simulate SARS-CoV-2 and their sourcesin Tab. 1. Components that are not known can be filled in conser-vatively or sensitivity analysis can be performed. In some cases,distributional estimates can be shared across diseases—for exam-ple, POLYMOD [ 14] provides contact distributions for the US andWestern European settings for both droplet and physical contact.The superspreading dynamics of infection can be impactful becauseit is often that most transmission is driven by a small number ofseed cases, and this concentration can be exploited by control poli-cies [ 17]. 
Nevertheless, superspreading dynamics are often poorlyunderstood, especially early in a crisis and greater understandingwould benefit approaches such as this paper’s.We define the objective function as(−S1−α2×S2−α3×S3)/cluster_size (1)where•S1is the count of transmission days where an infected indi-vidual is not quarantined,•S2is the count of days where a quarantined individual is notinfected, and α2(which we assume is in [0,1]) is the weightfor this term,•S3is the sum of the action costs (e.g., test cost) and α3is theweight for this term, and•cluster_size normalizes the objectives to a score per indi-vidual.In summary, the objective function seeks to minimize the numberof transmission days (i.e., days where an individual is infectiousUsing Reinforcement Learning for Multi-Objective Cluster-Level NPI Optimization epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USATable 1: Parameters of the SARS-CoV-2 branching process modelParameter Assumed value Details and referencesIncubation timeLog-normal: Log mean 1.57days and log std 0.65 daysMean: 5.94 days. Bi et al. [2]Duration of infectious period7 days—2 days before and5 days after onset if symptomaticBi et al. [2]Probability that an infectedindividual shows symptoms0.8 Buitrago-Garcia et al. [3]Probability of symptomswithout infectiousness0.01 per day Perrault et al. [17]Probability of asymptomatic infection 0.2 Buitrago-Garcia et al. [3]Probability of highly transmissive 0.109 Perrault et al. [17]Infectiousness multiplier forhighly transmissive individuals24.4 Perrault et al. [17]Test parametersTP = 0.86, FP = 0.66TN = 0.14, FN = 0.34Besutti et al. [1]DelaysObservation Delay = 3 daysTest Result Delay = 1 dayAssumedbut not quarantined), minimize the number of days of non-effectivequarantine, and minimize the cost associated with actions.We consider two action types. Quarantine-type actions reduce thenumber of transmission days for an agent. The simplest quarantine-type action causes an agent to not produce a transmission daywith probability 1 and incurs no additional cost. A more complexquarantine-type action may work probabilistically (because an indi-vidual may not choose to quarantine if directed), incur an additionalcost (e.g., the cost of checking in with that individual by phone), ormay be coupled with a sensing action (see below). Quarantine-typeactions are that they contribute to S2if the individual quarantinesand is not infected.Sensing-type actions do not directly affect the number of trans-mission days directly. Instead, they reveal information about anindividual’s infectious state according to a probability distribution.For example, if someone has had known exposure to someone in-fected, but he/she doesn’t show the symptoms. With antigen tests,we can know whether this person is infected or not. Actions cancombine both sensing and quarantine, e.g., an action that performsan antigen test and then quarantines if the result is positive.3 APPROACHWe show that the optimization problem from the previous sectioncan be formulated as a partially observable Markov decision pro-cess (POMDP). However, solving this POMDP directly is wildlyintractable. 
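For reference, the per-cluster objective of Equation (1) against which any such policy is evaluated can be computed as follows (a minimal sketch; the per-agent-day tuple format and variable names are ours):

```python
def cluster_objective(days, alpha2, alpha3, cluster_size):
    """Equation (1): (-S1 - alpha2*S2 - alpha3*S3) / cluster_size.

    `days` is an iterable of (infectious, quarantined, action_cost) tuples,
    one entry per agent-day in the cluster.
    """
    s1 = sum(1 for inf, q, _ in days if inf and not q)   # unisolated transmission days
    s2 = sum(1 for inf, q, _ in days if q and not inf)   # non-effective quarantine days
    s3 = sum(cost for _, _, cost in days)                # total action (e.g., test) cost
    return (-s1 - alpha2 * s2 - alpha3 * s3) / cluster_size
```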
Some hope arrives from the result that, under a simplified model that contains no sensing-type actions, the POMDP can be solved optimally if the probability that an individual is infectious can be estimated—itself a challenging problem due to the high-dimensional observation space. Motivated by this conclusion, we formulate our solution approach: we use a convolutional neural network (CNN) to estimate the probability of infectiousness for each individual in a cluster, and this output, along with cluster-wide statistics, serves as the state for the RL agent.

3.1 POMDP Formulation
We define a POMDP [6] as ⟨S, A, R, P, Ω, O, γ, S0⟩, where S and A represent the state and action spaces, respectively, R: S×A→R is the reward function, P: S×A→ΔS is the transition function, Ω is the observation space, O: S×A→ΔΩ gives the observation probabilities, γ ∈ [0,1] is the discount factor, and S0 ∈ ΔS is the distribution of initial states.

We briefly describe how to interpret the control problem of the previous section as a POMDP. We define the state space as containing all of the relevant information required to simulate the cluster, including whether the seed case is highly transmissive, whether each contact of the seed case will become infected, and whether they will show symptoms and, if so, on what day. This simulator data cannot be observed directly—instead we must rely on receiving action-dependent observations. We define the action space as the set of daily quarantine and sensing actions that are available for each individual in the cluster. For instance, in our experiments, we consider five actions: no quarantine and no test, quarantine and no test, test and no quarantine, test and quarantine, and test and quarantine only if positive. If we have N individuals in the cluster, we have an action space of size |A|^N. For observations, we receive two types of information from each individual at each timestep: symptom information and test results. We receive test results only when a sensing-type action is taken, and these results are noisy (Tab. 1). Similarly, we always observe symptoms if they are present, but both infectiousness without symptoms and symptoms without infectiousness are possible. The resulting observation space size is 4^N.

In principle, solving the POMDP formulation yields the optimal control policy. In practice, exact solving is not possible due to the high computational complexity of the best-known algorithms. A particular source of difficulty is calculating the posterior probability of infection for each individual given the observations. A key challenge is that the variation in infectiousness of the seed case causes the posterior probability of infection for each individual to depend on the observations for all other individuals. Intuitively, observing symptoms or positive test results for one individual makes it more likely that the seed case is highly transmissive and thus more likely that each other individual is infected.

3.2 Optimal Policy Without Sensing Actions
We first consider a simplified POMDP where the only actions available are a quarantine action and a null (no-quarantine) action. We show that, if the posterior probability of infection can be calculated exactly, the optimal policy has a threshold-type form: if the posterior probability of infection is above a threshold, we quarantine and otherwise do not.
We show this initially for a costless quarantine action with 100% efficiency, as this is what we use in experiments (Thm. 1). We then generalize the result to any menu of non-sensing actions, because the expected reward of each action can be exactly calculated given the posterior probability of infection (Thm. 2). We remark that these results provide additional context to the findings of Perrault et al. [17] by defining the class of optimal risk-based policies.

Let p_inf represent the posterior probability of infection for an individual given the observations so far.

Theorem 1. With a costless quarantine action that is always successful, a null action, and the objective function of Eq. 1, the optimal policy is to quarantine if p_inf > α2/(1 + α2) and take the null action otherwise.

Proof. Because we have access to the exact posterior probability of infection, we can calculate the expected objective value of each action exactly:

E[r] = −α2 · (1 − p_inf) if quarantined, and E[r] = −p_inf if not quarantined. (2)

We can then show that if p_inf > α2/(1 + α2), the quarantine action has the higher expected reward. □

We can use the above proof technique to derive the optimal policy for any menu of non-sensing actions. A useful generalization is when the quarantine action has a cost and a failure rate.

Theorem 2. With a quarantine action with success rate 0 ≤ β ≤ 1 and cost 1 and a null action, the optimal policy is to quarantine if p_inf > (α2 · β + α3)/((1 + α2) · β) and otherwise do not.

These results highlight the importance of the posterior probability of infection. We next dedicate our attention to producing useful estimates of p_inf.

3.3 Supervised Learning
We could use RL directly to solve the POMDP using the observation information as the state. Indeed, we show that this is somewhat effective if we leverage the state representation we develop in the next section. However, since we know the unobserved infectious state of each agent in simulation, we hypothesize that using a supervised learning model to predict p_inf and using this prediction as input to the RL algorithm will lead to better objective values than pure RL (and in the experiments, we see that the improvement is often substantial). Another option for estimating p_inf would be an algorithm for approximate probabilistic inference such as Markov chain Monte Carlo, but doing so is challenging due to the high-dimensional discrete observation space, where most observations have zero probability for a given state of infectiousness.

A key question for applying supervised learning is how to represent the observation space. We have two desiderata. First, we would like the representation to not vary with cluster size. We can also achieve this property in the RL agent, resulting in an agent that can simultaneously be deployed across all cluster sizes, which makes both training and deployment simpler. Second, there is an advantage to using a representation that inherently accounts for the symmetries arising from the ordering of individuals, i.e., permuting the order of individuals in an observation should not affect p_inf for any individual.

After testing several representations that satisfy these properties, we arrive at the 7×T matrix shown in Fig. 3, where T is the simulation length (in our experiments, T = 30). This is an egocentric representation of the observation—it is from the perspective of a particular contact and contains all information gathered so far.
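The decision rules of Theorems 1 and 2, which such an estimate ultimately feeds, are simple enough to state directly; a minimal sketch (here `p_inf` denotes the estimate produced by the supervised model described in this section):

```python
def quarantine_costless(p_inf, alpha2):
    """Theorem 1: quarantine iff p_inf > alpha2 / (1 + alpha2)."""
    return p_inf > alpha2 / (1 + alpha2)

def quarantine_costly(p_inf, alpha2, alpha3, beta):
    """Theorem 2: quarantine action with success rate beta and cost 1;
    quarantine iff p_inf > (alpha2 * beta + alpha3) / ((1 + alpha2) * beta)."""
    return p_inf > (alpha2 * beta + alpha3) / ((1 + alpha2) * beta)

# Example: alpha2 = 0.01 gives a threshold of about 0.0099, i.e. the Threshold
# baseline quarantines at roughly a 1% posterior probability of infection.
```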
Wetrain the supervised learning model fto produce output dimension[0,1]T, i.e., for every day of the simulation, what is the probabil-ity that the agent will be infectious given the observation usingsimulation outputs where the infectiousness of each individual isprovided.The representation contains the following information. The firstrow is 1 for each day after (inclusive) that the individual showssymptoms. The second row is a binary indicator of whether thisday is in the future (1 if yes). The third row is a count of the numberof individuals in the cluster that have shown symptoms up to (in-clusive) day t. The fourth row is the total number of contacts in thecluster minus 1 (constant across time). The fifth row is t. The sixthrow is 1 if a test was conducted for this individual, and the sixthrow represents the results of that test (with a one-day delay). In row2, 0s are used to indicate that observation was made by this dayand 1s represent the future. In row 6 and 7, 0s are used to representthe future (no test was ordered and no results were received).We will show that this representation can achieve an AUC of0.95 to predict infectiousness for our branching process model ifan appropriate architecture is selected.0111...0001...3333...9999...0123...0110...0010...Symptoms shown by day t?Total symptom count in clusterCluster Size - 1tTest on day t?Day t-1 test positive?0 for past and present, 1 for futureFigure 3: The observation representation used for supervisedlearning, shown on a cluster of size 10 after observing theoutcome of day 2.Using Reinforcement Learning for Multi-Objective Cluster-Level NPI Optimization epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA2D ConvolutionLinear Layer0.080.170.250.60.40.5011001000101010001Observed StateInfection Probabilitypinf for past three dayspinf for next three daysSymptom indicator for past three daysTest indicator for past three daysTest results for last three daysCluster SizeCNNNumber of tests run across cluster in past three days7*30Input MatrixFigure 4: The supervised learning (CNN) output is used asinput to the RL state which prioritizes immediately relevantinformation.3.4 Reinforcement LearningTo make RL effective, we develop a compact state representationthat includes supervised learning outputs. As with supervised learn-ing, we want the representation to have the same size for all clustersand to naturally encode permutation invariance. The representa-tion we use is a 7×3matrix shown in Fig. 4. As with the suprvisedlearning representation, it is egocentric and time-specific.The first and second rows represent the pinfoutputs from super-vised learning for the last three days and next three days, respec-tively. The third row indicates whether the individual exhibitedsymptoms for each day in the past three days. The fourth row is anindicator for if this individual was tested for each of the past threedays. The fifth row denotes the test results with a one day delay.The sixth row is the cluster size. The last row indicates the numberof tests conducted in the cluster in the past three days.Training the RL algorithm is straightforward. First, we train thesupervised learning predictor from data collected from the simula-tor. In our experiments, we use a fixed but stochastic control policyto collect this data. This has the advantage that a single supervisedlearning training run can serve as input to an arbitrary number ofRL training. 
If the optimal policies are dramatically different thanthe data collection policy, an addition run of supervised learningtraining can be performed with the current RL policy to increaseits accuracy.Once the supervised learning predictor is trained, we train RLwith Proximal Policy Optimization (PPO) [ 19]. In our experiments,we use six different policy initializations, train each for 800000environment interactions and pick the best based on 100 evaluationruns. All training is performed on a single core, using Intel i5-8259U@2.3GHz with 8GB of RAM, and a single RL training run takes 20minutes.4 EXPERIMENTSWe compare different control policies in the branching processenvironment we construct for SARS-CoV-2. We consider a set offive control actions for each individual for each day: null action,quarantine, test but don’t quarantine, quarantine but don’t test, andtest and quarantine only if results are positive. We assume thatthere is no failure rate for actions, and all actions that include a testcost 1 and others are costless. For α2, we use small values of 0.01and 0.02as typical SARS-CoV-2 contact tracing policies accept alarge number of quarantine days for non-infectious individuals. Forα3, we use values of 0.001,0.005,0.01,0.02,0.03and 0.2. We samplecluster size from a uniform distribution on (2, 40). The model codeis available online (https://github.com/XueqiaoPeng/CovidRL).4.1 Supervised Learning ModelWe experiment with a variety of supervised learning model archi-tectures (Tab. 2) to find one that achieves a high AUC across clustersizes. We find that CNNs are generally most effective and comparedifferent kernels and layer structures. In single layer architectures,we find that larger 2D convolutions tend to achieve higher AUC.We then found that a single convolution layer followed by a linearlayer performs just as well as deeper architectures—this setup of a(5, 2) 2D convolution followed by a linear layer is what we use inthe experiments below.Table 2: We find that two-layer architectures using a 2D con-volution followed by a linear layer achieve performance onpar with larger models.Cluster size = 4 8 16 321 LayerConv1d (5,2) 0.798 0.807 0.823 0.830Conv1d (5,3) 0.814 0.830 0.835 0.839Conv2d (5,2) 0.800 0.814 0.827 0.830Conv2d (5,3) 0.832 0.820 0.838 0.840Conv2d (5,4) 0.858 0.849 0.843 0.859Conv2d (5,5) 0.864 0.895 0.893 0.8932 LayerConv1d (5,2)0.824 0.830 0.833 0.840Conv1d (1,2)Conv2d (5,3)0.883 0.903 0.898 0.897Conv2d (1,3)Conv2d( 5,2)0.955 0.960 0.947 0.961Linear LayerConv2d (5,3)0.951 0.960 0.940 0.964Linear Layer3 LayerConv1d (5,3)0.958 0.957 0.950 0.961 Conv1d (1,3)Linear Layer4 LayerConv1d (4,3)0.958 0.958 0.953 0.965Conv1d (2,3)Conv1d (1,3)Linear Layer4.2 Benchmark PoliciesWe compare the RLSL approach we propose to several baselines.•Threshold is the threshold-type policy suggested in Sec. 3.2.It does not use test actions. 
This policy turns out to be highlyconservative and results in long quarantine duration for allcontacts for the tested α2values.epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Xueqiao Peng, Jiaqi Xu, Xi Chen, Dinh Song An Nguyen, and Andrew PerraultTable 3: RLSL achieves higher objective values (higher is better) than baselines across all tested α2andα3.α2=0.01α3=0.001α2=0.01α3=0.005α2=0.01α3=0.01α2=0.01α3=0.02α2=0.01α3=0.03α2=0.01α3=0.2α2=0.02α3=0.001α2=0.02α3=0.005α2=0.02α3=0.01α2=0.02α3=0.02α2=0.02α3=0.03α2=0.02α3=0.2RLSL (Ours) −3.77±0.25 −10 .27±0.15 −17 .13±0.48−44.22±0.84−46.46±1.47−110.92±1.54 −4.01±0.21 −17 .64±0.32 −25 .39±0.48−49.28±0.66−64.45±0.83−120.21±0.22Threshold−21.79±0.20−21.79±0.20−21.79±0.20 −21 .79±0.20 −21 .79±0.20 −21 .79±0.20−43.65±0.32−43.65±0.32−43.65±0.32 −43 .65±0.32 −43 .65±0.32 −43 .65±0.32Symptom-BasedQuarantine−111.13±14.18−111.13±14.18−111.13±14.18−111.13±14.18−111.13±14.18−111.13±14.18−112.60±11.94−112.60±11.94−112.60±11.94−112.60±11.94−112.60±11.94−112.60±11.9414 DaysQuarantine−97.18±9.97−97.18±9.97−97.18±9.97−97.18±9.97−97.18±9.97−97.18±9.97−106.63±11.00−106.63±11.00−106.63±11.00−106.63±11.00−106.63±11.00−106.63±11.00No Quarantine−235.98±18.53−235.98±18.53−235.98±18.53−235.98±18.53−235.98±18.53−235.98±18.53−242.16±20.38−242.16±20.38−242.16±20.38−242.16±20.38−242.16±20.38−242.16±20.38Table 4:S1,S2andS3per individual compared across different cluster sizes (lower is better), using α2=0.01andα3=0.01. Evenrelatively conservative strategies such as 14-day quarantine from exposure fail to isolate some infections in our simulation.RLSL can benefit substantially from the additional information available in large clusters resulting in strong performance withlow test costs.Cluster size = 4 Cluster size = 8 Cluster size = 16 Cluster size = 32S1 S2 S3 S1 S2 S3 S1 S2 S3 S1 S2 S3RLSL 0.064±0.008 6.808±0.184 10.144±0.052 0.077±0.012 7.552±0.099 11.825±0.056 0.075±0.011 10.033±0.127 11.253±0.087 0.054±0.007 8.259±0.090 10.808±0.134Threshold 0.078±0.013 16.012±0.211 - 0.063±0.013 17.656±0.198 - 0.05±0.008 19.681±0.173 - 0.016±0.003 20.701±0.319 -Symptom-BasedQuarantine1.418±0.199 0.236±0.029 - 1.207±0.187 0.239±0.014 - 1.196±0.052 0.232±0.017 - 1.072±0.146 0.261±0.042 -14-dayQuarantine1.042±0.072 2.469±0.113 - 0.965±0.082 2.440±0.144 - 0.973±0.114 2.291±0.125 - 0.929±0.107 2.004±0.155 -No Quarantine 2.361±0.195 - - 2.597±0.282 - - 2.075±0.203 - - 1.856±0.173 - -Table 5: In cases where test costs are higher, RLSL produces polices that test too often, resulting in lower performance thanRLSL models with only quarantine actions—we discuss potential fixes.α2=0.01α3=0.001α2=0.01α3=0.005α2=0.01α3=0.01α2=0.01α3=0.02α2=0.01α3=0.03α2=0.01α3=0.2α2=0.02α3=0.001α2=0.02α3=0.005α2=0.02α3=0.01α2=0.02α3=0.02α2=0.02α3=0.03α2=0.02α3=0.2RLSL −3.77±0.25 −10 .27±0.15−17 .13±0.48−44.22±0.84−46.46±1.47−110.92±1.54 −4.01±0.21 −17 .64±0.32−25 .39±0.48−49.28±0.66−64.45±0.83−120.21±0.22RLSL (Daily Test) −4.30±0.42−13.15±0.15−24.46±0.17−45.62±1.27−74.68±0.2−737.78±3.33−12.81±0.55−23.72±0.47−27.25±0.58−50.50±0.11−75.88±0.26−739.98±1.516RLSL (No Test) −34.56±0.39−34.56±0.39−34.56±0.39−34.56±0.39−34.56±0.39−34.56±.39−52.92±0.13−52.92±0.13−52.92±0.13−52.92±0.13−52.92±0.13−52.92±0.13RL Only−14.64±0.79−20.32±0.83−34.02±0.70−46.10±1.14−53.22±1.01−84.35±1.04−15.36±0.76−25.66±0.56−39.80±0.39−63.07±0.81−70.56±0.827−162.4±2.36Threshold (SL Only) −21.79±0.20−21.79±0.20−21.79±0.20 −21 .79±0.20−21 .79±0.20 −21 .79±0.20−43.65±0.32−43.65±0.32−43.65±0.32 −43 .65±0.32 −43 .65±0.32 −43 .65±0.32•Symptom-Based 
Quarantine quarantines if an individualexhibits symptoms on the day before the observed day andotherwise does not.•14-Day Quarantine quarantines individuals from the initialday they exhibit symptoms until either 14 days have passedor until they no longer exhibit symptoms, whichever is later.No test action is included.•No Quarantine always performs the null action.4.3 AnalysisOur experimental results report the average objective value andstandard error taken over 10 random clusters (Tab. 3). We find thatRLSL and Threshold acheive better performance than baselines inall cases. However, our current methods for RLSL struggle relativeto Threshold when tests are expensive. Our experimental resultscould be broadened by including more αvalues and more analysis asto where the RLSL policies gain their advantage (but see discussionof Tab. 5 below for some insights).Focusing on the setting of α1=0.01andα2=0.01, we reportobjective values broken out by component and by cluster size asmeasured per individual (Tab. 4). Here we can get an intuitive graspof what is happening in the different policies. Threshold aggres-sively quarantines, resulting in S2=16–20, i.e., 16–20 days ofquarantine without infection per contact, for the tested αvalues.This is able to drive S1to a low value, resulting in an average objec-tive value of−21.79. Recall that S1is much more highly weighted(100 times) higher than S2in this setting. Symptom-based and 14-day quarantine reduce S2by a factor of 8 to 100, but this causes S1to be roughly 150 to 200 times higher. By leveraging tests, RLSLcan reduceS2by a factor of 2–3 and S1by a factor of 0.8–3.5.In the ablation study (Tab. 5), we gain a more detailed view intothe operation of the RLSL policy. We see that the introduction of theSL outputs to the RL state results in better performance in all testedscenarios compared to RL Only, which uses the state representationof Fig. 4 without the first two rows.We can observe limitations of the supervised infectiousness pre-diction model in Tab. 4, where the S2cost does not decrease ascluster size increases—from Thm. 1, we can conclude that if pinfis correct, the ratio of S1toS2should not depend on cluster sizefor Threshold. There are several possible causes of this issue. First,the SL model outputs might be miscalibrated, as is often the casefor neural networks trained on highly imbalanced data. This issuecould be fixed with post-hoc calibration such as Platt scaling [ 18].In this instance, a more sophisticated calibration could be employedwith separate calibration parameters per cluster size, if necessary.Second, it may be the case that the SL model outputs are wrong forreasons other than calibration. For example, it may receive insuffi-cient relevant training data as it is trained on data produced from arandom policy and not Threshold or RLSL. It is also possible thatwe performed insufficient architecture search.We also see that RLSL (No Test) often performs better than RLSLas test costs increase. This suggests that RLSL is not finding a trueUsing Reinforcement Learning for Multi-Objective Cluster-Level NPI Optimization epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USAoptimal policy. This could likely be address by using a wider rangeof initialization values for RLSL—for example, initializing someseeds to policies that test very little (the initialization we use forRLSL and RL Only tests heavily). This observation has a silverlining: RL (No Test) can achieve much stronger performance thanbaselines even without tests. 
This implies that RLSL (No Test) is able to correct for the errors in Threshold to find a policy closer to what is suggested by Thm. 1.

5 DISCUSSION AND FUTURE WORK
This work aims to develop a generic multi-objective optimization approach for cluster-level optimization of NPIs. We formulate this problem for RL in a branching process environment. We present initial results that demonstrate the potential of our approach—in a branching process model of SARS-CoV-2, we can achieve substantially higher objective values than baseline policies. The resulting policies can be applied across all cluster sizes and do not take much time to train on consumer hardware. The policies we propose are able to heavily exploit superspreading dynamics.
Our vision for an infectious disease crisis is that a canonical probabilistic model of the disease is constructed and updated throughout the crisis. The model can be constructed from estimates of key disease parameters that are made from various sources throughout a crisis and can reflect uncertainty in these estimates. We advocate that superspreading dynamics be given substantial attention in these early stages due to the substantial influence on interventions that we find they can have. Using this canonical model, a branching process environment can be constructed and optimized against, as we propose in this paper. We do not consider uncertainty in the parameters of this model, but it is possible to do so with existing techniques, and this leads to different RL algorithmic choices depending on the form of the uncertainty and the desired objective.
A key disadvantage of our approach as presented is the complexity of the resulting policies. For instance, executing our RLSL policy requires training and drawing outputs from two neural networks. In contrast, policies that were employed in the SARS-CoV-2 pandemic consisted of short lists of rules. We believe that this is not an inherent weakness of our approach—we can leverage interpretable ML and RL techniques to "distill" the RLSL policies into, say, low-depth decision trees, allowing them to be applied at scale with low logistical cost. There will be some decrease in quality, but we suspect still a substantial advantage over baselines.
An area for future study is the cost and benefit of taking a cluster- rather than individual-level view of policy application. This imposes additional logistical costs, and the benefit depends on the degree of cluster-level transmission heterogeneity that is present. This trade-off is not well understood and is a critical area for future work.

REFERENCES
[1] Giulia Besutti, Paolo Giorgi Rossi, Valentina Iotti, Lucia Spaggiari, Riccardo Bonacini, Andrea Nitrosi, Marta Ottone, Efrem Bonelli, Tommaso Fasano, Simone Canovi, et al. 2020. Accuracy of CT in a cohort of symptomatic patients with suspected COVID-19 pneumonia during the outbreak peak in Italy. European Radiology 30 (2020), 6818–6827.
[2] Qifang Bi, Yongsheng Wu, Shujiang Mei, Chenfei Ye, Xuan Zou, Zhen Zhang, Xiaojian Liu, Lan Wei, Shaun A Truelove, Tong Zhang, et al. 2020. Epidemiology and transmission of COVID-19 in 391 cases and 1286 of their close contacts in Shenzhen, China: a retrospective cohort study. The Lancet Infectious Diseases 20, 8 (2020), 911–919.
[3] Diana Buitrago-Garcia, Dianne Egli-Gany, Michel J Counotte, Stefanie Hossmann, Hira Imeri, Aziz Mert Ipekci, Georgia Salanti, and Nicola Low. 2020. Occurrence and transmission potential of asymptomatic and presymptomatic SARS-CoV-2 infections: A living systematic review and meta-analysis.
PLoS medicine 17, 9(2020), e1003346.[4]Xingran Chen, Hesam Nikpey, Jungyeol Kim, Saswati Sarkar, and Shirin Saeedi-Bidokhti. 2023. Containing a spread through sequential learning: to exploit or toexplore? arXiv preprint arXiv:2303.00141 (2023).[5]Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep learning . MITpress.[6]Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. 1998. Plan-ning and acting in partially observable stochastic domains. Artificial intelligence101, 1-2 (1998), 99–134.[7]Matt J Keeling, T Deirdre Hollingsworth, and Jonathan M Read. 2020. Efficacy ofcontact tracing for the containment of the 2019 novel coronavirus (COVID-19). JEpidemiol Community Health 74, 10 (2020), 861–866.[8]Cliff C Kerr, Robyn M Stuart, Dina Mistry, Romesh G Abeysuriya, KatherineRosenfeld, Gregory R Hart, Rafael C Núñez, Jamie A Cohen, Prashanth Selvaraj,Brittany Hagedorn, et al .2021. Covasim: an agent-based model of COVID-19dynamics and interventions. PLOS Computational Biology 17, 7 (2021), e1009149.[9]Varun Kompella, Roberto Capobianco, Stacy Jong, Jonathan Browne, Spencer Fox,Lauren Meyers, Peter Wurman, and Peter Stone. 2020. Reinforcement learningfor optimization of COVID-19 mitigation policies. arXiv preprint arXiv:2010.10560(2020).[10] Mirjam E Kretzschmar, Ganna Rozhnova, and Michiel Van Boven. 2021. Isolationand contact tracing can tip the scale to containment of COVID-19 in populationswith social distancing. Frontiers in Physics (2021), 677.[11] Shengjie Lai, Nick W Ruktanonchai, Liangcai Zhou, Olivia Prosper, Wei Luo,Jessica R Floyd, Amy Wesolowski, Mauricio Santillana, Chi Zhang, Xiangjun Du,et al.2020. Effect of non-pharmaceutical interventions to contain COVID-19 inChina. nature 585, 7825 (2020), 410–413.[12] Eli Meirom, Haggai Maron, Shie Mannor, and Gal Chechik. 2021. Controllinggraph dynamics with reinforcement learning and graph neural networks. InInternational Conference on Machine Learning . PMLR, 7565–7577.[13] Michela Meister and Jon Kleinberg. 2023. Optimizing the order of actions in amodel of contact tracing. PNAS Nexus (2023).[14] Joël Mossong, Niel Hens, Mark Jit, Philippe Beutels, Kari Auranen, Rafael Mikola-jczyk, Marco Massari, Stefania Salmaso, Gianpaolo Scalia Tomba, Jacco Wallinga,et al.2008. Social contacts and mixing patterns relevant to the spread of infectiousdiseases. PLoS medicine 5, 3 (2008), e74.[15] Han-Ching Ou, Haipeng Chen, Shahin Jabbari, and Milind Tambe. 2021. Activescreening for recurrent diseases: A reinforcement learning approach. arXivpreprint arXiv:2101.02766 (2021).[16] Han-Ching Ou, Arunesh Sinha, Sze-Chuan Suen, Andrew Perrault, Alpan Raval,and Milind Tambe. 2020. Who and when to screen: Multi-round active screeningfor network recurrent infectious diseases under uncertainty. (2020).[17] Andrew Perrault, Marie Charpignon, Jonathan Gruber, Milind Tambe, andMaimuna Majumder. 2020. Designing efficient contact tracing through Risk-BasedQuarantining . Technical Report. National Bureau of Economic Research.[18] John Platt et al .1999. Probabilistic outputs for support vector machines and com-parisons to regularized likelihood methods. Advances in large margin classifiers10, 3 (1999), 61–74.[19] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347(2017).[20] Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An intro-duction . 
MIT press.[21] Xutong Wang, Zhanwei Du, Emily James, Spencer J Fox, Michael Lachmann,Lauren Ancel Meyers, and Darlene Bhavnani. 2022. The effectiveness of COVID-19 testing and contact tracing in a US city. Proceedings of the National Academyof Sciences 119, 34 (2022), e2200652119. |
fhxHhXTnHc | Accurate Measures of Vaccination andConcerns of Vaccine Holdouts from Web Search LogsSerina Chang†Stanford Universityserinac@cs.stanford.eduAdam FourneyMicrosoftadam.fourney@microsoft.comEric HorvitzMicrosofthorvitz@microsoft.comABSTRACTTo design effective vaccine policies, policymakers need detaileddata about who has been vaccinated, who is holding out, and why.However, existing data in the US are insufficient: reported vacci-nation rates are often delayed or missing, and surveys of vaccinehesitancy are limited by high-level questions and self-report biases.Here, we show how large-scale search engine logs and machinelearning can be leveraged to fill these gaps and provide novel in-sights about vaccine intentions and behaviors. First, we developavaccine intent classifier that can accurately detect when a useris seeking the COVID-19 vaccine on search. Our classifier demon-strates strong agreement with CDC vaccination rates, with corre-lations above 0.86, and estimates vaccine intent rates to the levelof ZIP codes in real time, allowing us to pinpoint more granulartrends in vaccine seeking across regions, demographics, and time.To investigate vaccine hesitancy, we use our classifier to identifytwo groups, vaccine early adopters andvaccine holdouts . We findthat holdouts, compared to early adopters matched on covariates,are 69% more likely to click on untrusted news sites. Furthermore,we organize 25,000 vaccine-related URLs into a hierarchical ontol-ogy of vaccine concerns, and we find that holdouts are far moreconcerned about vaccine requirements, vaccine development andapproval, and vaccine myths, and even within holdouts, concernsvary significantly across demographic groups. Finally, we explorethe temporal dynamics of vaccine concerns and vaccine seeking,and find that key indicators emerge when individuals convert fromholding out to preparing to accept the vaccine.KEYWORDSCOVID-19, vaccination, search logs, graph machine learningACM Reference Format:Serina Chang†, Adam Fourney, and Eric Horvitz. 2023. Accurate Measuresof Vaccination and Concerns of Vaccine Holdouts from Web Search Logs.InepiDAMIK 2023: 6th epiDAMIK ACM SIGKDD International Workshop onEpidemiology meets Data Mining and Knowledge Discovery, August 7, 2023,Long Beach, CA, USA. ACM, New York, NY, USA, 19 pages.1 INTRODUCTIONCOVID-19 vaccines provide significant protection against severecases of SARS-CoV-2 [ 46,59], yet a large portion of the United†Research performed during an internship at Microsoft.Permission to make digital or hard copies of part or all of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for third-party components of this work must be honored.For all other uses, contact the owner/author(s).epiDAMIK @ KDD’23, August 7 2023, Long Beach, CA©2023 Copyright held by the owner/author(s).States remains unvaccinated. Effective vaccine policies—for exam-ple, where to place vaccine sites [ 49,74], how to communicateabout the vaccine [ 18,72], and how to design campaigns to reachunvaccinated populations [ 5,22,60]—rely on detailed data aboutwho is seeking vaccination, who is holding out, and why. However,existing data are insufficient [ 43]. Reported vaccination rates are fre-quently delayed [ 2], missing at the county-level and below [ 70], andmissing essential demographic data [ 33,42]. 
Surveys provide a start-ing point for understanding vaccine hesitancy but are often limitedby high-level questions [ 16], small or biased samples [ 13,71], andself-reporting biases (e.g., recall or social desirability bias) [ 3,66]especially in sensitive contexts such as vaccination [36].Here, we demonstrate how large-scale search logs from Bingand machine learning (ML) can be leveraged to fill these gaps, en-abling fine-grained estimation of vaccine rates and discovering theconcerns of vaccine holdouts from their search interests. Whilesearch logs are powerful, with widespread coverage, real-time sig-nals, and access to personal interests, the vast amounts of data theyprovide are unlabeled and unstructured, consisting of billions ofnatural language queries and clicks on search results. To derivemeaning from these queries and clicks, we first impose structure byconstructing query-click graphs , which encode aggregated query-click patterns as bipartite networks. Second, using a combinationof semi-supervised graph ML techniques and manual annotation,we develop two computational resources that enable us to extractvaccine behaviors from large unlabeled search logs.First, we develop a vaccine intent classifier that can accuratelydetect when a user is seeking the COVID-19 vaccine on search. Ourclassifier achieves areas under the receiver operating characteristiccurve (AUCs) above 0.90 on held-out vaccine intent labels in allstates, and demonstrates strong agreement with CDC vaccinationrates across states ( r=0.86) and over time ( r=0.89). Using ourclassifier, we can estimate vaccine intent rates to the level of ZIPcode tabulation areas (ZCTAs), approximately 10x the granularityof counties and preceding lags in reporting. We carefully correct forbias in our estimates from non-uniform Bing coverage, and demon-strate minimal additional bias from our classifier, as it achievesequivalent true and false positive rates across regions.Second, we construct a novel ontology of COVID-19 vaccine con-cerns on search. Our ontology consists of 25,000 vaccine-relatedURLs, clicked on by Bing users, that we organize into a hierarchy ofvaccine concerns from eight top categories to 36 subcategories to156 low-level URL clusters. Unlike surveys, our ontology discoversthese concerns directly from users’ expressed interests and exploresthem at multiple scales. Furthermore, by measuring individuals’interest in each concern from their clicks, we capture revealed pref-erences, side-stepping potential biases in self-reporting [24, 66].1epiDAMIK @ KDD’23, August 7 2023, Long Beach, CA S. Chang, A. Fourney, and E. HorvitzCombining our ontology with the vaccine intent classifier al-lows us to conduct a thorough analysis of how individuals’ vaccineconcerns relate to whether they decide to seek the vaccine. Weuse our classifier to identify two groups of users—vaccine earlyadopters and vaccine holdouts—and compare their search behav-iors. We identify significant differences in their vaccine concernsand news consumption; for example, compared to early adoptersmatched on covariates, vaccine holdouts are 69% more likely to clickon untrusted news sites. We find that vaccine concerns also differsignificantly even within holdouts, varying across demographicgroups. 
Finally, we analyze the temporal dynamics of vaccine con-cerns and vaccine seeking, and discover that individuals exhibittelltale shifts in vaccine concerns when they eventually convertfrom holding out to preparing to accept the vaccine.Our contributions can be summarized as follows:(1)A novel vaccine intent classifier, developed with graph MLand human annotation, that achieves AUCs above 0.9 on allstates and strong agreement with CDC vaccination rates;(2)Bias-corrected estimates of vaccine intent rates from ourclassifier, including estimates for over 20,000 ZCTAs;(3)A hierarchical ontology of COVID-19 vaccine concerns, in-cluding 25,000 URLs clicked on by Bing users, 156 URL clus-ters, 36 subcategories, and eight top categories;(4)Analyses of vaccine holdouts’ search concerns and newsconsumption, comparing to early adopters and studyingdynamics over time.We are publicly releasing our code, vaccine estimates, and ontol-ogy.1We hope that our resources, methods, and analyses can pro-vide researchers and public health agencies with valuable insightsabout vaccine behaviors, helping to guide more effective, data-driven interventions.2 DATAOur work uses a variety of datasets, including Bing search logs,CDC vaccination rates, US Census data, and Newsguard labels(Figure 1). Bing is the second largest search engine worldwide andin the US, with a US market share of around 6% on all platforms andaround 11% on desktop [ 65]. Despite having non-uniform coverageacross the US, Bing has enough penetration in the US that we canestimate representative samples after applying inverse proportionalweighting (Section 4). The Bing data we use consist of individualqueries made by users, where for each query, we have informationincluding the text of the query, an anonymized ID of the user, thetimestamp, the estimated geolocation (ZIP code, county, and state),and the set of URLs clicked on, if any. Since our work is motivatedby insufficient vaccine data and vaccine concerns in the US, we limitour study to search logs in the US market. However, the methods weintroduce could be extended to study vaccination rates and vaccineconcerns in other languages and countries. We apply our vaccineintent classifier (Section 3) to all Bing search logs in the US fromFebruary 1 to August 31, 2021.21https://github.com/microsoft/vaccine_search_study.2February 2021 was the earliest that we could study following data protection guide-lines, which allow us to store and analyze search logs up to 18 months in the past.We end in August 2021, since the FDA approved booster shots in September and ourmethod is not designed to disambiguate between vaccine seeking for the primaryseries versus boosters.Bing search logsOntology of vaccine concernsVaccine intent estimatesZIP , county, stateVaccine concerns of holdouts vs. early adoptersMatched vaccine holdouts and early adoptersNews consumption of holdouts vs. early adoptersDemographic trends in vaccine intentNewsguardlabelsCDC vaccination ratesGoogle search trendsUS Census dataVal.Val.Methods: community detection on graphs, manual annotationMethods: PageRank, GNNs, manual annotation, bias correctionExternal dataOur workLegendVal.:validationFigure 1: Our work integrates a variety of datasets and meth-ods to analyze vaccine behaviors from search logs.To evaluate our vaccine intent classifier, we compare it to vacci-nation rates reported by the CDC (Section 4). 
The CDC provides daily vaccination rates at the levels of states [27] and counties [26]. CDC data are essential but limited, with a substantial portion of county-level data missing. These limitations serve as one of the motivations of our work, since we hope that our vaccine intent classifier can serve as a complementary resource to monitor vaccination rates, especially in smaller regions. To characterize demographic trends in vaccine intent, we use data from the US Census' 2020 5-year American Community Survey [15]. To capture political lean, we use county-level data from the 2020 US presidential election [53]. To quantify the trustworthiness of different news sites, we use labels from Newsguard [52]. Finally, to evaluate the representativeness of Bing search trends, we compare them to Google search trends, which are publicly available online [34].
Data ethics. Our work was approved by the Microsoft IRB office and by an internal privacy review process which included officers from both Microsoft Research and the Bing product team. When we use search logs, we are mindful of the need to balance privacy and social benefits when using potentially sensitive user data. While we study individual search logs, since we need to be able to link individual vaccine outcomes (as predicted by our classifier) to search interests, those sessions are assembled using only anonymous user identifiers, which are disassociated from any specific user accounts or user profiles, and cannot be linked to any other Microsoft products. Likewise, in this anonymous view of the logs, location and demographic data were limited to ZIP code-level accuracy. Finally, we are careful to only report results aggregated over thousands of individuals. Aside from Bing search logs, all of the data sources we use are publicly available and aggregated over many individuals.

3 VACCINE INTENT CLASSIFIER
Our first goal is to develop a classifier that can accurately detect when a search user is expressing vaccine intent, i.e., trying to get the COVID-19 vaccine (e.g., book an appointment or find a location). Detecting vaccine intent requires precision: for example, if a user issues the query [covid vaccine], they may be trying to get the vaccine, but they could also be generally curious about vaccine information or eligibility. Thus, we begin by defining a set of regular expressions that allow us to identify vaccine intent queries, i.e., queries that unambiguously express vaccine intent. To be included, the query must include both a COVID-19 term ("covid" or "coronavirus") and a vaccine term ("vaccin", "vax", "johnson", etc.). In addition, the query must satisfy at least one of the following criteria: (1) matching some variant of "find me a COVID-19 vaccine", (2) containing appointment-related words or location-seeking words, or (3) containing a pharmacy name.

Figure 2: Our pipeline of methods to identify a large, high-precision set of vaccine intent URLs: Step 1, URL candidates via personalized PageRank; Step 2, annotation via Amazon Mechanical Turk (annotators are asked, "Given that a person clicked on this page during a search session, how sure are you that this person is seeking to get the COVID-19 vaccine?"); Step 3, URL expansion via graph neural networks.

However, in addition to maintaining high precision, we seek to detect as many users as possible who have expressed vaccine intent, so that we have sufficient statistical power for our downstream analyses. Since our search logs contain both queries and clicks, we lose the opportunity to detect many more users if we only detect vaccine intent based on queries. For example, a user may issue the ambiguous query [covid vaccine], but then click on the URL for the CVS COVID-19 vaccine registration page, thus clarifying their intent through their clicks [61]. The challenge with URLs is that they are less formulaic than queries, so we cannot easily define regular expressions to identify URLs expressing vaccine intent.
Our key insight is that, while we cannot use regular expressions to identify URLs, we can use them to identify vaccine intent queries and then use those queries to identify URLs, based on common query-click patterns. For example, vaccine intent queries such as [cvs covid vaccine] or [covid vaccine near me] may result in clicks on the CVS COVID-19 vaccine registration page. To capture these patterns, we construct query-click graphs [20, 45], which are bipartite networks between queries and URLs where an edge from a query to a URL indicates how often this query is followed by a click on this URL. Specifically, we construct a query-click graph per US state, aggregating over queries and clicks from two representative months in our study period (April and August 2021). Then, our pipeline proceeds in three steps (Figure 2): first, we use personalized PageRank to propagate labels from queries to URLs, so that we can generate a set of URL candidates (Section 3.1); next, we present the URL candidates to annotators on Amazon Mechanical Turk to label as vaccine intent or not (Section 3.2); finally, we use those labels to train graph neural networks (GNNs) so that we can further expand our set of vaccine intent URLs (Section 3.3).

Table 1: Top 5 URLs from Personalized PageRank (S-PPR) for the four largest states in the US.
| State | Top 5 URLs |
| CA | https://myturn.ca.gov/; https://www.cvs.com/immunizations/covid-19-vaccine; https://www.goodrx.com/covid-19/walgreens; https://www.costco.com/covid-vaccine.html; https://www.walgreens.com/topic/promotion/covid-vaccine.jsp |
| NY | https://covid19vaccine.health.ny.gov/; https://www.cvs.com/immunizations/covid-19-vaccine; https://www.walgreens.com/topic/promotion/covid-vaccine.jsp; https://vaccinefinder.nyc.gov/; https://www.goodrx.com/covid-19/walgreens |
| TX | https://www.cvs.com/immunizations/covid-19-vaccine; https://vaccine.heb.com/; https://www.walgreens.com/topic/promotion/covid-vaccine.jsp; https://corporate.walmart.com/covid-vaccine; https://dshs.texas.gov/covidvaccine/ |
| FL | https://www.publix.com/covid-vaccine; https://www.cvs.com/immunizations/covid-19-vaccine; https://www.walgreens.com/topic/promotion/covid-vaccine.jsp; https://floridahealthcovid19.gov/vaccines/; https://www.goodrx.com/covid-19/walgreens |

3.1 Personalized PageRank for URL candidates
Personalized PageRank [14] is a common technique for seed expansion, where a set of seed nodes in a graph are identified as members of a community, and one wishes to expand from that set to identify more community members [40].
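As a minimal sketch of this seed-expansion step (ours, not the paper's code), personalized PageRank can be run from a seed set of queries over a query-click graph using networkx; the edge format, node tagging, and top-k cutoff below are assumptions.

import networkx as nx

def top_url_candidates(query_click_edges, seed_queries, k=100):
    # query_click_edges: iterable of (query, url, click_count) triples.
    G = nx.Graph()
    for query, url, count in query_click_edges:
        G.add_edge(("q", query), ("u", url), weight=count)
    # Restart mass is placed only on the seed queries (uniform over the seed set).
    personalization = {node: 0.0 for node in G}
    for q in seed_queries:
        personalization[("q", q)] = 1.0 / len(seed_queries)
    scores = nx.pagerank(G, alpha=0.85, personalization=personalization, weight="weight")
    # Rank URLs (not queries) by their personalized PageRank score.
    urls = [(node[1], s) for node, s in scores.items() if node[0] == "u"]
    return sorted(urls, key=lambda x: -x[1])[:k]

The union of each state's top-ranked URLs would then form the candidate set described in the text.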
In our case, the vaccine intentqueries act as our seed set, and our goal is to spread the influencefrom the seed set over the rest of the query-click graph. Given aseed setS, personalized PageRank derives a score for each node inthe graph that represents the probability of landing on that nodewhen running random walks from S.We run personalized PageRank from the seed set of vaccineintent queries (S-PRR) to derive scores for all URLs in each query-click graph. Then, we order the URLs from each state according totheir S-PPR ranking and keep the union over states of their top 100URLs as our set of URL candidates, resulting in 2,483 candidates.The number of URLs we have in the union is much lower than thenumber of states multiplied by 100, since there is overlap betweenstates. However, there is also substantial heterogeneity in top URLsacross states, reflecting state-specific vaccine programs and policies(Table 1). By constructing separate graphs and running S-PPR perstate, our approach is uniquely able to capture this state-specificheterogeneity. In supplementary experiments, we show that an al-ternative approach that uses a combined graph over states severelyhurts performance for small states (Section A2.2).S-PPR also provides scores for all queries in the graph, but wefound that the seed set was comprehensive in identifying vaccineintent queries. The top-ranked queries that were not in the seed settended to be location-specific, such as [covid vaccine new york],which is suggestive of vaccine intent but not unambiguous enough.Thus, in the subsequent steps of annotation and GNN expansion,we only seek to add URLs, and consider regular expressions suffi-cient for identifying queries. However, we also selected a sample3epiDAMIK @ KDD’23, August 7 2023, Long Beach, CA S. Chang, A. Fourney, and E. Horvitzof regular expression-detected queries to present to annotators, tovalidate whether they were truly vaccine intent. To capture a di-verse sample, we use the union over the top 5 and bottom 5 queriesper state (ranked by S-PPR), after filtering out queries that wereissued by fewer than 50 users, resulting in 227 queries to label.3.2 Annotation on Amazon Mechanical TurkIn this step, we present our URL candidates (and sampled queries)to annotators on AMT. For each URL, we first present it to threeannotators. If all three give it a positive label (i.e., Highly Likely orLikely), then we label this URL as vaccine intent. If two give it apositive label and one does not, we assign it to one more annotator,and label it as vaccine intent if that annotator gives a positive label.In other words, we require vaccine intent URLs to receive threepositive annotations. With this relatively strict bar, we still find thata large majority (86%) of our URL candidates are labeled as vaccineintent. Furthermore, we observe a clear relationship between S-PPRrank and the percentage labeled as vaccine intent: for example,around 90% of URLs from ranks 0 to 20, around 81% of URLs fromranks 40-60, and around 71% of URLs from ranks 80 to 100 (FigureA2). We also find a very high positive rate (96%) among the queriesthat we tested, thus validating our regular expressions.3.3 Graph neural networks for expansionSince manual annotation is expensive, we wish to augment ourefforts by training ML models on the AMT labels, then use themodels to expand our set of vaccine intent URLs. 
We formulate thisproblem as semi-supervised node classification on a graph, sincethe URLs are nodes in the query-click graph and we are trying topredict whether a URL indicates vaccine intent or not, given labelsfor a subset of URLs. In this section, we provide an overview of ourmodeling procedure, with details in Section A1.GNN architecture and training. To solve this problem, we designa GNN [ 39] that consists of character-level convolutions (CNN)and graph convolutions. We use the CNNs to capture textual infor-mation in the queries and URLs, since text can be informative forthis problem (e.g., the appearance of “vaccine”). The graph convo-lutions allow us to learn representations of URLs that draw fromthe representations of their neighboring queries, which draw fromthe representations of their neighboring URLs, and so on. In thisway, we can capture “similar” URLs in embedding space (similar interms of both text and graph structure).To train and test our model, we randomly split the URL labelsinto a train set (60%), validation set (15%), and test set (25%). How-ever, some states have much smaller graphs, and therefore, fewerpositive and negative labels. For example, for Wyoming, we onlyhave 245 positive and 276 negative URLs. We find that with suchfew labels, the model cannot adequately learn how to predict vac-cine intent, with AUCs far below those of large states (Table A1). Toaddress this issue, we pre-train the model on S-PPR rankings, whichrequires no additional supervision. Our intuition is that S-PPR al-ready performed remarkably well at predicting vaccine intent, aswe discussed in the prior section. Furthermore, S-PPR rankings donot require any manual labels; we derive them entirely from ourinitial vaccine intent queries, which were automatically labeledusing regular expressions. This pre-training encourages the modelto learn URL representations that are predictive of S-PPR rankings,which we find help substantially with predicting vaccine intent.Evaluating GNN performance. We evaluate model performanceby computing its AUC on the held-out test set. Furthermore, toaccount for randomness from model training and data splitting,we run 10 random trials for every model/state, where in each trial,we re-split the URL labels, retrain the model on the train set, andre-evaluate the model’s performance on the test set. First, we findthat pre-training significantly improves performance for the smallerstates; for example, the mean AUC for Wyoming increases from 0.74to 0.95 (Figure 3a, Table A1). We find that pre-training seems un-necessary for the larger states, such as Connecticut and Tennesssee,where we are already achieving high AUCs above 0.98. After in-corporating pre-training for smaller states (fewer than 5,000,000nodes), we are able to achieve AUCs above 0.90 for all 50 states andabove 0.95 for 45 states (Figure 3b).Discovering new vaccine intent URLs. Finally, we use our trainedGNNs to identify new vaccine intent URLs. In order to decide whichnew URLs to include, we need a score threshold. Our goal is to setthe threshold such that any URL that scores above it is very likelyto truly be vaccine intent (i.e., we want to maintain high precision).Borrowing the idea of “spies” from positive-unlabeled learning [ 8],our idea is to use the held-out positive URLs in the test set todetermine where to set the threshold. 
We consider two thresholds:(1)tmed, the median score of the held-out positive URLs, and (2)tprec, the minimum threshold required to achieve precision of atleast 0.9 on the held-out test set. Then, we only include URLs thatpass both thresholds in at least 6 out of the 10 random trials. Evenwith this strict threshold, we discover around 11,400 new URLs(Table A2), increasing our number of vaccine intent URLs by 10x. Inthe following section, we also evaluate the impact of adding theseURLs on our ability to estimate regional vaccine intent rates. Wefind that the new URLs not only increase our coverage of vaccineintent users by 1.5x but also further improve our agreement withreported vaccination rates from the CDC (Table 2).4 ESTIMATING VACCINE INTENT RATESUsing our classifier, we can estimate regional rates of vaccine intent.In this section, we discuss how we correct for bias in our estimates,validate against CDC vaccination rates, and use our estimates toderive insights about fine-grained vaccination trends.Bias evaluation. In Section A2, we decompose potential bias inour approach into two key sources: first, bias from non-uniformBing coverage, and second, bias from non-uniform true positiverates (TPR) and false positive rates (FPR) of our classifier. We showthat, if we can correct for non-uniform Bing coverage and showthat our classifier’s TPRs and FPRs do not significantly differ acrossregions, our vaccine intent estimates should, theoretically, formunbiased estimates of true vaccination rates. We evaluate our clas-sifier’s TPRs and FPRs on held-out vaccine intent labels, using thesame score threshold we used for discovering new vaccine intentURLs. We find that our classifier does indeed achieve statisticallyequivalent TPRs and FPRs across states (Figure 3b), suggesting thatour classifier contributes minimal additional bias. We discuss belowhow we correct for non-uniform Bing coverage. Additionally, to4Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs epiDAMIK @ KDD’23, August 7 2023, Long Beach, CA(a)(b)Results across all statesWith pre-trainingWithout pre-trainingWith pre-training for smaller statesWyomingArea under ROC curveTrue positive rateFalse positive rateTrue positive rateFalse positive rateFalse positive rate# nodes in state graph# nodes in state graph# nodes in state graphFigure 3: (a) GNN results with and without pre-training for Wyoming, one of the smallest states. Each line represents one of 10random trials. (b) Final GNN results for all 50 states, with pre-training for smaller states. Each dot represents a state, with itsy-coordinate representing the mean metric over 10 trials and grey bars indicating standard deviation.Pipeline step CDC corr. # vaccine intent usersOnly queries 0.62 3.18M+manual URLs 0.80 4.95M+manual and GNN URLs 0.86 7.45MTable 2: Each step of our classification pipeline (Section 3)improves both our correlation with CDC vaccination ratesand our coverage of vaccine intent users.evaluate the representativeness of Bing data, we compare searchtrends for vaccine intent queries between Google and Bing and findthat, even before applying corrections to Bing data, the trends arehighly correlated (Figure A4).Estimating coverage-corrected rates. When we apply our classifierto Bing search logs from Feburary 1 to August 31, 2021, we find 7.45million “active” Bing users who expressed vaccine intent throughtheir queries or clicks. 
We focus on active Bing users, i.e., those who issued at least 30 queries in a month, since we can reliably assign them to a location based on their mode ZIP code (or county or state) from those queries. Given a ZCTA z, we compute N(v̂, z), the number of active Bing users from z for whom we detect vaccine intent. Furthermore, we estimate the ZCTA's Bing coverage as N(b, z)/N(z), where N(b, z) is its average number of active Bing users over the months in our study period and N(z) is its population size from the 2020 5-year American Community Survey [15]. Then, our coverage-corrected vaccine intent estimate p̃(v, z) for ZCTA z is

\tilde{p}(v,z) = \frac{N(\hat{v},z)/N(z)}{N(b,z)/N(z)} = \frac{N(\hat{v},z)}{N(b,z)}.

To estimate the vaccine intent rate for a set Z of ZCTAs, e.g., a state or county, we simply take the population-weighted average.
Comparison to CDC vaccination data. When we compare our vaccine intent estimates to state-level vaccination rates from the CDC, we observe strong correlation (r = 0.86) on cumulative rates at the end of August 2021 (Figure 4). Notably, we find that the correlation drops to r = 0.79 if we do not correct for Bing coverage in our estimates. Furthermore, we find that each step of our classification pipeline—only using queries from regular expressions, incorporating manually annotated URLs from personalized PageRank and AMT, incorporating URLs found by GNNs—improves both our correlation with CDC rates and the number of users we are able to identify (Table 2). Notably, if we only use queries, the correlation drops to r = 0.62 and we lose 57% of the users we identified with our full classifier, demonstrating the value of adding vaccine intent URLs through our graph ML framework.

Figure 4: Comparing CDC state vaccination rates vs. estimated vaccine intent rates from Bing search logs.
Figure 5: Rates over time of first vaccine intent (top) vs. first dose from CDC (bottom) for the four largest states in the US.
Figure 6: (a) Using our classifier, we can estimate vaccine intent rates per ZCTA, approximately 10x the granularity of counties. (b) Zooming in on New York City shows that estimated vaccine intent rates vary substantially across ZCTAs, even within the same city or county. (c) Correlations between ZCTA vaccine intent rates and demographic variables.

Additionally, we compare our vaccine intent estimates to the CDC's vaccination rates over time. We observe strong correlations here as well, especially if we allow the CDC time series to lag behind the vaccine intent time series (Figure 5). With lags of 7-15 days (IQR), the median correlation over states reaches r = 0.89; without a lag, the median correlation drops to r = 0.78. The CDC's lag demonstrates an advantage of our classifier, as it can detect vaccine seeking in real time without delays from reporting.
Granular trends in vaccine seeking. Our vaccine intent classifier allows us to pinpoint who was seeking the COVID-19 vaccine, where, and when.
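Before turning to the granular trends, here is a concrete illustration of the coverage-corrected estimate defined earlier in this section. This is our sketch, not the released code; the pandas DataFrame layout and column names are assumptions.

import pandas as pd

def coverage_corrected_rates(df: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical columns: "zcta", "n_vaccine_intent_users" (N(v̂,z)),
    # "n_active_bing_users" (N(b,z), averaged over months), "population" (N(z)).
    out = df.copy()
    # The N(z) terms cancel, so the estimate reduces to N(v̂,z) / N(b,z).
    out["vaccine_intent_rate"] = out["n_vaccine_intent_users"] / out["n_active_bing_users"]
    return out

def aggregate_rate(df: pd.DataFrame, zctas) -> float:
    # Population-weighted average over a set of ZCTAs (e.g., a county or state).
    sub = df[df["zcta"].isin(zctas)]
    return float((sub["vaccine_intent_rate"] * sub["population"]).sum() / sub["population"].sum())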
We estimate cumulative vaccine intent rates upto the end of August 2021 at the level of ZCTAs (Figure 6a), approx-imately 10x the granularity of counties, which is the finest-grainedvaccination data the CDC provides and, still, with many countiesmissing or having incomplete data [ 70]. We observe substantialheterogeneity in vaccine intent at the ZCTA-level, even within thesame states and counties. For example, when we focus on New YorkCity, we see that Manhattan and Queens have higher vaccine intentrates, and within Queens, ZCTAs in the northern half have higherrates (Figure 6b), aligning with reported local vaccination rates inNew York City [11].We can also use our estimates to characterize demographic trendsin vaccination. When we measure correlations between ZCTA vac-cine intent rate and different demographic variables, we find thatoverall demographic trends from our estimates align closely withprior literature [ 37,41,71,76]. For example, we observe strongpositive correlations with education, income, and population den-sity, and a strong negative correlation with percent Republican(Figure 6c). However, we discover more nuanced trends when welook closer. Demographic trends vary significantly across states(Figure A5), especially for race and ethnicity, and trends changeover time. For example, we estimate that older ZCTAs were muchlikelier to seek the vaccine early in 2021 but this trend fell over time(Figure A6a), reflecting how the US vaccine rollout initially priori-tized seniors [ 38], and we see an increase in vaccine intent frommore Republican ZCTAs in summer 2021 (Figure A6b). Thus, ourclassifier both confirms existing findings and enables new analyseswith finer granularity across regions, demographics, and time.5 SEARCH CONCERNS OF HOLDOUTSWe use our vaccine intent classifier to identify two groups: vaccineearly adopters , who expressed their first vaccine intent before May2021, and vaccine holdouts , who waited until July 2021 to show theirfirst vaccine intent, despite becoming eligible by April.3Comparingthe search interests of these two groups allows us to discover rela-tionships between expressed vaccine concerns, news consumption,and vaccine decision-making. To reduce potential confounding, wematch each holdout with a unique early adopter from the samecounty and with a similar average query count, since we knowthat the populations seeking vaccination changed over time andwe do not want our comparisons to be overpowered by regional ordemographic differences. In our following analyses, we comparethe search interests of the matched sets, with over 200,000 pairs.Vaccine holdouts are more likely to consume untrusted news. First,we analyze the trustworthiness of news sites clicked on by vaccineholdouts versus early adopters. We use ratings from Newsguard,which assigns trust scores to news sites based on criteria suchas how often the site publishes false content and how it handlesthe difference between news and opinion [ 52]. We find that, inthe period while vaccine holdouts were eligible but still holdingout (April to June 2021), holdouts were 69% (95% CI, 67%-70%)likelier than their matched early adopters to click on untrustednews, defined by Newsguard as domains with trust scores below60. Furthermore, we see that as the trust score from Newsguarddegrades, the likelier it was that holdouts clicked on the site, relativeto early adopters (Figure 7a). 
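One possible operationalization of the matched comparison described above is sketched below. This is our illustration, not the paper's definition: the user-ID layout, the pair construction, and the per-user click-rate metric are assumptions.

def untrusted_news_ratio(news_clicks, matched_pairs, trust_scores, cutoff=60):
    # news_clicks: dict user_id -> list of news domains clicked (April-June 2021).
    # matched_pairs: list of (holdout_id, early_adopter_id), matched on county
    #                and average query count.
    # trust_scores: dict domain -> Newsguard trust score; scores below `cutoff`
    #               count as untrusted.
    def untrusted_rate(user_id):
        domains = news_clicks.get(user_id, [])
        if not domains:
            return 0.0
        return sum(trust_scores.get(d, 100) < cutoff for d in domains) / len(domains)

    holdout_rate = sum(untrusted_rate(h) for h, _ in matched_pairs) / len(matched_pairs)
    adopter_rate = sum(untrusted_rate(a) for _, a in matched_pairs) / len(matched_pairs)
    # Under this metric, a ratio of ~1.69 would correspond to "69% more likely".
    return holdout_rate / adopter_rate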
For example, sites that are known for spreading COVID-19 misinformation, such as Infowars [25], RT [6], and Mercola [31], were much likelier to be clicked on by holdouts.

3 We did not consider as holdouts those who never showed vaccine intent during our study period, since those users may have gotten their vaccine in ways that are not visible via search data. In comparison, individuals who did not show their first vaccine intent until July 2021 likely did not receive the vaccine before.

Figure 7: In all subfigures, news/categories are colored from yellow to dark purple to represent most holdout-leaning to most early adopter-leaning. (a) The lower the trust rating from Newsguard, the likelier it is that vaccine holdouts click on the news site, relative to early adopters. (b) Holdouts' top category concerns include Vaccine Safety, Requirements, and Information, with varying proportions over time. (c) Comparing holdouts vs. early adopters' relative probabilities of clicking on each subcategory (from April to June 2021) reveals each group's distinctive concerns. (d) Near when holdouts express vaccine intent (±3 days) in July and August 2021, their concerns become much more like the concerns of early adopters, with a few important differences. (In panels c and d, the y-axis lists subcategories ordered from most holdout-leaning, e.g., religious concerns, expert anti-vax, high-profile anti-vax, and eerie fears, to most early adopter-leaning, e.g., normal side effects and info about Moderna.)

Ontology of vaccine concerns on search. To characterize vaccine-related search interests in far more detail, we construct a hierarchical ontology of vaccine concerns, defined in terms of 25,000 vaccine-related URLs that were clicked on by early adopters or holdouts. We construct our ontology from the bottom up: first, we seek to automatically partition the URLs into clusters. Leveraging graph ML again, we formulate this as a community detection problem on graphs, and apply the Louvain algorithm [12] to the collapsed URL-URL graph (collapsing the bipartite query-click graph over queries). We find that this approach results in remarkably coherent clusters (Table A3), due to the strength of the signal contained in query-click graphs, and it outperforms standard topic modeling approaches such as LDA [10]. Based on these clusters, we design a comprehensive set of subcategories and top categories, and sort the clusters accordingly. For example, we identify one cluster of news stories announcing vaccine passport requirements in cities, which we sort under the proof of vaccination subcategory and the Vaccine Requirements top category. This bottom-up approach allows us to discover and measure vaccine concerns directly from users' search interests and analyze them at multiple scales, providing complementary insights to more traditional surveys.
In Figure A1, we summarize our resulting ontology, which consists of 8 top categories and 36 subcategories.
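The clustering step just described can be sketched with standard graph tooling. The following is our illustration, not the authors' released code: the networkx helpers used (bipartite weighted projection and louvain_communities, available in networkx 2.8 and later) are real, but the input format of (query, URL, click count) triples and the node tagging are assumptions.

import networkx as nx
from networkx.algorithms import bipartite, community

def cluster_urls(query_click_edges):
    # Build the bipartite query-click graph (queries on one side, URLs on the other).
    B = nx.Graph()
    urls = set()
    for query, url, count in query_click_edges:
        B.add_node(("q", query), bipartite=0)
        B.add_node(("u", url), bipartite=1)
        B.add_edge(("q", query), ("u", url), weight=count)
        urls.add(("u", url))
    # Collapse over queries: two URLs become connected if they share clicked queries.
    url_graph = bipartite.weighted_projected_graph(B, urls)
    # Louvain community detection on the collapsed URL-URL graph.
    clusters = community.louvain_communities(url_graph, weight="weight", seed=0)
    return [{node[1] for node in cluster} for cluster in clusters]

Each returned community would then be a candidate low-level URL cluster to be manually sorted into subcategories and top categories, as described above.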
Some top categoriesencompass a number of distinct subcategories: for example, underVaccine Safety, we include normal side effects, severe side effects,concerns about reproductive health, vaccine history and develop-ment, FDA approval, fear of vaccine-caused deaths, and “eerie” fears(e.g., myths about vaccine shedding or becoming magnetic [ 28]).At the top category-level, we find that vaccine holdouts are, by far,the most concerned about Vaccine Safety, which accounts for 23%of their vaccine-related clicks, followed by Vaccine Information(10%) and Vaccine Requirements (9%). We also observe changesin interests over time (Figure 7b): for example, interest in VaccineIncentives increased in May 2021, and interest in Vaccine Effective-ness grew in June 2021, following the spread of the Delta variant.Distinctive concerns of holdouts vs. early adopters. Our ontologyallows us to compare the vaccine concerns of holdouts and theirmatched early adopters. First, during the period from April to June2021, we find that holdouts were 48% less likely than early adoptersto click on any vaccine-related URL. Furthermore, their distributionof concerns within their vaccine-related clicks differed significantly(Figure 7c). Using the subcategories from our ontology, we findthat holdouts were far more interested in religious concerns aboutthe vaccine; anti-vaccine messages from experts and high-profilefigures; avoiding vaccine requirements by seeking exemptions, ban-ning mandates, or obtaining fake proof of vaccination; eerie fearsand vaccine-caused deaths; and FDA approval and vaccine develop-ment. In comparison, early adopters were much more concerned7epiDAMIK @ KDD’23, August 7 2023, Long Beach, CA S. Chang, A. Fourney, and E. Horvitzabout normal side effects, vaccine efficacy, comparing differenttypes of vaccines, and information about each vaccine (Moderna,Pfizer, and Johnson & Johnson). These differences reveal the impor-tance of a fine-grained ontology; for example, at the top categorylevel, we would see that both groups were interested in VaccineSafety but miss that early adopters were more concerned about nor-mal and severe side effects, while holdouts were more concernedabout eerie fears and vaccine-caused deaths. Our approach alsoallows us to study who is expressing these concerns in greater gran-ularity. Even within holdouts, we observe significant variabilityin concerns across demographic groups (Figure A7). For example,holdouts from more Democrat-leaning ZCTAs were particularlyconcerned about FDA approval and vaccine requirements, whileholdouts from more Republican-leaning ZCTAs were more con-cerned about eerie fears and vaccine incentives.Holdouts appear like early adopters when seeking the vaccine.In our final analysis, we exploit the fact that all of our vaccineholdouts eventually expressed vaccine intent to explore how vac-cine concerns change as an individual converts from holdout toadopter. From July to August 2021, we analyze how holdouts’ vac-cine concerns change in the small window ( ±3days) surroundingtheir expressed vaccine intent, compared to their typical concernsoutside of that window. We find that in those windows, holdouts’vaccine concerns nearly reverse, such that they look much morelike early adopters than their typical selves (Figure 7d nearly re-verses 7c). During this time, holdouts become far more interestedin the Johnson & Johnson vaccine, comparing different vaccines,and vaccine incentives, and less interested in anti-vaccine messagesand vaccine fears. 
Notably, not all early adopter-leaning concernsreverse as dramatically; for example, even while expressing vaccineintent, holdouts remain less interested in the Pfizer and Modernavaccines, which may reflect how vaccine hesitant individuals werequicker to accept the one-shot Johnson & Johnson vaccine, insteadof the two-shot mRNA vaccines [ 21,73]. Furthermore, there aresome early adopter-leaning concerns that holdouts do not pick upon during this time, such as interest in vaccine rates. We hypoth-esize that these concerns are more reflective of an early adopter“persona” rather than of concerns that would become relevant whenseeking the vaccine, such as comparing different vaccines.6 RELATED WORKOur work centers Bing search logs, which have been used to studyother health issues such as shifts in needs and disparities in infor-mation access during the pandemic [ 67,68], health informationneeds in developing nations [ 1], experiences around cancer diag-noses [ 55,56], concerns rising during pregnancy [ 29], and medicalanxieties associated with online search [ 75]. Our efforts build onprior work that extracts insights about the COVID-19 vaccine fromdigital traces, such as social media [ 50,57,58] and aggregated searchtrends [ 7,23,48]. Our work is also related to other efforts to detecthealth conditions online, such as predicting depression from socialmedia [19] and monitoring influenza from search queries [32].Our work seeks to address the challenges of working with digitaltraces [ 24,54] and limitations of prior work [ 32,44] by developingML and human-in-the-loop methods to precisely label search logsand evaluate bias. Furthermore, as one of the first works to use indi-vidual search logs to study the COVID-19 vaccine, we have the rareopportunity to link vaccine outcomes (predicted by our classifier)to the same individual’s search interests. Our graph ML pipeline isalso similar to other “big data” approaches that, due to the scale ofunlabeled data, manually annotate a subset of data, train machinelearning models to accurately predict those labels, then use thosemodels to label the rest of the data [ 17,30,35,47]. We extend thisapproach in several ways, such as by using personalized PageRankto select URLs for more efficient annotation and by setting a strictclassification threshold based on “spies” to ensure high precision.7 DISCUSSIONWe have demonstrated how large-scale search logs and machinelearning can be leveraged for fine-grained, real-time monitoringof vaccine intent rates and identification of individuals’ concernsabout vaccines. There are limitations to our approach: for example,while we can achieve finer granularity than existing data, we stillmiss within-ZCTA heterogeneity in vaccine intent. Furthermore,our efforts to minimize bias in our estimates are substantial butimperfect (e.g., we can only approximate TPRs and FPRs of ourclassifier). We also assume in this work that vaccine intent can bedetected through single queries or clicks, but more sophisticatedmodels could incorporate entire search sessions or browsing databeyond search. However, in favor of simplicity and considerationsof privacy, we label vaccine intent at the query and click-level.Despite these limitations, our resources demonstrate strongagreement with existing data and enable analyses that have not beenavailable before. 
For example, our fine-grained vaccine intent esti-mates can help public health officials to identify under-vaccinatedcommunities, informing where to place vaccine sites or whom toprioritize in online or real-world outreach programs. Furthermore,our novel ontology and analyses of individuals’ vaccine concernsinform how to intervene, guiding messaging strategies for differentholdout populations. Lastly, our observation that holdouts resembleearly adopters when they eventually seek vaccination indicates thatindividuals might follow similar paths towards vaccine acceptance.Future work could model these trajectories, try to identify key in-fluences (e.g., vaccine mandates), and use these models to ideallyallocate limited resources for interventions.To facilitate policy impact and future research, we are releasingour vaccine intent estimates and our ontology of vaccine concerns.We hope that these resources will be useful for conducting detailedanalyses of COVID-19 vaccine behaviors and vaccination rates. Theontology can also be employed widely in web and social mediaresearch; for example, to study how certain classes of URLs (e.g.,eerie fears) are disseminated on social media or surfaced by searchengines. Finally, we note that our graph ML techniques for intentdetection are applicable beyond vaccines, and could be applied toprecisely detect other intents of interest, such as seeking stimuluschecks or COVID-19 tests. More broadly, we hope that our workcan serve as a roadmap for researchers of how to derive rigorousbehavioral and health insights from search logs, including how toprecisely detect user intents and interests, evaluate and correctfor bias, validate against external data, and release resources topromote reproducibility, transparency, and future work.8Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs epiDAMIK @ KDD’23, August 7 2023, Long Beach, CAREFERENCES[1]Rediet Abebe, Shawndra Hill, Jennifer Wortman Vaughan, Peter M. Small, andH. Andrew Schwartz. 2019. Using Search Queries to Understand Health In-formation Needs in Africa. In Proceedings of the Thirteenth International AAAIConference on Web and Social Media (ICWSM ’19) .[2]Yasmeen Abutaleb and Lena H. Sun. 2021. How CDC data problems put theU.S. behind on the delta variant. The Washington Post (2021). https://www.washingtonpost.com/health/2021/08/18/cdc-data-delay-delta-variant/.[3]Alaa Althubaiti. 2016. Information bias in health research: definition, pitfalls, andadjustment methods. Journal of Multidisciplinary Healthcare 9 (2016), 211–217.[4]Emily Anthes, Madeleine Ngo, and Eileen Sullivan. 2021. Adults in all U.S. statesare now eligible for vaccination, hitting Biden’s target. Half have had at leastone dose. The New York Times (2021). https://www.nytimes.com/2021/04/19/world/adults-eligible-covid-vaccine.html.[5]Susan Athey, Kristen Grabarz, Michael Luca, and Nils Wernerfelt. 2023. Digitalpublic health interventions at scale: The impact of social media advertising onbeliefs and outcomes related to COVID vaccines. Proceedings of the NationalAcademy of Science (PNAS) 120, 5 (2023).[6]Julian E. Barnes. 2021. Russian Disinformation Targets Vaccines and the BidenAdministration. The New York Times (2021). https://www.nytimes.com/2021/08/05/us/politics/covid-vaccines-russian-disinformation.html.[7]Shailesh Bavadekar, Adam Boulanger, John Davis, Damien Desfontaines, Ev-geniy Gabrilovich, Krishna Gadepalli, Badih Ghazi, Tague Griffith, Jai Gupta,Chaitanya Kamath, et al .2021. 
Google COVID-19 Vaccination Search Insights:Anonymization Process Description. arXiv (2021).[8]Jessa Bekker and Jesse Davis. 2020. Learning from positive and unlabeled data: asurvey. Machine Learning 109 (2020), 719–760.[9]Alexis Benveniste. 2021. New York City will require vaccines for entry to restau-rants and gyms. CNN Business (2021). https://www.cnn.com/2021/08/03/business/new-york-city-vaccine-requirements/index.html.[10] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent DirichletAllocation. Journal of Machine Learning Research (2003), 993–1022.[11] Matthew Bloch, Larry Buchanan, and Josh Holder. 2021. See Who Has Been Vacci-nated So Far in New York City. The New York Times (2021). https://www.nytimes.com/interactive/2021/03/26/nyregion/nyc-vaccination-rates-map.html.[12] Vincent D. Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefeb-vre. 2008. Fast unfolding of communities in large networks. Journal of StatisticalMechanics: Theory and Experiment (2008).[13] Valerie C. Bradley, Shiro Kuriwaki, Michael Isakov, Dino Sejdinovic, Xiao-Li Meng,and Seth Flaxman. 2021. Unrepresentative big surveys significantly overestimatedUS vaccine uptake. Nature 600 (2021), 695–700.[14] Sergey Brin and Lawrence Page. 1998. The Anatomy of a Large-Scale Hypertex-tual Web Search Engine. Computer Networks and ISDN Systems (1998).[15] United States Census Bureau. 2020. American Community Survey Data. https://www.census.gov/programs-surveys/acs/data.html.[16] United States Census Bureau. 2021. Household Pulse Survey COVID-19 Vac-cination Tracker. https://www.census.gov/library/visualizations/interactive/household-pulse-survey-covid-19-vaccination-tracker.html.[17] Dallas Card, Serina Chang, Chris Becker, Julia Mendelsohn, Rob Voigt, LeahBoustan, Ran Abramitzky, and Dan Jurafsky. 2022. Computational analysis of 140years of US political speeches reveals more positive but increasingly polarizedframing of immigration. Proceedings of the National Academy of Science (PNAS)119, 31 (2022).[18] Wen-Ying Sylvia Chou and Alexandra Budenz. 2020. Considering Emotion inCOVID-19 Vaccine Communication: Addressing Vaccine Hesitancy and FosteringVaccine Confidence. Health Communication 35, 14 (2020), 1718–1722.[19] Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013.Predicting Depression via Social Media. In Proceedings of the 7th InternationalAAAI Conference on Web and Social Media (ICWSM’13) .[20] Nick Craswell and Martin Szummer. 2007. Random walks on the click graph. InProceedings of the 30th annual international ACM SIGIR conference on Researchand development in information retrieval (SIGIR ’07) .[21] Bob Curley. 2021. Why Some People Still Prefer the Johnson & Johnson COVID-19 Vaccine. Healthline (2021). https://www.healthline.com/health-news/why-some-people-still-prefer-the-johnson-johnson-covid-19-vaccine.[22] Hengchen Dai, Silvia Saccardo, Maria A. Han, Lily Roh, Naveen Raja, SitaramVangala, Hardikkumar Modi, Shital Pandya, Michael Sloyan, and Daniel M. Croy-mans. 2021. Behavioural nudges increase COVID-19 vaccinations. Nature 597(2021), 404–409.[23] Parris Diaz, Pritika Reddy, Reshna Ramasahayam, Manish Kuchakulla, and Ran-jith Ramasamy. 2021. COVID-19 vaccine hesitancy linked to increased internetsearch queries for side effects on fertility potential in the initial rollout phasefollowing Emergency Use Authorization. Andrologia 53, 9 (2021).[24] Susan Dumais, Robin Jeffries, Daniel M. Russell, Diane Tang, and Jaime Teevan.2014. 
Understanding User Behavior Through Log Data and Analysis. In Ways ofKnowing in HCI . Springer New York, New York, NY, 349–372.[25] Luis Ferré-Sadurní and Jesse McKinley. 2020. Alex Jones Is Told to Stop SellingSham Anti-Coronavirus Toothpaste. The New York Times (2020). https://www.nytimes.com/2020/03/13/nyregion/alex-jones-coronavirus-cure.html.[26] Centers for Disease Control and Prevention. 2023. COVID-19 Vaccinationsin the United States,County. https://data.cdc.gov/Vaccinations/COVID-19-Vaccinations-in-the-United-States-County/8xkx-amqh.[27] Centers for Disease Control and Prevention. 2023. COVID-19 Vaccinations inthe United States,Jurisdiction. https://data.cdc.gov/Vaccinations/COVID-19-Vaccinations-in-the-United-States-Jurisdi/unsk-b7fc.[28] Centers for Disease Control and Prevention. 2023. Myths and Facts about COVID-19 Vaccines. https://www.cdc.gov/coronavirus/2019-ncov/vaccines/facts.html.[29] Adam Fourney, Ryen W. White, and Eric Horvitz. 2015. Exploring Time-Dependent Concerns about Pregnancy and Childbirth from Search Logs. InProceedings of the 33rd Annual ACM Conference on Human Factors in ComputingSystems (CHI’15) . 737–746.[30] Matt Franchi, J.D. Zamfirescu-Pereira, Wendy Ju, and Emma Pierson. 2023. De-tecting disparities in police deployments using dashcam data. In Proceedingsof the 6th ACM Conference on Fairness, Accountability, and Transparency 2023(FAccT’23) .[31] Sheera Frenkel. 2021. The Most Influential Spreader of Coronavirus Misinforma-tion Online. The New York Times (2021). https://www.nytimes.com/2021/07/24/technology/joseph-mercola-coronavirus-misinformation-online.html.[32] Jeremy Ginsberg, Matthew H. Mohebbi, Rajan S. Patel, Lynnette Brammer, Mark S.Smolinski, and Larry Brilliant. 2009. Detecting influenza epidemics using searchengine query data. Nature 457 (2009), 1012–1014.[33] Alice Goldfarb and Kara W. Schechtman. 2021. State-Level Vaccine DemographicData is Messy and Incomplete—We Need Federal Data, Now. The COVID TrackingProject (2021). https://covidtracking.com/analysis-updates/state-level-vaccine-demographic-data-is-messy-and-incomplete.[34] Google. 2023. Google Trends. https://trends.google.com/trends/?geo=US.[35] Justin Grimmer, Margaret E. Roberts, and Brandon M. Stewart. 2021. MachineLearning for Social Science: An Agnostic Approach. Annual Review of PoliticalScience 24 (2021), 395–419.[36] Rodrigo Jiménez-García, Valentín Hernandez-Barrera, Cristina Rodríguez-Rieiro,Pilar Carrasco Garrido, Ana López de Andres, Isabel Jimenez-Trujillo, María DEsteban-Vasallo, Maria Felicitas Domínguez-Berjón, Javier de Miguel-Diez, andJenaro Astray-Mochales. 2014. Comparison of self-report influenza vaccinationcoverage with data from a population based computerized vaccination registryand factors associated with discordance. Vaccine 32, 35 (2014), 4386–4392.[37] Ashish Joshi, Mahima Kaur, Ritika Kaur, Ashoo Grover, Denis Nash, and AymanEl-Mohandes. 2021. Predictors of COVID-19 Vaccine Acceptance, Intention, andHesitancy: A Scoping Review. Frontiers in Public Health 9 (2021).[38] Berkeley Lovelace Jr. 2021. CDC expands Covid vaccination guidelines to every-one 65 and older. CNBC (2021). https://www.cnbc.com/2021/01/12/covid-vaccine-trump-administration-to-expand-eligibility-to-everyone-65-and-older.html.[39] Thomas N. Kipf and Max Welling. 2017. Semi-Supervised Classification withGraph Convolutional Networks. In Proceedings of the 5th International Conferenceon Learning Representations (ICLR ’17) .[40] Isabel M. Kloumann and Jon M. Kleinberg. 2014. 
Community MembershipIdentification from Small Seed Sets. In Proceedings of the 20th ACM SIGKDDInternational Conference on Knowledge Discovery and Data Mining (KDD’14) .1366–1375.[41] Sarah Kreps, Sandip Prasad, John S. Brownstein, Yulin Hswen, Brian T. Garibaldi,Baobao Zhang, and Douglas L. Kriner. 2020. Factors Associated With US Adults’Likelihood of Accepting COVID-19 Vaccination. JAMA Network Open 3, 10 (2020),e2025594–e2025594.[42] Nancy Krieger, Pamela D Waterman, Jarvis T Chen, Christian Testa, and William PHanage. 2021. Missing again: US racial and ethnic data for COVID-19 vaccination.The Lancet 397, 10281 (2021), 1259–1260.[43] Sharon LaFraniere. 2022. ‘Very Harmful’ Lack of Data Blunts U.S. Response toOutbreaks. The New York Times (2022). https://www.nytimes.com/2022/09/20/us/politics/covid-data-outbreaks.html.[44] David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani. 2014. TheParable of Google Flu: Traps in Big Data Analysis. Science 343 (2014), 1203–1205.[45] Xiao Li, Ye-Yi Wang, and Alex Acero. 2008. Learning query intent from regularizedclick graphs. In Proceedings of the 31st annual international ACM SIGIR conferenceon Research and development in information retrieval (SIGIR ’08) .[46] Jamie Lopez Bernal, Nick Andrews, Charlotte Gower, Eileen Gallagher, RuthSimmons, Simon Thelwall, Julia Stowe, Elise Tessier, Natalie Groves, GavinDabrera, et al .2021. Effectiveness of Covid-19 Vaccines against the B.1.617.2(Delta) Variant. New England Journal of Medicine 385, 7 (2021), 585–594.[47] Ian Lundberg, Jennie E. Brand, and Nanum Jeon. 2022. Researcher reasoningmeets computational capacity: Machine learning for social science. Social ScienceResearch 108 (2022), 102807.[48] Sean Malahy, Mimi Sun, Keith Spangler, Jessica Leibler, Kevin Lane, ShaileshBavadekar, Chaitanya Kamath, Akim Kumok, Yuantong Sun, Jai Gupta, et al .2021. Vaccine Search Patterns Provide Insights into Vaccination Intent. arXiv(2021).9epiDAMIK @ KDD’23, August 7 2023, Long Beach, CA S. Chang, A. Fourney, and E. Horvitz[49] Zakaria Mehrab, Mandy L. Wilson, Serina Chang, Galen Harrison, Bryan Lewis,Alex Telionis, Justin Crow, Dennis Kim, Scott Spillmann, Kate Peters, JureLeskovec, and Madhav Marathe. 2022. Data-Driven Real-Time Strategic Place-ment of Mobile Vaccine Distribution Sites. In Proceedings of the 36th AAAI Con-ference on Artificial Intelligence (IAAI’22) .[50] Goran Muric, Yusong Wu, and Emilio Ferrara. 2021. COVID-19 Vaccine Hesitancyon Social Media: Building a Public Twitter Data Set of Antivaccine Content,Vaccine Misinformation, and Conspiracies. JMIR Public Health and Surveillance7, 11 (2021).[51] Nambi Ndugga, Latoya Hill, Samantha Artiga, and Sweta Haldar. 2021.Latest data on COVID-19 vaccinations by race/ethnicity. Kaiser Fam-ily Found (KFF) (2021). https://covid-19archive.org/files/original/f90f767bdd1cd10911587853d70a6320f29bf9b7.pdf.[52] Newsguard. 2022. Rating Process and Criteria. https://www.newsguardtech.com/ratings/rating-process-criteria/.[53] Dave Leip’s Atlas of U.S. Elections. 2022. Store - Election Data. https://uselectionatlas.org/BOTTOM/store_data.php.[54] Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kıcıman. 2019.Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries. Frontiers inBig Data 2 (2019).[55] Michael J. Paul, Ryen W. White, and Eric Horvitz. 2015. Diagnoses, decisions,and outcomes: Web search as decision support for cancer. In Proceedings of the24th international conference on World Wide Web (WWW’15) .[56] Michael J. Paul, Ryen W. 
White, and Eric Horvitz. 2016. Search and Breast Cancer:On Episodic Shifts of Attention over Life Histories of an Illness. ACM Transactionson the Web 10, 2 (2016).[57] Francesco Pierri, Brea L. Perry, Matthew R. DeVerna, Kai-Cheng Yang, AlessandroFlammini, Filippo Menczer, and John Bryden. 2022. Online misinformation islinked to early COVID-19 vaccination hesitancy and refusal. Scientific Reports 12,5955 (2022).[58] Soham Poddar, Mainack Mondal, Janardan Misra, Niloy Ganguly, and SaptarshiGhosh. 2022. Winds of Change: Impact of COVID-19 on Vaccine-Related Opinionsof Twitter Users. In Proceedings of the 16th International AAAI Conference on Weband Social Media (ICWSM’22) .[59] Fernando P. Polack, Stephen J. Thomas, Nicholas Kitchin, Judith Absalon, Ale-jandra Gurtman, Stephen Lockhart, John L. Perez, Gonzalo Pérez Marc, Edson D.Moreira, Cristiano Zerbini, et al .2020. Safety and Efficacy of the BNT162b2 mRNACovid-19 Vaccine. New England Journal of Medicine 383, 27 (2020), 2603–2615.[60] Nathaniel Rabb, Megan Swindal, David Glick, Jake Bowers, Anna Tomasulo,Zayid Oyelami, Kevin H. Wilson, and David Yokum. 2022. Evidence from astatewide vaccination RCT shows the limits of nudges. Nature 604 (2022), E1–E7.[61] Filip Radlinski, Martin Szummer, and Nick Craswell. 2010. Inferring QueryIntent from Reformulations and Clicks. In Proceedings of the 19th InternationalConference on World Wide Web (WWW’10) .[62] Lydia Saad. 2021. More in U.S. Vaccinated After Delta Surge, FDA Decision.Gallup (2021). https://news.gallup.com/poll/355073/vaccinated-delta-surge-fda-decision.aspx.[63] Michael Siegel, Isabella Critchfield-Jain, Matthew Boykin, Alicia Owens, Re-beckah Muratore, Taiylor Nunn, and Joanne Oh. 2022. Racial/Ethnic Disparitiesin State-Level COVID-19 Vaccination Rates and Their Association with StructuralRacism. Journal of Racial and Ethnic Health Disparities 9, 6 (2022), 2361–2374.[64] Marianna Sotomayor, Jacqueline Alemany, and Mike DeBonis. 2021. Growingnumber of Republicans urge vaccinations amid delta surge. The New YorkTimes (2021). https://www.washingtonpost.com/politics/growing-number-of-republicans-urge-vaccinations-amid-delta-surge/2021/07/20/52a06e9c-e999-11eb-8950-d73b3e93ff7f_story.html.[65] StatCounter. 2023. Desktop Search Engine Market Share United States Of America,Jan - Dec 2021. https://gs.statcounter.com/search-engine-market-share/desktop/united-states-of-america/2021.[66] Seth Stephens-Davidowitz. 2014. The cost of racial animus on a black candidate:Evidence using Google search data. Journal of Public Economics 118 (2014), 26–40.[67] Jina Suh, Eric Horvitz, Ryen W. White, and Tim Althoff. 2021. Population-Scale Study of Human Needs During the COVID-19 Pandemic: Analysis andImplications. In Proceedings of the 14th ACM International Conference on WebSearch and Data Mining (WSDM’21) . 4–12.[68] Jina Suh, Eric Horvitz, Ryen W. White, and Tim Althoff. 2022. Disparate impactson online information access during the Covid-19 pandemic. Nature Communi-cations 13, 7094 (2022).[69] Tom Tapp. 2021. Los Angeles City Council Votes 13-0 To Create VaccinationRequirement For Indoor Public Spaces Such As Restaurants, Movie Theaters,Concert Venues. Deadline (2021). https://deadline.com/2021/08/los-angeles-city-requires-vaccination-vaccine-indoors-1234813086/.[70] Jennifer Tolbert, Kendal Orgera, Rachel Garfield, Jennifer Kates, , andSamantha Artiga. 2021. Vaccination is Local: COVID-19 Vaccination RatesVary by County and Key Characteristics. Kaiser Family Foundation (KFF)(2021). 
https://www.kff.org/coronavirus-covid-19/issue-brief/vaccination-is-local-covid-19-vaccination-rates-vary-by-county-and-key-characteristics/.[71] Gianmarco Troiano and Alessandra Nardi. 2021. Vaccine hesitancy in the era ofCOVID-19. Public Health 194 (2021), 245–251.[72] Raymond John D Vergara, Philip Joseph D Sarmiento, and James Darwin NLagman. 2021. Building public trust: a response to COVID-19 vaccine hesitancypredicament. Journal of Public Health 43, 2 (2021), e291–e292.[73] Noah Weiland. 2021. One and Done: Why People Are Eager for Johnson &Johnson’s Vaccine. The New York Times (2021). https://www.nytimes.com/2021/03/04/health/covid-vaccine-johnson-and-johnson-rollout.html.[74] Rebecca L. Weintraub, Kate Miller, Benjamin Rader, Julie Rosenberg, ShreyasSrinath, Samuel R. Woodbury, Marinanicole D. Schultheiss, Mansi Kansal, SwapnilVispute, Stylianos Serghiou, et al .2023. Identifying COVID-19 Vaccine Desertsand Ways to Reduce Them: A Digital Tool to Support Public Health Decision-Making. American Journal of Public Health 113, 4 (2023), 363–367.[75] Ryen W. White and Eric Horvitz. 2009. Cyberchondria: Studies of the Escalationof Medical Concerns in Web Search. ACM Transactions on Information Systems27, 4 (2009).[76] Farah Yasmin, Hala Najeeb, Abdul Moeed, Unaiza Naeem, Muhammad SohaibAsghar, Najeeb Ullah Chughtai, Zohaib Yousaf, Binyam Tariku Seboka, Irfan Ullah,Chung-Ying Lin, and Amir H. Pakpour. 2021. COVID-19 Vaccine Hesitancy inthe United States: A Systematic Review. Frontiers in Public Health 9 (2021).10Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs epiDAMIK @ KDD’23, August 7 2023, Long Beach, CAAPPENDIXThe Appendix provides additional results and experiments, includ-ing detailed descriptions of our ontology (Figure A1), results fromdeveloping our vaccine intent classifier (Section A1), our decompo-sition and evaluations of bias (Section A2), and additional analysesof vaccine intent trends and vaccine concerns (Section A3).A1 VACCINE INTENT CLASSIFIER:ADDITIONAL RESULTSAnnotation results. As discussed in the main text, in the secondstep of our classification pipeline, we present URLs to annotatorson Amazon Mechanical Turk. We find that a large majority (86%) ofour URL candidates are labeled as vaccine intent, when we requireat least three positive annotations to qualify a URL as vaccine intent.Furthermore, we observe a clear relationship between S-PPR rankand the percentage labeled as vaccine intent, whether we set thethreshold at two or three annotations (Figure A2). For example,when we require three positive annotations, around 90% of URLsfrom ranks 0 to 20 qualify, around 81% of URLs from ranks 40-60qualify, and around 71% of URLs from ranks 80 to 100 qualify. Thus,we find that S-PPR predicts vaccine intent remarkably well, witha high rate among its top URLs and agreement with a decreasingrate as the ranking drops.Details from GNN experiments. In the final step of our classifi-cation pipeline, we train GNNs to learn vaccine intent labels anddiscover new URLs. Since there are not enough URL labels fromAMT for smaller states, we experiment with pre-training the GNNon S-PPR rankings. In practice, before training the model on theURL labels from AMT, we train the model to predict the URLs’ S-PPR rankings that we derived in the first step of our pipeline. 
SinceS-PPR rankings become less meaningful in the long tail of URLs, wefocus on predicting the top K=max(1000,qmax)S-PPR rankings,whereqmaxis the maximum rank (where lower rank correspondsto higher S-PPR score) of the last seed set query.To test the effect of pre-training on S-PPR rankings, we selectsix representative states that vary in graph size and US region. Wefind that pre-training significantly improves performance for thesmaller states. For example, the mean AUC for Wyoming increasesfrom 0.74 to 0.95 (Table A1). Specifically, due to the low numberof URL labels for smaller states, we observe great variance in themodel’s performance if we do not pre-train the model, leading tosome trials that perform well and some that perform poorly (Figure3a). Performance becomes far more stable for smaller states afterwe incorporate the pre-training objective. We find that pre-trainingseems unnecessary for the larger states, such as Connecticut andTennesssee, where we are already achieving high AUCs above 0.98.So, we set a generous cutoff of 5,000,000 nodes (still larger thanthe graph size for Connecticut) and we pre-train all states withfewer than 5,000,000 nodes in our data, of which there are 26. Afterincorporating pre-training for these smaller states, we are able toachieve AUCs above 0.90 for all 50 states and above 0.95 for 45states (Figure 3b).As a supplementary analysis, we can also use AUC to evaluatethe predictive performance of S-PPR alone and GNN-PPR, i.e., theGNN pre-trained on S-PPR rankings before it is also trained on AMTState # nodes AUC w/o pre-train AUC w/ pre-trainWY 752865 0.741 (0.146) 0.951 (0.014)AK 909357 0.796 (0.187) 0.921 (0.074)DE 1269327 0.864 (0.134) 0.968 (0.007)MT 1533071 0.857 (0.139) 0.978 (0.011)CT 4407722 0.987 (0.005) 0.984 (0.008)TN 7712443 0.991 (0.003) 0.990 (0.003)Table A1: Effects of pre-training on S-PPR rankings for sixselected states. We report the mean and standard deviationof AUC on the test set over 10 random trials.labels. Here, we evaluate on allAMT labels, since none of them wereused in constructing S-PPR or GNN-PPR scores. In fact, evaluatingon AMT labels is particularly challenging, since we chose to labelonly the top-ranked URLs according to S-PPR, so we are askingS-PPR to distinguish between URLs that it already considers similar.We conduct this experiment on the 26 smaller states for which wepre-trained our GNNs.First, we find across these states that S-PPR still performs betterthan random, with a mean AUC of 0.569, which complements ourannotation results showing that even within its top-ranked URLs,S-PPR rankings still correlate with true rates of vaccine intentlabels (Figure A2). Second, we find that GNN-PPR consistentlyoutperforms S-PPR by 10-15 points, with a mean AUC of 0.675. Thisis somewhat surprising, since GNN-PPR was only trained to predictS-PPR rankings, without any additional labels. We hypothesizethat GNN-PPR outperforms S-PPR because, unlike S-PPR, the GNNcan incorporate textual information from URLs and queries, inaddition to graph structure. So, while S-PPR incorrectly upweightshigh-traffic URLs such as facebook.com that are often reached onrandom walks starting from the vaccine intent queries, GNN-PPRrecognizes that these URLs do not look like the rest of high-rankingURLs and correctly excludes them. However, in order to achievethis difference between S-PPR and GNN-PPR, it is important not tooverfit on S-PPR. 
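The pre-training procedure is described only in prose, so the following is a minimal sketch of what it could look like. The tiny two-layer GCN, the separate rank and label heads, and all hyperparameters are illustrative assumptions rather than the authors' implementation (the paper's model also uses a character-level CNN over URL and query text, which is omitted here). Pre-training regresses the S-PPR ranks of the top-K URLs and stops early once the predicted and target rankings correlate at roughly 0.8, as described next; the model is then fine-tuned on the AMT vaccine-intent labels.

```python
# Hypothetical sketch (not the authors' code): pre-train a small GCN to regress
# S-PPR ranks of the top-K URLs, early-stopping at correlation ~0.8, then
# fine-tune on the binary vaccine-intent labels from AMT.
import numpy as np
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, hid_dim)
        self.rank_head = nn.Linear(hid_dim, 1)   # used during pre-training
        self.label_head = nn.Linear(hid_dim, 1)  # used during fine-tuning

    def forward(self, a_hat, x):
        # a_hat: symmetrically normalized adjacency of the query-click graph
        h = torch.relu(a_hat @ self.w1(x))
        return torch.relu(a_hat @ self.w2(h))

def pretrain_on_sppr(model, a_hat, x, sppr_rank, top_k_idx, stop_corr=0.8, max_epochs=200):
    """Regress (normalized) S-PPR ranks of the top-K URLs, stopping early once
    predictions and targets correlate at roughly stop_corr."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    target = sppr_rank[top_k_idx].float()
    target = (target - target.mean()) / target.std()
    for _ in range(max_epochs):
        opt.zero_grad()
        pred = model.rank_head(model(a_hat, x)).squeeze(-1)[top_k_idx]
        loss = torch.mean((pred - target) ** 2)
        loss.backward()
        opt.step()
        corr = np.corrcoef(pred.detach().numpy(), target.numpy())[0, 1]
        if corr >= stop_corr:  # early stopping to avoid overfitting to S-PPR
            break
    return model

def finetune_on_labels(model, a_hat, x, train_idx, y, epochs=100):
    """Fine-tune on AMT vaccine-intent labels (y in {0, 1})."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        logits = model.label_head(model(a_hat, x)).squeeze(-1)[train_idx]
        loss = bce(logits, y[train_idx].float())
        loss.backward()
        opt.step()
    return model
```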
So, we employ early stopping during pre-training;that is, we train the GNN on S-PPR rankings until they achieve acorrelation of 0.8 and then we stop pre-training.Our evaluation results demonstrate that our GNNs are able toaccurately predict vaccine intent labels in all 50 states, which isessential as we use our GNNs to discover new vaccine intent URLs.In Table A2, we provide a uniform random sample of the URLsthat our GNNs discovered. The majority of them seem to expressvaccine intent, with several news stories about new vaccine clinicsand information about vaccine appointments. Furthermore, thesupplemental analysis of S-PPR and GNN-PPR shows that due tothe expressive power of the GNN (with character-level CNN) andthe predictive power of S-PPR from a well-designed seed set, wecan achieve decent performance without anylabels at all. Thesemethods, which should be explored more deeply in future work,may be useful in a zero-shot context, allowing lightweight, effectiveprediction before acquiring any labels.11epiDAMIK @ KDD’23, August 7 2023, Long Beach, CA S. Chang, A. Fourney, and E. HorvitzSafetyNormal side effectsSevere side effectsReproductive healthVaccine-caused deathsEerie fearsVaccine developmentFDA approvalExpected side effects: sore arm, shoulder, fever, etcRare but plausible side effects, severe, potentially long-term: blood clots, myocarditis, etcConcerns about fertility, breast feeding, menstruationFear of deaths caused by COVID vaccineEerie and debunked fears: shedding, magnets, microchips, etcHistory of vaccine development, fear of mRNA technology, ingredients in COVID vaccineFDA approval of COVID vaccinesEffectivenessEfficacy from studiesEfficacy against variantsBreakthrough casesNatural immunityHow effective the vaccine is, how long immunity lasts, how long for vaccine to take effectHow well does vaccine work against variants (mostly Delta)Breakthrough COVID cases, symptoms when vaccinatedIs natural immunity better than vaccine, do I still need vaccineTop categorySubcategoryDescriptionRequirementsTravelEmploymentVaccine proofExemptionFake vaccine proofAnti-mandateVaccine requirements to travel:for cruises, other countries, etcEmployer vaccine mandates: healthcare, government, educators, etcRequired proof of vaccination to enter places: restaurants, gyms, concert venues, etcSeeking exemption on vaccine requirements, religious or medicalSeeking fake proof of vaccinationStates banning mandates, lawsuits against employer mandatesIncentivesVaccine incentivesVaccine incentives: lotteries, gift cards, free groceries, giveaways, etcOtherNew / non-US vaccinesNon-COVID vaccinesPet vaccinesOther COVID vaccines: Novavax, Astrazeneca, SinovaxNon-COVID vaccines: flu, MMR, varicella, meningitis, etcVaccines for pets, mostly dogs and catsInformationDecision-makingComparisonModernaPfizerJohnson & JohnsonPost-vax guidelinesPros and cons of COVID vaccine, should I get the vaccine?Comparing Moderna vs Pfizer vs J&J, side effects, efficacyGeneral news on Moderna vaccine, rollout, side effects, efficacyGeneral news on Pfizer vaccine, rollout, side effects, efficacyGeneral news on J&J vaccine, emphasis on blood clots and efficacyGuidelines after vaccination: masking, testing, quarantineSpecial populationsCOVID-19 vaccine for special populations: autoimmune disease, rheumatoid arthritis, etcCommunityVaccine ratesNews on hesitancyHigh-profile anti-vaxReligious concernsVaccine trackers, rates of vaccination over time: by state, by country, etcReporting on vaccine hesitancy and anti-vaxxers, how to talk 
to vaccine hesitantAnti-vaccine messages from high-profile figures: politicians, celebrities, etcReligious concerns about the vaccine, seeking advice from religious leadersExpert anti-vaxAnti-vaccine messages from scientists and doctorsAvailabilityLocationsChildrenBoostersWhere to get COVID vaccine (some missed vaccine intent URLs): CVS, Walgreens, etcAre COVID vaccines for children available / recommendedAre boosters available / recommendedFigure A1: Our ontology of vaccine concerns consists of 8 top categories and 36 subcategories.12Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs epiDAMIK @ KDD’23, August 7 2023, Long Beach, CAURL tmed tprechttps://www.chesco.org/4836/61876/COVID-Authorized-Vax 7 10https://patch.com/new-jersey/princeton/all-information-princeton-area-covid-vaccine-sites 9 10https://dph.georgia.gov/locations/spalding-county-health-department-covid-vaccine 9 10https://www.abc12.com/2021/04/22/whitmer-says-covid-19-vaccine-clinics-like-flint-church-are-key-to-meeting-goals/ 7 10https://www.delta.edu/coronavirus/covid-vaccine.html 10 10https://www.lewistownsentinel.com/news/local-news/2021/01/scheduling-a-virus-vaccine-appointment/ 9 10https://www.laconiadailysun.com/news/local/covid-vaccine-clinics-at-lrgh-franklin-now-open-to-public/article_aa4b67e0-601a-11eb-a889-1bd4e6c83de1.html6 10https://www.insidenova.com/headlines/inside-woodbridges-new-mass-covid-19-vaccination-site-the-lines-keep-moving/article_eca45b88-8db0-11eb-a649-4bbeccd82cc3.html9 10https://www.keloland.com/news/healthbeat/coronavirus/avera-opens-covid-19-vaccine-clinic/ 10 9https://bangordailynews.com/2021/04/06/news/maine-to-kick-off-statewide-mobile-covid-19-vaccine-clinics-in-oxford-next-week-sk6sr8zcdk/8 9https://morgancounty.in.gov/covid-19-vaccinations/ 9 10https://www.firsthealth.org/specialties/more-services/covid-19-vaccine 10 10https://healthonecares.com/covid-19/physician-practices/covid-19-vaccine-information.dot 9 10https://patch.com/florida/stpete/drive-thru-covid-19-vaccine-sites-open-florida 9 10https://vaccinate.iowa.gov/eligibility/ 7 10https://www.baynews9.com/fl/tampa/news/2021/03/17/new-walk-in-vaccine-site-at-tpepin-hospitality-centre-opens-today 10 10https://www.doh.wa.gov/Emergencies/COVID19/VaccineInformation/FrequentlyAskedQuestions 10 10https://www.emissourian.com/covid19/vaccine-registration-open-for-franklin-county/article_3638f7a0-5769-11eb-9bba-3f2611173784.html10 10https://www.fema.gov/press-release/20210223/maryland-open-covid-19-vaccination-center-waldorf-fema-support 10 10https://kingcounty.gov/depts/health/covid-19/vaccine/forms.aspx 10 10Table A2: A random sample ( random_state=0 ) of 20 URLs from GNN. tmedandtprecindicate how often the URL passed themedian cutoff and precision cutoff, respectively, out of the 10 trials.20 30 40 50 60 70 80 90 100Rank n from S-PPR0.700.750.800.850.90Prop. positive from rank n-20 to nt=2t=3Figure A2: Comparison of S-PPR rank vs. proportion of URLsaround that rank that are labeled as vaccine intent. t=3andt=2indicate how many positive annotations were requiredto qualify for vaccine intent.A2 BIAS DECOMPOSITION ANDEVALUATIONSA2.1 Decomposition of biasFor a given individual, let v∈{0,1}indicate whether they actuallyhad vaccine intent (up to a certain time) and ˆv∈{0,1}indicatewhether our classifier labels them as having vaccine intent. Fur-thermore, let rrepresent the individual’s home region, such as theirstate or county. 
We would like to estimate the regional vaccine intent rate, Pr(v | r), but we do not have access to v, only to v̂. To understand how using v̂ in place of v may bias our estimates, let us relate Pr(v̂ | r) to Pr(v | r). First, we introduce another variable b, which represents whether the individual is a Bing user. Note that v̂ = 1 implies that b = 1, since our classifier can only identify vaccine intent from users who appear in Bing search logs. With these variables, we have

Pr(v̂ = 1 | r) = Pr(b = 1 | r) · [ Pr(v = 1 | r) · Pr(v̂ = 1 | b = 1, v = 1, r) + Pr(v = 0 | r) · Pr(v̂ = 1 | b = 1, v = 0, r) ],   (1)

where the first factor is the Bing coverage of r and the two conditional probabilities are the classifier's TPR and FPR for r, respectively. Pr(b = 1 | r) represents the probability that an individual from region r is a Bing user, i.e., the Bing coverage of r. Incorporating b, v, and r into Pr(v̂ | b, v, r) reflects all of the factors that affect whether the classifier predicts vaccine intent. As discussed, if the user is not a Bing user (b = 0), then the probability is 0, so we only consider the b = 1 case. If v = 1, predicting v̂ = 1 would be a true positive; if v = 0, it would be a false positive. Conditioning v̂ on region r reflects the possibility that individuals from different regions may express vaccine intent differently and the classifier may be more prone to true or false positives for different regions. Finally, we make the assumption here that b ⊥ v | r; that is, conditioned on the individual's region, being a Bing user and having vaccine intent are independent. This misses potential within-region heterogeneity, but to mitigate this in practice, we use ZCTAs as our regions, which are relatively fine-grained.
Based on this decomposition, we can see that if Bing coverage, TPR, and FPR are uniform across regions, then Pr(v̂ | r) will simply be a linear function of Pr(v | r). Unfortunately, we know that Bing coverage is not uniform. However, we observe b = 1 and can assign users to regions, so we can estimate Bing coverage per region and correct by inverse coverage. Thus, our estimate corresponds to a coverage-corrected predicted vaccine intent rate, p̃(v, r) = Pr(v̂ = 1 | r) / Pr(b = 1 | r). If we refer to the true vaccine intent rate as p(v, r), then we can see that p̃(v, r) is a linear function of p(v, r) when TPR and FPR are uniform:

Pr(v̂ = 1 | r) / Pr(b = 1 | r) = Pr(v = 1 | r) · TPR + (1 - Pr(v = 1 | r)) · FPR,   (2)
p̃(v, r) = FPR + (TPR - FPR) · p(v, r).

Furthermore, if FPR is low, then p̃(v, r) is approximately proportional to p(v, r). Thus, our first two strategies for addressing bias in our estimates are:
(1) Estimate Bing coverage per region and weight by inverse coverage, which we discussed in Section 4,
(2) Evaluate whether our classifier has similar TPRs and FPRs across regions and whether FPRs are close to 0, which we discuss below.
These efforts are our first two lines of defense against bias. After this, we furthermore compare our results to established data sources, such as the CDC's reported vaccination rates and Google search trends, where we find strong correlations for both.
A2.2 Evaluating bias in vaccine intent classifier
Our primary source of bias is uneven Bing coverage, which we found can vary by more than 2x across ZCTAs. However, after correcting for Bing coverage, we also want to know that our classifier does not significantly contribute to additional bias. To do this, we must establish that our classifier's TPRs and FPRs do not vary significantly or systematically across regions. The challenge is that we cannot perfectly evaluate these rates, because we do not know all true positives or true negatives.
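As a quick illustration of Eq. (1)-(2), the snippet below uses made-up coverage and vaccine intent rates (the TPR and FPR values roughly match those reported later in this section) to confirm that, when TPR and FPR are uniform, the coverage-corrected estimate p̃(v, r) is linear in, and for small FPR nearly proportional to, the true rate p(v, r).

```python
# Illustrative numbers only: verify that the coverage-corrected estimate
# p_tilde = Pr(v_hat=1 | r) / Pr(b=1 | r) is linear in the true rate p(v, r)
# when TPR and FPR are uniform across regions (Eq. 1-2).
import numpy as np

tpr, fpr = 0.5, 0.01                      # uniform classifier rates (assumed)
p_true = np.array([0.05, 0.10, 0.20])     # hypothetical true vaccine intent rates
coverage = np.array([0.15, 0.25, 0.35])   # hypothetical Bing coverage per region

# Eq. (1): probability that the classifier flags a random resident of region r
p_hat = coverage * (p_true * tpr + (1 - p_true) * fpr)

# Coverage-corrected estimate and its linear form (Eq. 2)
p_tilde = p_hat / coverage
assert np.allclose(p_tilde, fpr + (tpr - fpr) * p_true)
print(p_tilde)  # nearly proportional to p_true because fpr is small
```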
However, we can approximate these metrics based on the labeled URLs that we do have and furthermore make methodological decisions that encourage similar performance across groups.
Evaluating bias in generating URL candidates. Recall that in the first step of our pipeline, we generate URL candidates for annotation by propagating labels from vaccine intent queries to unlabeled URLs via personalized PageRank on query-click graphs. Since all URL candidates then go through manual inspection in the second step, we do not have to worry about the false positive rate at this stage. However, we do need to worry about the true positive rate (i.e., recall). For example, if we only kept COVID-19 vaccine registration pages for pharmacies that are predominantly in certain regions, then we could be significantly likelier to detect true vaccine intent for certain states over others. So, through the design and evaluation of our label propagation techniques, we aim to ensure representativeness in vaccine intent across the US.
The most important design decision is that we construct query-click graphs per state, then we run S-PPR per graph and take the union over states of top URLs as our set of URL candidates. Running this process separately for each state allows us to capture how vaccine intent varies regionally, with state-specific programs and websites for scheduling the vaccine (Table 1). To demonstrate the risks of not using a state-specific approach, we try an alternative approach where we construct a joint graph that combines the queries and clicks for 6 states (the same 6 states as those used in the pre-training experiments of Table A1).
To represent our union approach, we take the union over these 6 states of the top 200 URLs per state, which results in 935 URLs. We compare this to a joint approach, where we take the top 935 URLs from running S-PPR on the joint graph. To evaluate each approach, we compute the proportion of each state's top N URLs that are kept across different values of N. While we cannot be sure that every URL in the state's top N is truly vaccine intent, from our annotation results, we saw high positive rates for top-ranking URLs (Figure A2), so we would like to see similar recall at these ranks.
[Figure A3: Comparing our union-over-states (left) to a combined-graph approach (right) for generating URL candidates.]
By design, our union-over-states approach ensures equivalent, 100% recall up to N=200 for all states (Figure A3, left). In comparison, we find that the joint approach yields different recalls as early as N=30, with much higher recall for large states than small states (Figure A3, right). For example, it keeps less than 80% of Wyoming's URLs around rank 50 and less than 60% around rank 100, while keeping 100% of Tennessee's throughout. Furthermore, even past N=200, where our union-over-states approach no longer has guarantees, we find that it still achieves far more similar recalls between states than the joint approach. Thus, our design decisions enable similar recalls between states, which helps to reduce downstream model bias.
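A minimal sketch of the union-over-states candidate generation is shown below. The graph construction, node naming, and PageRank parameters are simplifying assumptions on our part rather than the production pipeline, but the structure mirrors the description above: personalized PageRank seeded on the vaccine-intent queries, run on each state's query-click graph, with the union of each state's top-N URLs taken as the candidate set.

```python
# Minimal sketch (assumed data structures, not the production pipeline):
# run personalized PageRank from the vaccine-intent seed queries on each
# state's query-click graph, then take the union of top-N URLs over states.
import networkx as nx

def top_urls_for_state(query_click_edges, seed_queries, n=200):
    """query_click_edges: iterable of (query, url, weight) tuples for one state."""
    g = nx.Graph()
    for query, url, w in query_click_edges:
        g.add_edge(("q", query), ("u", url), weight=w)
    # restart distribution concentrated on the seed (vaccine intent) queries
    personalization = {("q", q): 1.0 for q in seed_queries if ("q", q) in g}
    scores = nx.pagerank(g, alpha=0.85, personalization=personalization, weight="weight")
    urls = [(node[1], s) for node, s in scores.items() if node[0] == "u"]
    urls.sort(key=lambda t: -t[1])
    return [u for u, _ in urls[:n]]

def candidate_urls(edges_by_state, seed_queries, n=200):
    """Union over states of each state's top-n S-PPR URLs."""
    candidates = set()
    for state, edges in edges_by_state.items():
        candidates |= set(top_urls_for_state(edges, seed_queries, n=n))
    return candidates
```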
We also cast a wide net when constructing query-clickgraphs (taking all queries and clicks that co-occur in a sessionwith any query that includes a COVID-19 or vaccine-related word),which may also improve recall and reduce bias, in case our choiceof initial keywords was not representative of all vaccine intentsearches across the US.Evaluating bias in URL expansion from GNN.. In the third step ofour pipeline, we use GNNs to expand our set of vaccine intent URLsbeyond the manually labeled ones. We would like to see that theperformance of GNNs is similarly strong across states, to ensurethat the GNN is not creating additional bias when expanding theURL set. We discussed in Section A1 that, after incorporating pre-training on S-PPR rankings for smaller states, GNNs could achieveAUCs above 0.90 for all 50 states. The main metrics of interest14Accurate Measures of Vaccination and Concerns of Vaccine Holdouts from Web Search Logs epiDAMIK @ KDD’23, August 7 2023, Long Beach, CAwhen considering bias, however, are TPRs and FPRs. Unlike AUC,which is evaluated across decision thresholds, TPR and FPR dependon the chosen threshold tabove which data points are predictedto be positive. In our setting, we set t=max(tmed,tprec), sincewe required new vaccine intent URLs to score above these twothresholds (in at least 6 out of 10 trials): (1) tmed, the median scoreof positive URLs in the test set and (2) tprec, the minimum thresholdrequired to achieve precision of at least 0.9 on the test set. Then,we estimate TPR as the proportion of positive URLs in the test setthat score above tand FPR as the proportion of negative URLs inthe test set that score above t.We find that TPR is highly similar across states and hoversaround 0.5 for all states (Figure 3b, middle). This is because inalmost all cases, tmedis the higher of the two thresholds and thusthe value of t, so the true positive rate lands around 0.5 since tmedis the median score of the true positives. FPR is also highly similaracross states and very low (around 0.01; Figure 3b, right), whichsuggests that the quantity we estimate, ̃p(v,r), is not only a linearfunction of the true vaccine intent rate, p(v,r), but also approxi-mately proportional to it (Eq. 2). The low FPR is encouraged butnot guaranteed by our second threshold, tprec. This threshold en-sures that precision is over 0.9, which is equivalent to the falsepositive rate among the predicted positives being below 0.1, whichtypically corresponds to low false positive rates over all true neg-atives (which is what FPR measures). The GNN’s similar AUCs,TPRs, and FPRs across states, as well as the equivalent recalls inour label propagation stage, increase confidence that our classifieris not adding significant bias to our estimates.A2.3 Comparison to Google search trendsFollowing prior work using Bing data [ 68], we compare Bing andGoogle queries to evaluate the representativeness of Bing data.Search trends over time. First, we compare daily search interestin the US over our studied time period from February 1 to August31, 2021. Google Trends provides normalized search interest overtime on Google, such that 100 represents the peak popularity forthat time period, 50 means the term is half as popular, and 0 means“there was not enough data for this term.” To match this, for a givenquery, we compute the total number of times it was searched onBing in the US per day, then we divide by the maximum numberand multiply by 100. Again, we apply 1-week smoothing to boththe Bing and Google time series. 
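The normalization used for this comparison can be written in a few lines. The sketch below assumes daily query-count series indexed by date and is only meant to make the procedure concrete: a 7-day rolling mean for smoothing, rescaling so the peak equals 100, and a Pearson correlation between the two series (Google Trends values are already normalized, so only smoothing is applied to them).

```python
# Sketch of the trend comparison: 7-day smoothing of both series,
# Google-Trends-style rescaling of the Bing counts so the peak is 100,
# then the Pearson correlation between the two. Inputs are assumed to be
# pandas Series indexed by date.
import pandas as pd

def google_trends_style(daily_counts: pd.Series) -> pd.Series:
    smoothed = daily_counts.rolling(window=7, min_periods=1).mean()  # 1-week smoothing
    return 100 * smoothed / smoothed.max()                           # peak = 100

def trend_correlation(bing_counts: pd.Series, google_interest: pd.Series) -> float:
    bing_norm = google_trends_style(bing_counts)
    google_smooth = google_interest.rolling(window=7, min_periods=1).mean()
    aligned = pd.concat([bing_norm, google_smooth], axis=1, join="inner")
    return aligned.corr(method="pearson").iloc[0, 1]
```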
We do not correct the Bing timeseries with Bing coverage here, since we cannot correct the Googletime series with Google coverage, and we want the time series tobe constructed as similarly as possible.We evaluate 30 of the most common vaccine intent queries, in-cluding [cvs covid vaccine] and [covid vaccine finder].4We observestrong Pearson correlations, with a median correlation of r=0.95(90% CI, 0.88-0.99) (Figure A4a). These correlations are similar tothose reported by Suh et al . [68] , who conduct an analogous lon-gitudinal analysis comparing Bing and Google search trends onCOVID-related queries and report correlations from r=0.86to4We identify 30 representative vaccine intent queries from the top 100 vaccine intentqueries, where we choose one standard query for each pharmacy that appears (e.g.,[cvs covid vaccine]) and one for each location-seeking query (e.g., [covid vaccine nearme]), and drop variants such as [cvs covid vaccines] and [covid 19 vaccine near me].0.98. Remaining discrepancies between Bing and Google are likelydue to differences in the populations using these search engines, aswell as potential unreported details on how Google normalizes theirsearch interest trends (e.g., Google may be normalizing differentlyfor [covid vaccine near me], which shows unusual peaks in Googletrends and is the the only query for which we do not observe astrong correlation).Search trends across states. Google also provides normalizedsearch interest across US states, where search interest is defined asthe fraction of searches from that state that match the query andsearch interest is normalized across regions such that 100 representsmaximum popularity. To imitate this process, we first assign eachvaccine intent query to a state based on where the query originated.Then, we approximate the total number of queries (all queries, notjust vaccine intent) from each state by summing over the querycounts of the active users assigned to each state. We compute thefraction of queries from each state that match the query, then wedivide by the maximum fraction and multiply by 100 to normalizeacross states.We observe strong Pearson correlations in this analysis too, witha median correlation of r=0.95(90% CI, 0.57-0.99) across the same30 vaccine intent queries (Figure A4b). The correlations tend to bestronger on the pharmacy-specific queries, where certain regionsdominate, compared to general location-seeking queries such as[covid vaccine near me], which are trickier since they follow lessobvious geographical patterns. For the pharmacy-specific queries,we also observe substantial heterogeneity in terms of which regiondominates. For example, [publix covid vaccine] is more popular insouthern states, with Florida exhibiting the maximum normalizedsearch interest on Google (100), followed by Georgia (26) and SouthCarolina (20). Meanwhile, [cvs covid vaccine] is more popular in theNortheast, with the top states being Massachusetts (100), New Jersey(96), Rhode Island (90), and Connecticut (65). These differences,reflected in the Bing search trends too, once again highlight the needfor regional awareness and representativeness when developingour vaccine intent classifier.A3 ADDITIONAL ANALYSESState-level demographic trends in vaccine intent. To investigatemore granular demographic trends, we measure correlations perstate (only including the ZCTAs in the state) for the 10 largeststates in the US. 
For this finer-grained analysis, we drop percent Republican, since we only have vote share at the county level, but we keep all other demographic variables, which we have per ZCTA. We find that correlations are mostly consistent in sign across states, but the magnitude differs significantly (Figure A5). For example, the positive correlation with percent 65 and over is around 2x as high in Florida as it is in the second highest states, reflecting the large senior population in Florida and the push for seniors to get vaccinated. In most states, we also see positive correlations for percent Asian and percent White, and negative correlations for percent Black and percent Hispanic, aligning with prior research on racial and ethnic disparities in COVID-19 vaccination rates [51, 63]. Positive and negative correlations for race are particularly strong in certain states, including New York and Florida for percent White/Black, and California and New York for percent Hispanic.
[Figure A4: Comparing search trends on Google vs. Bing for 30 of the most common vaccine intent queries. (a) US search trends over time. (b) Search trends across states.]
[Figure A5: Correlations between ZCTA vaccine intent rate and demographic variables, for the 10 largest US states. Error bars indicate 95% CIs.]
Changes in demographic trends over time.
To evaluate changes in demographic trends over time, we separate ZCTAs into top and bottom quartiles, e.g., based on ZCTA median income, and compute each quartile's daily proportion of users showing their first vaccine intent. Then, computing the ratio of the top quartile's over the bottom quartile's time series reveals changes in demographic trends over time. For example, we estimate that older ZCTAs were much likelier to seek the vaccine early in 2021 but this trend fell over time (Figure A6a), reflecting how the US vaccine rollout first prioritized seniors then expanded to general eligibility [4, 38]. We also see an increase in vaccine intent from more Republican ZCTAs in summer 2021 (Figure A6b), reflecting new calls from Republican leaders to get vaccinated [64] and a self-reported uptick in vaccinations among Republicans [62].
Examples of URL clusters. To construct our ontology of vaccine concerns, we begin by automatically partitioning URLs into clusters, using the Louvain community detection algorithm [12] on the collapsed URL-URL graph. We find that our automatic approach produces remarkably coherent clusters, with each cluster covering a distinct topic. The cluster annotations are provided in the ontology that we release, with URLs mapped to 156 unique clusters. We provide a sample of the clusters in Table A3, listing each cluster's most frequently clicked URLs and top query, which we obtain by summing over all queries that led to clicks on URLs in the cluster. From the top query and URLs, we observe distinct topics covered in each cluster: one on CDC masking guidelines after vaccination, one on the Vaccine Adverse Event Reporting System (VAERS), one about religious exemptions for COVID-19 vaccine requirements, and one about side effects of the Johnson & Johnson vaccine.
Holdout concerns across demographic groups. We conduct an additional analysis of variation in holdout concerns across demographic groups. For a given demographic variable, we compute its median value across all ZCTAs, split holdouts into those from ZCTAs above the median versus those from ZCTAs below the median, then compare the vaccine concerns of those two groups of holdouts (by measuring their click ratios). We find significant variability across demographic groups in terms of holdout concerns (Figure A7). Compared to holdouts from more Republican-leaning ZCTAs, holdouts from more Democrat-leaning ZCTAs were far more interested in requirements around employee mandates and vaccine proof, which may be because jurisdictions run by Democrats were likelier to have vaccine requirements [9, 69] while several Republican governors in fact banned such requirements. Meanwhile, holdouts from more Republican-leaning ZCTAs were more interested in eerie vaccine fears, fears of vaccine-caused deaths, and vaccine incentives. We also find that, compared to holdouts from lower-income ZCTAs, holdouts from higher-income ZCTAs were significantly more interested in vaccine requirements, vaccine rates, and anti-vaccine messages from experts and high-profile figures, while holdouts from lower-income ZCTAs were more interested in vaccine incentives and religious concerns about the vaccine.
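A sketch of this median-split comparison is below. The DataFrame layout (one row per holdout click, with the ZCTA, the ZCTA's value of the demographic variable, and the concern subcategory of the clicked URL) is an assumption for illustration, and the bootstrapped confidence intervals reported in Figure A7 are omitted.

```python
# Sketch of the median-split click-ratio comparison (column names are assumed):
# `clicks` has one row per holdout click, with columns "zcta", the demographic
# variable (e.g., "pct_republican"), and the concern "subcategory" of the URL.
import pandas as pd

def concern_click_ratios(clicks: pd.DataFrame, demo_col: str) -> pd.Series:
    median_val = clicks.drop_duplicates("zcta")[demo_col].median()
    above = clicks[clicks[demo_col] > median_val]
    below = clicks[clicks[demo_col] <= median_val]
    # probability of clicking each subcategory, within vaccine-related clicks
    p_above = above["subcategory"].value_counts(normalize=True)
    p_below = below["subcategory"].value_counts(normalize=True)
    # ratios > 1 indicate subcategories favored by the above-median group;
    # subcategories missing from one group yield NaN and would need smoothing.
    return (p_above / p_below).sort_values(ascending=False)
```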
[Figure A6: We quantify changes over time in demographic trends by estimating average vaccine intent rates per quartile over time (top) and computing their percent difference (bottom). (a) Top and bottom quartiles for percent 65 and over. (b) Top and bottom quartiles of percent Republican.]
Cluster 1 (206 URLs; top query [cdc mask guidelines]):
- https://www.cbsnews.com/news/cdc-mask-guidelines-covid-vaccine (8.0%)
- https://www.cdc.gov/media/releases/2021/p0308-vaccinated-guidelines.html (6.9%)
- https://www.usatoday.com/story/news/health/2021/05/13/covid-vaccine-cdc-variant-fda-clots-world-health-organization/5066504001 (4.5%)
- https://www.nytimes.com/2021/05/13/us/cdc-mask-guidelines-vaccinated.html (4.4%)
Cluster 2 (139 URLs; top query [vaers database covid-19]):
- https://www.cdc.gov/vaccinesafety/ensuringsafety/monitoring/vaers/index.html (17.0%)
- https://rightsfreedoms.wordpress.com/2021/07/22/vaers-whistleblower-45000-dead-from-covid-19-vaccines-within-3-days-of-vaccination-sparks-lawsuit-against-federal-government (6.8%)
- https://www.theburningplatform.com/2021/07/03/latest-cdc-vaers-data-show-reported-injuries-surpass-400000-following-covid-vaccines (5.7%)
- https://vaersanalysis.info/2021/08/20/vaers-summary-for-covid-19-vaccines-through-8-13-2021 (4.9%)
Cluster 3 (137 URLs; top query [religious exemption for covid-19 vaccination]):
- https://www.verywellfamily.com/religious-exemptions-to-vaccines-2633702 (16.5%)
- https://www.fisherphillips.com/news-insights/religious-objections-to-mandated-covid-19-vaccines-considerations-for-employers.html (5.1%)
- https://www.law360.com/articles/1312230/employers-should-plan-for-vaccine-religious-exemptions (3.9%)
- https://www.kxly.com/who-qualifies-for-a-religious-exemption-from-the-covid-19-vaccine (3.3%)
Cluster 4 (113 URLs; top query [johnson and johnson side effects]):
- https://www.openaccessgovernment.org/side-effects-johnson-johnson-vaccine/109505 (20.3%)
- https://www.healthline.com/health/vaccinations/immunization-complications (8.1%)
- https://www.msn.com/en-us/health/medical/these-are-the-side-effects-from-the-johnson-and-johnson-covid-19-vaccine/ar-bb1f03fq (4.3%)
- https://www.healthline.com/health-news/mild-vs-severe-side-effects-from-the-johnson-and-johnson-covid-19-vaccine-what-to-know (4.3%)
Table A3: The 4 highest-modularity clusters with at least 100 URLs. For each cluster, we provide its number of URLs, its most frequent query, its top 4 URLs (by click frequency), and the percentage of clicks over all clicks on URLs in the cluster that the URL accounts for.
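For concreteness, here is one way the clustering step could be implemented. How the bipartite query-URL click graph is collapsed into a URL-URL graph is an assumption on our part (here: edge weight equal to the number of shared queries), and nx.community.louvain_communities requires NetworkX 2.8 or later; the annotation of clusters into the ontology remains manual, as described above.

```python
# Sketch (assumptions noted above): collapse the query-URL click graph into a
# URL-URL graph weighted by shared queries, then run Louvain community
# detection and keep clusters with at least `min_size` URLs.
from collections import defaultdict
from itertools import combinations
import networkx as nx

def louvain_url_clusters(query_to_urls, min_size=100, seed=0):
    """query_to_urls: dict mapping each query to the set of URLs clicked from it."""
    weights = defaultdict(int)
    for urls in query_to_urls.values():
        for u, v in combinations(sorted(urls), 2):
            weights[(u, v)] += 1  # one shared query adds 1 to the edge weight
    g = nx.Graph()
    for (u, v), w in weights.items():
        g.add_edge(u, v, weight=w)
    communities = nx.community.louvain_communities(g, weight="weight", seed=seed)
    return [c for c in communities if len(c) >= min_size]
```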
[Figure A7: Variability in holdout concerns across demographic groups. For each demographic variable (e.g., percent Republican), we compare the concerns of holdouts from ZCTAs above the variable's median versus holdouts from ZCTAs below the median. Subcategories are ordered from most holdout-leaning to most early adopter-leaning, following Figure 7c. Error bars indicate bootstrapped 95% CIs.]
PhAOtEHLo1 | Consistent Comparison of Symptom-based Methods forCOVID-19 Infection Detection (Extended Abstract)Jesús Rufino1, Juan Marcos Ramírez1, Jose Aguilar1, Carlos Baquero2, Jaya Champati1, DavideFrey3, Rosa Elvira Lillo-Rodríguez4, Antonio Fernández Anta11IMDEA Networks Institute, Madrid, Spain,2Universidade do Minho, Braga, Portugal3INRIA, Rennes, France,4Universidad Carlos III, Madrid, SpainABSTRACTDuring the global pandemic crisis, several COVID-19 diagnosismethods based on survey information have been proposed withthe purpose of providing medical staff with quick detection toolsthat allow them to efficiently plan the limited healthcare resources.In general, these methods have been developed to detect COVID-19-positive cases from a particular combination of self-reportedsymptoms. In addition, these methods have been evaluated usingdatasets extracted from different studies with different characteris-tics. On the other hand, the University of Maryland, in partnershipwith Facebook, launched the Global COVID-19 Trends and ImpactSurvey (UMD-CTIS), the largest health surveillance tool to datethat has collected information from 114 countries/territories fromApril 2020 to June 2022. This survey collected information on vari-ous individual features including gender, age groups, self-reportedsymptoms, isolation measures, and mental health status, amongothers. In this paper, we compare the performance of differentCOVID-19 diagnosis methods using the information collected byUMD-CTIS, for the years 2020 and 2021, in six countries: Brazil,Canada, Israel, Japan, Turkey, and South Africa. The evaluation ofthese methods with homogeneous data across countries and yearsprovides a solid and consistent comparison among them.KEYWORDSCOVID-19 diagnosis, F1-score, light gradient boosting machine,logistic regression, rule-based methods.1 INTRODUCTIONIn December 2019, the coronavirus disease 2019 (COVID-19) emergedin China caused by the severe acute respiratory syndrome coron-avirus 2 (SARS-CoV-2) [17]. Within a few months, this disease ledto a global pandemic crisis that has challenged national health-care systems [ 6]. More precisely, by June 2023, the cumulativenumber of confirmed cases worldwide exceeded 688 million, andofficially over 6,800,000 people have died from COVID-19; https://www.worldometers.info/coronavirus/. In this context, the plan-ning of the healthcare resources (e.g., the estimation of the numberof hospital beds or intensive care units needed for COVID-19 pa-tients) has been determined by the availability of quick and efficientinstruments for the diagnosis of active cases.Thereverse transcriptase-polymerase chain reaction (RT-PCR) testhas been considered the standard tool to detect infected people [ 5].However, real-time disease monitoring based on the RT-PCR test de-mands material and human resources that are not always available.To overcome these limitations, various diagnosis methods based onsurvey information have been proposed that combine multiple indi-vidual features (age, gender, symptoms, demographic data, etc.) tocharacterize COVID-19-infected people [ 1–4,9–12,14–16,18,19].Specifically, most of these methods propose simple rules or buildmachine learning models that evaluate a set of individual attributesto determine a COVID-19-positive case. 
However, a consistent com-parison framework that evaluates the performance yielded by thedifferent methods is missing since the generated models and thecorresponding conclusions are assessed using different datasetsthat are heterogeneous in size and type.On the other hand, in April 2020, the University of MarylandGlobal COVID-19 Trends and Impact Survey (UMD-CTIS), in part-nership with Facebook, launched the largest global health surveil-lance platform to date [ 8]. More precisely, this project stored theresponses provided by a subset of Facebook invited users aboutdifferent topics related to the COVID-19 pandemic such as the pres-ence of symptoms, RT-PCR outcomes, and vaccination acceptance,among others. This data collection instrument was available in 56languages and it recorded tens of millions of responses from 114countries or territories worldwide.In this paper, we conduct a consistent comparison of differentmethods that detect COVID-19-positive cases from a combinationof features collected from surveys. To this end, we take into accountthe information included in the UMD-CTIS records extracted fromsix countries: Brazil, Canada, Israel, Japan, Turkey, and South Africa.For each country, the models are trained using a randomly selectedsubset of tested individuals who reported at least one symptom.Furthermore, we compare the performance for two years: 2020and 2021, which represent two different periods of the pandemicwithout and with vaccination, respectively. We compare the de-tection methods using four performance metrics: F1-score, sensi-tivity, specificity, and precision (only F1-score is presented in thisextended abstract). Overall, the detection methods exhibiting thebest performances across different groups and metrics are Mika[10] (F1-score: 59.33%),Astley [3] (F1-score: 59.22%),Smith [16](F1-score: 59.22%),Bhattacharya [4] (F1-score: 58.69%),Roland[12] (F1-score: 58.20%),Shoer [15] (F1-score: 58.15%),Menni_1 [9](F1-score: 57.03%), and Menni_2 [9] (F1-score: 56.94%).2 MATERIALS AND METHODS2.1 UMD-CTIS SurveyWe perform a consistent comparative study of various COVID-19active case detection methods from data provided by the UMD-CTISsurvey. More precisely, since April 23, 2020, Facebook worldwideusers were invited to participate in the UMD-CTIS survey. Userswho accepted the invitation were moved to a web survey platform,where potential participants must report age > 18 and consent ofdata use before responding to the survey. The survey instrumentconsists of a web-based questionnaire collecting information onJesús Rufino, Juan Marcos Ramírez, Jose Aguilar, Carlos Baquero, Jaya Champati, Davide Frey, Rosa Elvira Lillo-Rodríguez, Antonio Fernández Antagender, age groups, symptoms, COVID testing, isolation, and vac-cination, among others. Furthermore, the survey instrument wascontinuously updated to aggregate new items. Finally, UMD orga-nized and stored daily microdata that were further processed todevelop our comparative study.2.2 Comparative study designIn this work, we compare the performance of various COVID-19 de-tection methods using the information provided by UMD-CTIS dataextracted from six countries: Brazil, Canada, Israel, Japan, Turkey,and South Africa. These countries are selected based on geographi-cal diversity and the large amount of available data. In addition, thiscomparative study is performed for two non-overlapped periods:(2020) from April 23 to December 31, 2020, and (2021) from January1 to December 31, 2021. 
Notice that the end of 2020 matches the start of the first COVID-19 vaccination campaigns. Therefore, we can compare the performance of the detection methods without and with information on vaccination. Table 1 summarizes the characteristics of the study population for the various countries and for the two periods under test.
For every country and period, we build a dataset by picking the answers reporting lab test results in the last 14 days (the survey does not collect the test type) and at least one potential COVID-19 symptom, i.e., this comparative study selects the tested and symptomatic cases. We select symptomatic cases because feature-based predictive methods typically aim at finding the combination of symptoms that detects infected people. In addition, we choose the tested individuals with the aim of obtaining the ground-truth sample set that allows us to evaluate the performance of the different methods quantitatively. Since questionnaires contain categorical data, we apply binary encoding (dummy coding) to each response. This leads to datasets with 201 features (attributes, columns, or variables) for 2020, and the datasets have between 431 and 452 columns for 2021 depending on the selected country. For each dataset, this study evaluates the performance of the various COVID-19 active case detection methods. To this end, our study divided every dataset into 100 partitions. For each trial, 80% of the dataset rows (questionnaires or samples) were randomly selected as training samples, and the remaining 20% were used to test the various methods.
2.3 Detection methods under comparison
In this work, we compare the performance of various COVID-19 diagnosis methods belonging to three categories:
(1) Rule-based methods: CDC [1], WHO [18], Akinbami [2], Solomon [14], Perez [11].
(2) Logistic regression techniques: Menni [9], Roland [12], Smith [16], Shoer [15], Bhattacharya [4], Mika [10].
(3) Tree-based machine-learning models: Zoabi [19], Astley [3].
We have implemented two versions of the Menni method and two versions of the Zoabi method. Note that UMD-CTIS data did not register whether the respondent skipped meals. Therefore, we modified the Menni method by fixing the skipped-meals variable to zero (Menni_1). Furthermore, we followed the procedure reported in [9] to build the logistic regression model from individual features available in our dataset (Menni_2). In other words, we built a regression model that considers the features: age, gender, loss of smell and taste, cough, and fatigue. In the case of the Zoabi method, notice that the UMD-CTIS age ranges do not have a boundary at 60. The boundary is either at 55 or 65. We have created two different models, one for ages greater than 55 years (Zoabi_55) and the other for ages greater than 65 years (Zoabi_65). Further information regarding the methods under test can be found in the corresponding references and in the full version of the article [13].
2.4 Benchmarking detection methods
First, we use the F1-score to quantitatively assess the performance of the various detection methods. To this end, our procedure first obtains the predictions over the test set for each trial. From the predicted estimates and the ground-truth data, the procedure identifies the number of true positives TP, false positives FP, true negatives TN, and false negatives FN. Then, the F1-score is obtained as follows:

F1 = 2TP / (2TP + FP + FN)
(1)Tables 2 and 3 display the ensemble average and the CI of theF1-score for the five countries and for 2020 and 2021, respectively.Specifically, each value in these tables is obtained by averaging100 realizations of the corresponding experiment. Tables with thesensitivity, specificity, and precision values obtained are includedin the full version of the article [13].3 RESULTSAs can be seen in Table 1, 83,238respondents from Brazil reported atest outcome and at least one symptom in 2020. In this cohort, 44,963participants reported a positive test result, and 38,275respondentshad a negative test outcome. Table 1 also includes the test positiverate (TPR) where TPR=(100×positive)/(Tested symptomatic ).For example, the TPR for Brazil 2020 is 54.02%. On the other hand,for Brazil 2021, the dataset was extracted from 262,683participantswho reported at least one symptom and the outcome of a test donein the last 14 days. In this case, 106,471respondents reported apositive test result, and 156,212questionnaires informed a negativetest outcome with a TPR of 40.53%. In summary, the number oftested symptomatic, the number of positive cases, and the numberof negative results for the remaining countries in 2020 and 2021are displayed in Table 1. Additionally, Table 1 shows informationabout other individual features such as gender and age groups.Table 2 shows the ensemble averages with the corresponding 95%confidence intervals (CI) of the F1score yielded by the various detec-tion methods for the different countries and for 2020. In particular,the methods the best F1scores for each country are: Brazil ( Astley :73.72%), Canada ( Menni_1 :54.33%), Israel ( Bhattacharya :62.78%),Japan ( Menni_1 :46.33%), Turkey ( Bhattacharya :67.67%), andSouth Africa ( Roland :67.32%). The F1score in %and the CIsobtained for 2021 are displayed in Table 3. For 2021, the best F1scores are: Brazil ( Menni_2 :66.54%), Canada ( Smith :50.28%), Is-rael ( Bhattacharya :58.76%), Japan ( Mika :52.41%), Turkey ( Bha-ttacharya :64.61%), and South Africa ( Menni_2 :66.50%). As ob-served in Tables 2 and 3, none of the methods achieved an F1scoreof74%or above, indicating that no model is very good. According toTable 1, Brazil, Turkey, and South Africa exhibit TPR values at leasttwofold higher than those obtained from Canada, Israel, and Japan.Consistent Comparison of Symptom-based Methods for COVID-19 Infection Detection (Extended Abstract)Table 1: Characteristics of the study population for the various countries and for two non-overlapped periods (2020 and 2021).CharacteristicBrazil Canada Israel Japan Turkey South Africa2020 2021 2020 2021 2020 2021 2020 2021 2020 2021 2020 20211. Tested symptomatic, N 83238 262683 8927 33997 5944 19063 4698 41010 15952 28896 7883 230382. Test outcome(a) Positive, N 44963 106471 838 3433 1238 2869 532 4011 6167 9228 2866 8459(b) Negative, N 38275 156212 8089 30564 4706 16194 4166 36999 9785 19668 5017 14579(c) TPR, % 54.02 40.53 9.39 10.10 20.83 15.05 11.32 9.78 38.66 31.94 36.35 36.713. Gender(a) Female, N 45357 130235 5438 19472 2941 9290 1679 14283 3939 7185 3923 11291(b) Male, N 24928 76689 2315 9824 2199 6746 2388 20791 8920 15292 2525 67304. 
Age groups(a) 18-24, N 8270 27474 1136 3248 583 1498 179 871 1716 2267 739 1580(b) 25-34, N 19596 56227 2337 7172 1144 3069 577 3797 4375 5756 2252 4889(c) 35-44, N 21061 57452 1750 6688 1041 3333 997 7527 4043 7110 1801 4721(d) 45-54, N 13776 39122 1210 5215 933 3115 1216 10413 2071 4594 1141 3878(e) 55-64, N 6968 22190 954 4478 880 2634 828 8724 862 2400 491 2124(f) 65-74, N 140 6016 308 2421 510 1957 479 3529 158 719 1667 799(g) 75+, N 233 1025 126 825 143 627 66 846 21 134 27 230Table 2: F1score and its 95%confidence interval for the selected countries for 2020, in %.Method Brazil Canada Israel Japan Turkey South AfricaMenni_1 65.56 (65.48 - 65.64) 54.33 (53.66 - 54.99) 59.76 (59.16 - 60.36) 46.33 (45.33 - 47.33) 63.93 (63.68 - 64.17) 61.39 (61.07 - 61.70)Menni_2 71.13 (71.01 - 71.24) 49.33(48.77 - 49.88) 57.50 (57.04 - 57.97) 39.91 (39.27 - 40.54) 67.41 (67.21 - 67.60) 66.36 (66.10 - 66.62)Roland 69.38 (69.30 - 69.46) 51.44 (50.86 - 52.02) 61.93 (61.46 - 62.41) 40.68 (39.98 - 41.39) 67.06 (66.87 - 67.26) 67.32 (67.05 - 67.58)Smith 71.11 (71.05 - 71.18) 53.43 (52.85 - 54.01) 62.47 (61.98 - 62.97) 45.12 (44.42 - 45.82) 67.30 (67.11 - 67.49) 62.06 (61.80 - 62.32)Zoabi_55 70.71 (70.65 - 70.77) 32.96 (32.37 - 33.54) 47.76 (47.32 - 48.20) 29.95 (29.29 - 30.60) 57.86 (57.69 - 58.03) 59.05 (58.80 - 59.31)Zoabi_65 70.73 (70.67 - 70.79) 32.86 (32.28 - 33.44) 47.79 (47.36 - 48.23) 29.91 (29.27 - 30.55) 57.72 (57.55 - 57.88) 59.00 (58.74 - 59.25)CDC 73.42 (73.36 - 73.48) 23.43 (23.14 - 23.72) 45.84 (45.46 - 46.21) 27.38 (27.00 - 27.75) 62.60 (62.42 - 62.78) 62.13 (61.88 - 62.39)Shoer 70.45 (70.39 - 70.52) 50.95 (50.37 - 51.54) 62.41 (61.93 - 62.89) 44.57 (43.86 - 45.28) 67.49 (67.30 - 67.69) 66.76 (66.52 - 67.00)Bhattacharya 69.77 (69.70 - 69.83) 51.90 (51.31 - 52.50) 62.78 (62.30 - 63.26) 39.41 (38.84 - 39.97) 67.67 (67.48 - 67.87) 66.81 (66.52 - 67.10)WHO 23.92 (23.83 - 24.01) 24.08 (23.45 - 24.70) 24.69 (24.15 - 25.24) 27.29 (26.52 - 28.06) 25.14 (24.90 - 25.38) 30.97 (30.59 - 31.35)Perez 59.47 (59.39 - 59.55) 45.20 (44.56 - 45.83) 52.27 (51.71 - 52.82) 32.93 (32.23 - 33.64) 58.12 (57.89 - 58.35) 61.00 (60.70 - 61.30)Mika 69.43 (69.37 - 69.49) 51.43 (50.86 - 52.01) 62.16 (61.68 - 62.63) 45.29 (44.65 - 45.94) 67.08 (66.89 - 67.28) 66.40 (66.13 - 66.68)Akinbami_1 12.85 (12.77 - 12.94) 11.33 (10.72 - 11.93) 10.22 (9.82 - 10.62) 13.38 (12.58 - 14.18) 11.48 (11.26 - 11.70) 17.70 (17.34 - 18.07)Akinbami_2 14.69 (14.60 - 14.78) 9.41 (8.89 - 9.92) 9.59 (9.16 - 10.01) 13.16 (12.35 - 13.98) 10.81 (10.60 - 11.03) 17.14 (16.80 - 17.49)Akinbami_3 27.84 (27.73 - 27.94) 20.23 (19.66 - 20.81) 21.67 (21.14- 22.19) 18.98 (18.22 - 19.73) 26.31 (26.05 - 26.56) 28.93 (28.57 - 29.29)Salomon 30.97 (30.87 - 31.07) 25.52 (24.84 - 26.20) 27.12 (26.58 - 27.66) 30.64 (29.93 - 31.35) 28.36 (28.10 - 28.61) 39.35 (38.98 - 39.72)Astley 73.72 (73.65 - 73.78) 48.29 (47.58 - 49.00) 62.47 (61.98 - 62.97) 44.13 (43.32 - 44.93) 67.45 (67.24 - 67.65) 66.85 (66.61 - 67.09)Since the F1score is highly affected by imbalanced classes [ 7], wecomputed the averages of the F1score yielded by the detectionmethods for three groups: the broad set of the six countries, theset of countries with high TPR (Brazil, Turkey, and South Africa)and low TPR (Canada, Israel, and Japan) for 2020, 2021, and theentire interval 2020-2021 (Table 4). For 2020, when there was novaccination yet, the most efficient method was Astley (Average:60.49%). 
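For reference, the per-trial computation behind these averaged scores (Section 2.4) can be sketched in a few lines of Python; the random labels below merely stand in for one method's predictions on a single 80/20 partition and are purely illustrative, not the code used in this study.

```python
import numpy as np

def trial_metrics(y_true, y_pred):
    """Confusion-matrix metrics for one 80/20 test split (binary labels)."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    f1 = 2 * tp / (2 * tp + fp + fn)            # Eq. (1)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    return f1, sensitivity, specificity, precision

# Ensemble average over the 100 random partitions (labels here are synthetic).
rng = np.random.default_rng(0)
trials = [trial_metrics(rng.integers(0, 2, 200), rng.integers(0, 2, 200))
          for _ in range(100)]
mean_f1 = float(np.mean([t[0] for t in trials]))
```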
In the Astley method, the most relevant are cough, stuffyor runny nose, aches or muscle pain, headache, sore throat, andfever. In 2021, when vaccination began, Mika was the most effectivemethod (Average: 58.35%). In the Mika method, fever, cough, lossof taste and smell, and gastrointestinal problems are consideredfor COVID-19 detection. In the full article [ 13], we compared thevarious detection methods in terms of sensitivity, specificity, andprecision.4 CONCLUSIONSIn this work, we conduct a comparison of various COVID-19 diagno-sis methods based on survey information using datasets extractedfrom the global UMD-CTIS survey. More precisely, we comparethe different methods for six countries and two periods (with andwithout vaccines) using the F1score as a performance metric. Fromthese results, we highlight the techniques showing the best F1score.It is important to mention that, as can be seen in Tables 2 and 3,none of the methods achieve an F1score above 75%indicating thatno model has a superior performance.Additional results and a more extended discussion can be foundin the full version of the article [13].5 ETHICAL DECLARATIONThe Ethics Board (IRB) of IMDEA Networks Institute gave ethi-cal approval for this work on 2021/07/05. IMDEA Networks hassigned Data Use Agreements with Facebook and the Universityof Maryland (UMD) to access their data, specifically, UMD project1587016-3 entitled C-SPEC: Symptom Survey: COVID-19 entitledILI Community-Surveillance Study. The data used in this study wascollected by the University of Maryland through The University ofMaryland Social Data Science Center Global COVID-19 Trends andImpact Survey in partnership with Facebook. Informed consent hasbeen obtained from all participants in this survey by this institution.All the methods in this study have been carried out in accordancewith relevant ethics and privacy guidelines and regulations.6 AVAILABILITY OF DATA AND MATERIALSThe data presented in this paper (in aggregated form) and theprograms used to process it will be openly accessible at https://github.com/GCGImdea/coronasurveys/. 
The microdata of the CTISsurvey from which the aggregated data was obtained cannot beshared, as per the Data Use Agreements signed with Facebook andthe University of Maryland (UMD).7 FUNDING/SUPPORTThis work was partially supported by grants COMODIN-CM andPredCov-CM, funded by Comunidad de Madrid and the EuropeanUnion through the European Regional Development Fund (ERDF),and grants TED2021-131264B-I00 (SocialProbing) and PID2019-104901RB-I00, funded by Ministry of Science and Innovation - StateJesús Rufino, Juan Marcos Ramírez, Jose Aguilar, Carlos Baquero, Jaya Champati, Davide Frey, Rosa Elvira Lillo-Rodríguez, Antonio Fernández AntaTable 3: F1score and its 95%confidence interval for the selected countries for 2021, in %Method Brazil Canada Israel Japan Turkey South AfricaMenni_1 59.24 (59.18 - 59.31) 49.38 (49.02- 49.74) 57.31 (56.96 - 57.65) 49.24 (49.16 - 49.83) 59.65 (59.44 - 59.87) 58.28 (58.06 - 58.50)Menni_2 66.54 (66.49 - 66.59) 39.82 (39.59 - 40.05) 53.46 (53.21 - 53.70) 42.60 (42.37 - 42.84) 62.71 (62.56 - 62.85) 66.50 (66.33 - 66.68)Roland 65.76 (65.71 - 65.82) 46.28 (46.03 - 46.53) 57.16 (56.86 - 57.46) 42.82 (42.62 - 43.03) 64.13 (63.96 - 64.31) 64.41 (64.23 - 64.59)Smith 63.37 (63.32 - 63.42) 50.28 (49.99 - 50.57) 58.00 (57.68 - 58.33) 51.48 (51.23 -51.74) 64.38 (64.21 - 64.55) 61.62 (61.45 - 61.80)Zoabi_55 59.83 (59.79 - 59.88) 37.31 (37.01 - 37.60) 39.63 (39.28 - 39.98) 33.71 (33.45 - 33.98) 52.14 (51.88 - 52.40) 59.62 (59.47 - 59.77)Zoabi_65 59.78 (59.74 - 59.83) 37.10 (36.81 - 37.39) 39.64 (39.29 - 39.99) 33.36 (33.11 - 33.62) 52.06 (51.80 - 52.31) 59.54 (59.38 - 59.69)CDC 63.22 (63.17 - 63.26) 27.41 (27.28 - 27.55) 38.78 (38.59 - 38.97) 28.54 (28.40 - 28.68) 55.96 (55.81 - 56.11) 61.25 (61.10 - 61.39)Shoer 65.81 (65.76 - 65.87) 41.10 (40.84 - 41.36) 53.67 (53.37 - 53.97) 45.42 (45.07 - 45.78) 64.18 (64.01 - 64.35) 64.97 (64.80 - 65.15)Bhattacharya 64.16 (64.11 - 64.22) 49.22 (48.96 - 49.49) 58.76 (58.48 - 59.03) 45.82 (45.59 - 46.05) 64.61 (64.44 - 64.78) 63.40 (63.22 - 63.59)WHO 23.62 (23.56 - 23.68) 26.01 (25.66 - 26.35) 27.92 (27.59 - 28.24) 34.05 (33.74 - 34.37) 27.72 (27.49 - 27.94) 32.78 (32.58 - 32.98)Perez 54.85 (54.79 - 54.90) 44.70 (44.40 - 45.00) 51.27 (50.93 - 51.61) 39.72 (39.45 - 40.00) 56.03 (55.86 - 56.21) 59.17 (58.98 - 59.35)Mika 65.33 (65.28 - 65.38) 46.76 (46.40 - 47.12) 57.50 (57.22 - 57.79) 52.41 (51.73 - 53.09) 64.13 (63.96 - 64.31) 63.98 (63.81 - 64.15)Akinbami_1 12.02 (11.96 - 12.07) 11.43 (11.17 - 11.70) 10.60 (10.33 - 10.88) 11.11 (10.82 - 11.39) 13.86 (13.69 - 14.03) 15.86 (15.66 - 16.06)Akinbami_2 12.02 (12.05 - 12.16) 8.03 (7.79 - 8.27) 11.48 (11.20 - 11.75) 9.10 (8.83 - 9.31) 11.80 (11.64 - 11.96) 13.61 (13.44 - 13.79)Akinbami_3 26.59 (26.00 - 26.11) 20.96 (20.64 - 21.27) 21.96 21.62 - 22.30) 19.90 (19.63 - 20.17) 26.35 (26.12 - 26.58) 28.08 (27.85 - 28.31)Salomon 30.15 (30.11 - 30.24) 28.06 (27.70 - 28.43) 30.72 (30.39 - 31.05) 37.27 (36.97 - 37.57) 31.31 (31.09 - 31.53) 38.03 (37.83 - 38.23)Astley 65.95 (65.90 - 66.01) 45.07 (44.74 - 45.40) 58.62 (58.29 - 58.94) 50.39 (50.08 - 50.70) 63.67 (63.50 - 63.85) 64.06 (63.88 - 64.24)Table 4: Average F1score (in %) for three country groups: theoverall six countries (overall), the countries with high TPR(High TPR: Brazil, Turkey, and South Africa), and the coun-tries with low TPR (Low TPR: Canada, Israel, and Japan) for2020, 2021, 2020-2021.2020 2021 2020-2021Method OverallLow TPROverallLow HighOverallLow HighTPR TPR TPR TPR TPR TPRMenni_1 58.55 53.47 63.63 55.52 51.98 59.06 57.03 52.73 
61.34Menni_2 58.61 48.91 68.30 55.27 45.29 65.25 56.94 47.10 66.78Roland 59.64 51.35 67.92 56.76 48.75 64.77 58.20 50.05 66.34Smith 60.25 53.67 66.82 58.19 53.25 63.12 59.22 53.46 64.97Zoabi_55 49.72 36.89 62.54 47.04 36.88 57.20 48.38 36.89 59.87Zoabi_65 49.67 36.85 62.48 46.91 36.70 57.13 48.29 36.78 59.81CDC 49.13 32.22 66.05 45.86 31.58 60.14 47.50 31.90 63.10Shoer 60.44 52.64 68.23 55.86 46.73 64.99 58.15 49.69 66.61Bhattacharya 59.72 51.36 68.08 57.66 51.27 64.06 58.69 51.32 66.07WHO 26.02 25.35 26.68 28.68 29.33 28.04 27.35 27.34 27.36Perez 51.50 43.47 59.53 50.96 45.23 56.68 51.23 44.35 58.11Mika 60.30 52.96 67.64 58.35 52.22 64.48 59.33 52.59 66.06Akinbami_1 12.83 11.64 14.01 12.48 11.05 13.91 12.65 11.35 13.96Akinbami_2 12.47 10.72 14.21 11.02 9.54 12.51 11.75 10.13 13.36Akinbami_3 23.99 20.29 27.69 23.97 20.94 27.01 23.98 20.62 27.35Salomon 30.33 27.76 32.89 32.59 32.02 33.16 31.46 29.89 33.03Astley 60.49 51.63 69.34 57.96 51.36 64.56 59.22 51.50 66.95Research Agency, Spain MCIN/AEI/10.13039/ 501100011033 andthe European Union “NextGenerationEU”/PRTR.REFERENCES[1]2020. Coronavirus Disease 2019 (COVID-19) 2020 Interim Case Definition, Ap-proved April 5, 2020. National Notifiable Diseases Surveillance System (NNDSS)(Aug. 2020). arXiv:https://ndc.services.cdc.gov/case-definitions/coronavirus-disease-2019-2020/[2]Lara J Akinbami, Lyle R Petersen, Samira Sami, Nga Vuong, Susan L Lukacs,Lisa Mackey, Jenny Atas, and Bonnie J LaFleur. 2021. Coronavirus Disease 2019Symptoms and Severe Acute Respiratory Syndrome Coronavirus 2 AntibodyPositivity in a Large Survey of First Responders and Healthcare Personnel, May-July 2020. Clinical infectious diseases : an official publication of the InfectiousDiseases Society of America 73, 3 (August 2021), e822–e825. https://doi.org/10.1093/cid/ciab080[3]Christina Astley, Gaurav Tuli, Kimberly Mc Cord, Emily Cohn, Benjamin Rader,Tanner Varrelman, Samantha Chiu, Xiaoyi Deng, et al .2021. Global monitoringof the impact of the COVID-19 pandemic through online surveys sampled fromthe Facebook user base. Proceedings of the National Academy of Sciences 118, 51(2021).[4]Aakashneel Bhattacharya, Piyush Ranjan, Arvind Kumar, Megha Brijwal, Ravin-dra M Pandey, Niranjan Mahishi, Upendra Baitha, Shivam Pandey, Ankit Mittal,and Naveet Wig. 2021. Development and Validation of a Clinical Symptom-based Scoring System for Diagnostic Evaluation of COVID-19 Patients Present-ing to Outpatient Department in a Pandemic Situation. Cureus 13, 3 (2021).https://doi.org/10.7759/cureus.13681[5]Matthew P Cheng, Jesse Papenburg, Michaël Desjardins, Sanjat Kanjilal, CarolineQuach, Michael Libman, Sabine Dittrich, and Cedric P Yansouni. 2020. Diagnostictesting for severe acute respiratory syndrome–related coronavirus 2: a narrativereview. Annals of internal medicine 172, 11 (2020), 726–734.[6]Ezekiel J. Emanuel, Govind Persad, Ross Upshur, Beatriz Thome, Michael Parker,Aaron Glickman, Cathy Zhang, Connor Boyle, Maxwell Smith, and James P.Phillips. 2020. Fair Allocation of Scarce Medical Resources in the Time of COVID-19.New England Journal of Medicine 382, 21 (2020), 2049–2055. https://doi.org/10.1056/NEJMsb2005114[7]Haibo He and Yunqian Ma. 2013. Imbalanced Learning: Foundations, Algorithms,and Applications (1st ed.). 
Wiley-IEEE Press.[8]Frauke Kreuter, Neta Barkay, Alyssa Bilinski, Adrianne Bradford, Samantha Chiu,Roee Eliat, Junchuan Fan, Tal Galili, Daniel Haimovich, Brian Kim, et al .2020.Partnering with Facebook on a university-based rapid turn-around global survey.Survey Research Methods: SRM 14, 2 (2020), 159–163. https://doi.org/10.18148/srm/2020.v14i2.7761[9]Cristina Menni, Ana M Valdes, Maxim B Freidin, Carole H Sudre, Long H Nguyen,David A Drew, Sajaysurya Ganesh, Thomas Varsavsky, M Jorge Cardoso, JuliaS El-Sayed Moustafa, et al .2020. Real-time tracking of self-reported symptomsto predict potential COVID-19. Nature medicine 26, 7 (2020), 1037–1040. https://doi.org/10.1038/s41591-020-0916-2[10] Justyna Mika, Joanna Tobiasz, Joanna Zyla, Anna Papiez, Małgorzata Bach, Alek-sandra Werner, Michał Kozielski, Mateusz Kania, Aleksandra Gruca, DamianPiotrowski, et al .2021. Symptom-based early-stage differentiation betweenSARS-CoV-2 versus other respiratory tract infections—Upper Silesia pilot study.Scientific reports 11, 1 (2021), 1–13. https://doi.org/10.1038/s41598-021-93046-6[11] Beatriz Pérez-Gómez, Roberto Pastor-Barriuso, Mayte Pérez-Olmeda, Miguel AHernán, Jesús Oteo-Iglesias, Nerea Fernández de Larrea, Aurora Fernández-García, Mariano Martín, Pablo Fernández-Navarro, Israel Cruz, et al .2021. ENE-COVID nationwide serosurvey served to characterize asymptomatic infectionsand to develop a symptom-based risk score to predict COVID-19. Journal ofclinical epidemiology (2021). https://doi.org/10.1016/j.jclinepi.2021.06.005[12] Lauren T Roland, Jose G Gurrola, Patricia A Loftus, Steven W Cheung, and Jolie LChang. 2020. Smell and taste symptom-based predictive model for COVID-19diagnosis. In International Forum of Allergy & Rhinology , Vol. 10. Wiley OnlineLibrary, 832–838. https://doi.org/10.1002/alr.22602[13] Jesús Rufino, Juan Marcos Ramírez, Jose Aguilar, Carlos Baquero, Jaya Champati,Davide Frey, Rosa Elvira Lillo, and Antonio Fernández-Anta. 2023. Consistentcomparison of symptom-based methods for COVID-19 infection detection. Inter-national Journal of Medical Informatics (2023), 105133.[14] Joshua Salomon, Alex Reinhart, Alyssa Bilinski, Eu Jing Chua, Wichada La Motte-Kerr, Minttu Rönn, Marissa Reitsma, Katherine Morris, et al .2021. The US COVID-19 Trends and Impact Survey: Continuous real-time measurement of COVID-19symptoms, risks, protective behaviors, testing, and vaccination. Proceedings ofthe National Academy of Sciences 118, 51 (2021).[15] Saar Shoer, Tal Karady, Ayya Keshet, Smadar Shilo, Hagai Rossman, Amir Gavrieli,Tomer Meir, Amit Lavon, Dmitry Kolobkov, Iris Kalka, et al .2020. Who shouldwe test for COVID-19? A triage model built from national symptom surveys.Medrxiv (2020). https://doi.org/10.1101/2020.05.18.20105569[16] David S Smith, Elizabeth A Richey, and Wendy L Brunetto. 2020. A symptom-based rule for diagnosis of COVID-19. SN Comprehensive Clinical Medicine 2, 11(2020), 1947–1954. https://doi.org/10.1007/s42399-020-00603-7[17] Roman Wölfel, Victor M Corman, Wolfgang Guggemos, Michael Seilmaier, SabineZange, Marcel A Müller, Daniela Niemeyer, Terry C Jones, Patrick Vollmar,Camilla Rothe, et al .2020. Virological assessment of hospitalized patients withCOVID-2019. Nature 581, 7809 (2020), 465–469.[18] World Health Organization. 2020. Coronavirus disease (COVID-19) Q&A. https://www.who.int/news-room/q-a-detail/coronavirus-disease-covid-19. Accessed:2021-06-02.[19] Yazeed Zoabi, Shira Deri-Rozov, and Noam Shomron. 2021. 
Machine learning-based prediction of COVID-19 diagnosis based on symptoms. npj Digital Medicine 4, 1 (2021), 1–5. https://doi.org/10.1038/s41746-020-00372-6 |
u9zVZTg_Ky | Physics-informed neural networks integrating compartmentalmodel for analyzing COVID-19 transmission dynamicsXiao NingState Key Laboratory ofBioelectronics, School of BiologicalScience and Medical Engineering,Southeast UniversityNanjing, P.R. Chinaningxiao@seu.edu.cnYongyue WeiPublic Health and EpidemicPreparedness and Response Center,Peking UniversityBeijing, P.R. Chinaywei@pku.edu.cnFeng Chen∗Center for Global Health,Departments of Epidemiology andBiostatistics, Nanjing MedicalUniversityNanjing, P.R. Chinafengchen@njmu.edu.cnABSTRACTModelling and predicting the behaviour of infectious diseases isessential for early warning and evaluating the most effective in-terventions to prevent significant harm. Compartmental modelsproduce a system of ordinary differential equations (ODEs) that arerenowned for simulating the transmission dynamics of infectiousdiseases. However, the parameters in compartmental models areoften unknown, and they can even change over time in the realworld, making them difficult to determine. This paper proposes anadvanced artificial intelligence approach based on physics-informedneural networks (PINNs) to estimate time-varying parameters fromgiven data for the compartmental model. Our proposed PINNsapproach captures the complex dynamics of COVID-19 by integrat-ing a modified Susceptible-Exposed-Infectious-Recovered-Death(SEIRD) compartmental model with deep neural networks. Theexperimental findings on synthesized data have demonstrated thatour method robustly and accurately learns the dynamics and fore-casts future states. Moreover, as more data becomes available, ourproposed PINNs approach can be successfully extended to otherregions and infectious diseases.CCS CONCEPTS•Computer systems organization →Embedded systems ;Re-dundancy ; Robotics; •Networks→Network reliability.KEYWORDSCompartmental models, COVID-19 transmission, Physics-informedneural networks, Forward-inverse problemACM Reference Format:Xiao Ning, Yongyue Wei, and Feng Chen. 2023. Physics-informed neuralnetworks integrating compartmental model for analyzing COVID-19 trans-mission dynamics. In Proceedings of Make sure to enter the correct conference∗corresponding authorPermission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than ACMmust be honored. Abstracting with credit is permitted. To copy otherwise, or republish,to post on servers or to redistribute to lists, requires prior specific permission and/or afee. Request permissions from permissions@acm.org.Conference acronym ’XX, June 03–05, 2023, Woodstock, NY©2023 Association for Computing Machinery.ACM ISBN 978-1-4503-XXXX-X/18/06. . . $15.00https://doi.org/XXXXXXX.XXXXXXXtitle from your rights confirmation emai (Conference acronym ’XX). ACM,New York, NY, USA, 8 pages. https://doi.org/XXXXXXX.XXXXXXX1 INTRODUCTIONThe emergence of severe acute respiratory syndrome coronavirus 2(SARS-CoV-2) has presented an unprecedented and complex publichealth challenge, with emerging and re-emerging infectious dis-eases posing a significant threat. 
Compartmental models, governedby a nonlinear system of ordinary differential equations (ODEs),simulate multi-state population transitions by incorporating do-main knowledge and mathematical assumptions to characterize thetransmission dynamics of infectious diseases. These models are apowerful tool for detecting, understanding, and combating infec-tious disease outbreaks and have been widely used to evaluate theimpact of various public health interventions during the COVID-19pandemic [ 24]. However, since real-world data can be inherentlystochastic, noisy, and even inaccessible, model optimization andmethodological innovation are urgently needed to handle imperfectdata and provide early warning of major public health emergencies.Modeling and predicting the behavior of infectious diseases iscrucial for early warning and evaluating effective interventionsto mitigate damage. The first compartmental model, Susceptible-Infectious-Removed (SIR), was proposed by Kermack and McK-endrick to study the epidemics of the Black Death in London andthe plague in Mumbai [ 12]. Compartmental models allow the addi-tion of compartments or transmission parameters to explore andestimate the impact of different assumptions regarding interven-tions. These parameters, included in the compartmental model,determine the transmission progress between different disease sta-tuses and can generate essential characteristics of an epidemic [ 2].Finding the best-fit parameters from the system, given availabledata, is an inverse problem. Several numerical methods have beendeveloped to infer constant model parameters from available data.These methods convert the inverse problem into an optimizationproblem and formulate an estimator by minimizing an objectivefunction. However, since various non-pharmaceutical interventions(NPIs) are employed during the evolution of COVID-19, some modelparameters are time-varying.Identifying time-varying parameters in compartmental mod-els is a complex inverse problem, making it challenging to accu-rately model outbreak dynamics [ 1,10]. Recent advances in Physics-informed machine learning have shown promise in COVID-19 trans-mission modelling by incorporating prior knowledge into deepneural networks to enhance their accuracy and robustness [ 11]. ForConference acronym ’XX, June 03–05, 2023, Woodstock, NY Trovato and Tobin, et al.example, Kharazmi et al. used PINNs to identify time-dependentparameters and data-driven fractional differential operators in sev-eral epidemiological models [ 13]. Long et al. proposed a variantof PINNs to fit daily reported cases and identify time-varying pa-rameters in the susceptible-infectious-recovered-deceased modelfor the spread of COVID-19 [ 15]. Nascimento et al. introduced anapproach that combines physics-informed and data-driven kernelsto reduce the gap between predictions and observations [ 17]. Caiet al. employed fractional physics-informed neural networks torefine the classical susceptible–exposed–infected–removed (SEIR)model, infer time-dependent parameters, and identify unobserveddynamics of the fractional SEIR model [ 3]. However, most of theseapproaches only consider the transmission rate as a function oftime, while setting other parameters to fixed values. 
Additionally,they mainly use time-varying parameters for prediction and lack asystematic epidemiological analysis.The primary focus of this paper is to introduce a novel method forevaluating time-varying parameters in ODEs-based compartmentalmodels and to assess the impact of the NPIs based on the estimatedparameters. We constructed a SEIRD compartmental model thattakes an incubation period and the corresponding infectivity intoaccount, including both unknown time-varying and constant pa-rameters. Given many unknown parameters and limited data, wemodeled the system of ODEs as one network and the time-varyingparameters as another network to reduce the parameter of neuralnetworks. Furthermore, such structure of the PINNs approach is inline with the prior epidemiological correlations. We then tested theeffectiveness of our methodology using real-world reported data,simulation experiments showed that our proposed PINNs methodeffectively performs data-driven parameter estimation for mod-elling COVID-19 transmission. Moreover, as more data becomesavailable, it can be successfully extended to model and analyzeinfectious disease transmission dynamics in various regions andfor different infectious diseases.2 METHODOLOGY2.1 Compartmental modelCompartmental models enable the simulation of multi-state popu-lation transitions by incorporating domain knowledge and math-ematical assumptions to characterize the dynamics of infectiousdiseases. These models are generally represented as the followingnonlinear dynamical system: dU(t)dt=F(t,U(t);Ξ)U(t0)=U0(1)where U(t)∈RD(typicallyD≫1) is the state variable, t∈[t0,T]is the time range, U(t0)is the initial state, and Ξstands for theparameters of the dynamical system.The SIR compartmental model provided the simplest frameworkthat matched the reporting structure with the least underlyingassumptions. Many variations of the SIR model have been proposedto analyze the transmission of COVID-19. In this paper, we considera geographical region as isolated from other regions, and withinsuch region we divide the population ( N) of study region into fivecompartments, susceptible ( S, vulnerable to COVID-19 infection),exposed (E, latent individual or asymptomatic infective), infected(I, symptomatic infected), recovered ( R, immune to COVID-19), anddead (D, death due to COVID-19). The details of the SEIRD modelare described below: dS(t)dt=−βS(t)(εE(t))+I(t)NdE(t)dt=βS(t)(εE(t)+I(t))N−E(t)αdI(t)dt=E(t)α−γI(t)−μI(t)dR(t)dt=γI(t)dD(t)dt=μI(t)N=S(t)+E(t)+I(t)+R(t)+D(t)(2)WhereS(t),E(t),I(t),R(t),D(t)denote the number of suscepti-ble, exposed, infectious, recovered, and deceased individuals overtime respectively, along with non-negative initial conditions S(0)=S0,E(0)=E0,I(0)=I0,R(0)=R0,D(0)=D0.β≥0representsthe transmission rate, which represents the probability of infectionper exposure when a susceptible individual ( S) has contact withan infected patient ( I) and becomes a latent exposed individual(E). A coefficient parameter εis introduced since the transmissioncapacity of exposed and onset populations may be different. εβrepresents the potential rate per exposure when a susceptible indi-vidual (S) has mutual contact with an exposed individual ( E), andtransmits it to another exposed individual ( E).αis the averageduration of incubation period, 1/αis the rate of latent individualsbecoming infectious Besides, γ≥0represents the recovery rate,μ≥0represents the death rate, and Nis the total population.The assumption that the parameters in Eqs. 
2 are time-constant,which is a highly restrictive and unrealistic one for the real-worldepidemic where various interventions exist. The associated inter-ventions implemented by authorities, and/or mutations of the virus,et al. make the compartmental model require time-varying parame-ters to capture the dynamic of dynamics of COVID-19. Therefore,by considering transmission rate β, recovery rate γand death rateμas functions of time β(t),γ(t),μ(t), the re-written SEIRD modelis as follows: dS(t)dt=−β(t)S(t)(εE(t))+I(t))NdE(t)dt=β(t)S(t)(εE(t))+I(t))N−E(t)αdI(t)dt=E(t)α−γ(t)I(t)−μ(t)I(t)dR(t)dt=γ(t)I(t)dD(t)dt=μ(t)I(t)N=S(t)+E(t)+I(t)+R(t)+D(t)(3)Among them, the five variables S(t),E(t),I(t),R(t),D(t)havethe same meanings as in Eq. 2. If we assume that the total populationNis constant, then the sum of the increase or decrease of the stateof each population is 0, namely,dS(t)dt+dI(t)dt+dR(t)dt+dD(t)dt=0.Physics-informed neural networks integrating compartmental model for analyzing COVID-19 transmission dynamics Conference acronym ’XX, June 03–05, 2023, Woodstock, NYThe basic reproduction number R0is a constant epidemiologicalparameter that provides an estimation of the contagiousness of theinfectious disease. It also serves as a threshold parameter, whenR0>1, one infected individual can trigger an outbreak, while whenR0<1, the infection will not spread in the population. Given acompartmental model, R0can be calculated by the Next GenerationMatrix (NGM) approach [7].If the related parameters in the compartmental model are time-varying as in Eq. 3, the reproduction number R0is expected to keepchanging, as a function of time called the effective reproductionnumberRt.Rtfor the course of SEIRD model using the NGM ap-proach, which yields the following expressions in the proposedSEIRD model:Rt=ε·β(t)α+β(t)γ(t)+μ(t)(4)Rtprovides an estimation of the contagiousness of the infectiousdisease, during the course of an outbreak, where not every individ-ual is considered susceptible.2.2 Deep neural networksDeep neural networks (DNNs) have emerged as a reliable and effec-tive method for nonlinear function approximation, demonstratingremarkable capabilities in scientific computation and engineeringapplications, as evidenced by their widespread utilization. Manytypes of DNNs have been developed such as recurrent neural net-works, convolutional neural networks, and Transformer et al [ 16],and here we only consider fully-connected deep neural networks(FDNN). Neural networks can be viewed as discretizations of contin-uous dynamical systems, making them well-suited for dealing withdynamic systems. Mathematically, an FDNN defines a mapping ofthe formF:x∈Rd=⇒y=F(x)∈Rc, (5)wheredandcare the input and output dimensions, respectively.Generally, a standard neural unit of an FDNN receives an inputx∈Rdand produces an output y∈Rm,y=σ(Wx+b)withW∈Rm×dandb∈Rmbeing weight matrix and bias vector,respectively. σ(·), which is referred to as the activation function,is designed to add element-wise non-linearity to the model. AnFDNN with lhidden layers can be considered a nested compositionof sequential standard neural units. For convenience, we denotethe output of the DNN by y(x;θ)with θstanding for the set of allweights and biases. 
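For concreteness, a minimal PyTorch sketch of such an FDNN with tanh activations is given below; the depth, width, and variable names are illustrative assumptions rather than the architecture used in this work.

```python
import torch
import torch.nn as nn

class FDNN(nn.Module):
    """Fully-connected network F: R^d -> R^c with tanh activations (Eq. 5)."""
    def __init__(self, d_in=1, d_out=5, width=64, depth=4):
        super().__init__()
        layers, d_prev = [], d_in
        for _ in range(depth):
            layers += [nn.Linear(d_prev, width), nn.Tanh()]   # y = tanh(Wx + b)
            d_prev = width
        layers.append(nn.Linear(d_prev, d_out))               # linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, t):
        return self.net(t)

# Example: map time t to the five SEIRD states (sizes are illustrative).
u_net = FDNN(d_in=1, d_out=5)
t = torch.linspace(0.0, 1.0, 132).reshape(-1, 1)
u = u_net(t)                                                  # shape (132, 5)
```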
Specifically, the jthneuron inllayer can beformulated asy[l]j=n[l−1]∑︁k=1w[l]jkσ[l−1](y[l−1]k)+b[l]j, (6)wherey[l−1]krepresents the value of the kthneuron in the l−1layer,n[l−1]represents the number of neurons in the l−1layer,σ[l−1]is the activation function of the l−1layer,w[l]jkis the weightbetween the kthneuron in the l−1layer and the jthneuron in thellayer, andb[l]jis the bias of the jthneuron in the llayer.The nonlinear activation function enhances the ability of DNNto model various non-linear problems, selecting the suitable acti-vation function matters greatly for DNN applied in all domains.Particularly, the activation function has an extremely significantxInputσ...σσσ...σσ............σ...σσfn(x)...f2(x)f1(x)Hidden Layers Output LayerFigure 1: Illustration of the FDNN. A neural network consistsof an input layer (the input x), several hidden layers (com-posed of weights Wl, biasbl, and activation function σ), andan output layer.impact on the success of training PINNs. ReLU activation functionhas been widely used in many deep learning applications due toits dealing well with vanishing gradients problems [ 19]. However,for solving differential equations, the first and second derivativesof the neural networks would serve as inputs to calculate the lossfunction, which means that the activation function of the DNN inPINNs framework requires the second derivative to be satisfied asnon-zero. Definitely, many research works have demonstrated thatsigmoid function and tanh function are suited for effective PINNsframework training tasks.2.3 PINNs for SEIRD modelPhysics-informed neural networks (PINNs) approach is a data-driven approach to approximate the solution of differential equa-tions and estimate unknown parameters. The main idea of PINNs isto integrate a priori knowledge as physical laws or domain exper-tise modelled by differential equations into deep neural networks.Equations in the compartmental model possess coupling and thecoefficients are not independent of each other through the lens ofbiological and epidemics. In this context, we employ two separateDNNs with input tto represent the stats U(t)and time-varying pa-rameters, respectively. For the two unknown constant parameters(α,ε), we designed the modified tanh activation function to repre-sent them. The expression of the tanh function istanh(x)=ex−e−xex+e−x,and the range of values belong to [-1, 1]. Considering that α>0and0≤ε≤1, thus we designed the expression of εastanh(x),the expression of αas21·tanh(x),xis a random sample withuniform distribution generated from the interval [0, 3]. Meanwhile,COVID-19 transmission involves the analysis of real-world data,for which the available data size tends to be small and sparse. Sucha PINNs architecture enables a well-trained model with a limiteddata set.The PINNs framework is required to fit the data and simultane-ously satisfy the equations, whereby the loss function includes twoparts. The first part is the mismatch between the network outputand the available data, and another part is the residual of ODEs. Inthis study, we employ the approximation UN N (t;ΘU)≈U(t)toConference acronym ’XX, June 03–05, 2023, Woodstock, NY Trovato and Tobin, et al.represent the time-varying SEIRD equations (Eqs 3). 
The parame-tersΘare optimized to achieve the best fit with the observed data.Considering the available data Ujat timest1,t2,...,tnas trainingpoints (ground truth), the mean squared error (MSE) is calculatedas follows:MSEu=1NN∑︁j=1ˆUNN(tj)−U(tj)2, (7)Another component of the loss function is the residual of the sys-tems of Eqs. 1, we define the residual of equations as RNN(t)=dU(t)dt−F(UN N,t;Ξ). The residual, denoted as R(t;ΘU), serves asa metric for assessing the accuracy of the approximation UNN(t;ΘU)in satisfying the ordinary differential equations (ODEs). Evaluatingthe residual involves computing the time derivative of the neu-ral network output, which can be accomplished using automaticdifferentiation [ 20]. Automatic differentiation is a computationaltechnique that efficiently computes derivatives by applying thechain rule. It breaks down functions into elementary operationsand calculates their derivatives, allowing for accurate and efficientcomputation of the overall function’s derivative with respect to itsinput variables.MSEr=1NN∑︁j=1RNN(tj)2, (8)In summary, the loss function of proposed PINNs approach is de-fined as:L=ωuMSEu+ωrMSEr (9)The weight coefficients, ωu,ωr, in the loss function play a crucialrole in balancing the optimization process between learning fromthe data and satisfying the ODEs. These parameters allow fine-tuning of the model’s behaviour and trade-off between the twoobjectives. By adjusting the values of ωu,ωr, the emphasis can beplaced on either accurately fitting the available data or ensuringthe ODE constraints are well-satisfied.Consequently, this PINNs model strives to minimize the lossfunction, effectively learning the underlying physics encoded inthe ODEs while accurately capturing the patterns and relationshipsin the available data.3 EXPERIMENTSIn this section, we will provide a description of the collected dataand present the results obtained from parameter estimation andpredictions using the proposed PINNs approach.3.1 Data sourceFor the COVID-19 epidemic in Italy, the first official report of in-digenous case was on February 21, 2020 in Lodi province, whileseveral epidemiological-linked cases were traced back to February20, 2020. The data considered in our study is downloaded fromItalian Civil Protection (http://www.protezionecivile.gov.it/media-comunicazione/comunicati-stampa) and Ministry of Health (http://www.salute.gov.it/portale/home.html).It is comprised of commutative infected, recovered, and deceasedcases for the period from February 20, 2020 (day 1), to June 30,2020 (day 132) [ 8]. To avoid weekly fluctuations induced by thework-leisure shift and nature noise in the real-world data, a 7-dayData()fx....................................dSdtdEdtdIdtdRdtdDdt...EE0dNdtNo inflow conditionNSS,RR ID()t()t()t2 1()rN N jMSE tN 2 1() ()NNuj jMSE U t U tNuu rr LM S E M S E ( : , ) updateNN w b,txDNNsautomatic differentiation MinimizeMismatch of data and UNN Residual of ODEsODE s-based ODEs-based SEIRD modelodeFigure 2: Schematic diagram of the PINNs framework for theSEIRD compartmental model with unknown (time-varyingand constant) parameters. The green-shaded DNNs repre-sents the states UN N (t)to fit the available data and infer theunobserved dynamics. The yellow-shaded DNNs representstime-varying parameters β(t),γ(t),μ(t). The two constant pa-rameters (α,ε) are represented by the modified tanh(t)acti-vation function.moving average was used to smooth the reported data by averagingthe values of each day with those of the 7 days before. 
In order tocontrol the transmission of COVID-19 in Italy, lockdown and manyrestriction measures were implemented from February 23, 2020, asthe developed timeline shown in Fig. 3. All events and interventionsare available from official websites https://mn.gov/governor/covid-19/news/.Key EventsFormal start date of COVID-19: localized lockdown for certain regionsFeb 21 March 8 2022 April 1 10 May 3 18 June 15Ban parks, public gardens, and open-air recreational activityAll non-essential or non-strategic industrial activities are closedLockdown Lockdown LockdownDPCM: initial release of some restriction measuresDPCM: general opening in effect, social distancing and other measures remainFirst DPCM: localized national lockdown, ban of gathering and sports events.National lockdown, commercial activities shutdown11 262020DPCM: general opening in effect, social distancing and other measures remain23First official report caseFigure 3: Timeline of NPIs implemented in Italy to controlCOVID-19. DPCM: Decree of the Prime Minister.3.2 Experimental settingsWe train the PINNs model on a personal laptop running the Win-dows 10 operating system, equipped with an Intel (R) Core (TM)i7-8550U CPU operating at 1.8GHz. We implement the PINNs ap-proach using Python and the PyTorch framework [ 21]. For thenumerical experiment, we train the neural networks using theAdam optimizer with an initial learning rate of 2×10−3and a decayrate of 95%every 2000 epochs. The entire training process takesabout 10 minutes to run 50,000 epochs on all training data, andpredictions can be made within seconds.Physics-informed neural networks integrating compartmental model for analyzing COVID-19 transmission dynamics Conference acronym ’XX, June 03–05, 2023, Woodstock, NY3.3 Results3.3.1 Data fitting. In this subsection, we present the evaluation ofhow well the estimated parameters fit the SEIRD compartmentalmodel on the available data. Fig.4 shows the fitting of the dynamicof the SEIRD model to the available real-world reported data (after7-day smoothing), which demonstrates that the proposed PINNsapproach can accurately fit the different fluctuations in the data.02-24 02-29 03-05 03-10 03-15 03-20 03-25 03-30 04-04 04-09 04-14 04-19 04-24 04-29 05-04 05-09 05-14 05-19 05-24 05-29 06-03 06-08 06-13 06-18 06-23 06-28(a)0100002000030000400005000060000700008000090000100000110000No. of current infectiveobservations7-day rollingPINNs fitted02-24 02-29 03-05 03-10 03-15 03-20 03-25 03-30 04-04 04-09 04-14 04-19 04-24 04-29 05-04 05-09 05-14 05-19 05-24 05-29 06-03 06-08 06-13 06-18 06-23 06-28(b)020000400006000080000100000120000140000160000180000No. of recoveredobservations7-day rollingPINNs fitted02-24 02-29 03-05 03-10 03-15 03-20 03-25 03-30 04-04 04-09 04-14 04-19 04-24 04-29 05-04 05-09 05-14 05-19 05-24 05-29 06-03 06-08 06-13 06-18 06-23 06-28(c)05000100001500020000250003000035000No. of deathsobservations7-day rollingPINNs fittedData fitting during trainingFigure 4: Data fitting during training. (a.) Fitting to the avail-able data of current infectious. (b.) Fitting to the availabledata of cumulative recovered. (c.) Fitting to the available dataof cumulative deaths. Dot: observed data. Line: 7-day rollingof observed data. Dashed: PINNs’ prediction of dynamics.3.3.2 Inference. We aim to infer the time-varying parameters β(t),γ(t),μ(t), as well as the constants αandε, through the inverseproblem solving of the SEIRD compartmental model. 
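To make this inverse problem concrete, one optimization step of the loss in Eq. (9) might look roughly as follows in PyTorch. The network sizes, the softplus transform keeping the rates positive, the unit loss weights, and the dummy data are illustrative assumptions rather than the authors' implementation; only the tanh-based transforms of α and ε follow the description in Section 2.3.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_POP = 60_360_000            # total population of Italy (approximate, illustrative)

def mlp(d_out, width=64):
    # small tanh network mapping time t to d_out outputs (sizes are illustrative)
    return nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                         nn.Linear(width, width), nn.Tanh(),
                         nn.Linear(width, d_out))

u_net = mlp(5)                # approximates the states S, E, I, R, D over time
p_net = mlp(3)                # approximates the rates beta(t), gamma(t), mu(t)
alpha_raw = nn.Parameter(torch.tensor(1.0))   # alpha = 21*tanh(alpha_raw)
eps_raw = nn.Parameter(torch.tensor(1.0))     # eps = tanh(eps_raw)

opt = torch.optim.Adam(list(u_net.parameters()) + list(p_net.parameters())
                       + [alpha_raw, eps_raw], lr=2e-3)

def d_dt(y, t):
    # time derivative of a network output via automatic differentiation
    return torch.autograd.grad(y.sum(), t, create_graph=True)[0]

def training_step(t, obs_IRD, w_u=1.0, w_r=1.0):
    t = t.clone().requires_grad_(True)
    S, E, I, R, D = u_net(t).split(1, dim=1)
    beta, gamma, mu = F.softplus(p_net(t)).split(1, dim=1)
    alpha, eps = 21.0 * torch.tanh(alpha_raw), torch.tanh(eps_raw)

    inc = beta * S * (eps * E + I) / N_POP            # new-infection term of Eqs. (3)
    res = torch.cat([d_dt(S, t) + inc,
                     d_dt(E, t) - inc + E / alpha,
                     d_dt(I, t) - E / alpha + (gamma + mu) * I,
                     d_dt(R, t) - gamma * I,
                     d_dt(D, t) - mu * I], dim=1)     # ODE residuals
    mse_r = (res ** 2).mean()                                        # Eq. (8)
    mse_u = ((torch.cat([I, R, D], dim=1) - obs_IRD) ** 2).mean()    # Eq. (7), observed compartments
    loss = w_u * mse_u + w_r * mse_r                                 # Eq. (9)

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# One illustrative step on dummy data; the real inputs would be the 7-day-smoothed series.
t_obs = torch.linspace(0.0, 131.0, 132).reshape(-1, 1)
training_step(t_obs, torch.rand(132, 3))
```

Keeping the state network and the parameter network separate mirrors the two-DNN structure of Fig. 2 and keeps the number of trainable weights small for the limited data.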
The incuba-tion period and the infectiousness during this period are parametersspecific to the virus, which can be obtained from clinical case in-formation or inferred using statistical or mathematical modellingbased on available data. In our study, we estimate the incubationperiod of COVID-19 to be approximately 5.8 days, and the infec-tiousness during the incubation period is found to be nearly equalto 99.9% of the infection period.The transmission dynamics of infectious diseases are influencedby multiple factors, such as government interventions, individualbehaviour, and medical resources. In order to accurately modelthe spread of infectious diseases using compartmental models, it isnecessary to update certain parameters over time to account for theevolving impact of interventions. These parameters include β(t),γ(t), andμ(t), which represent the time-varying rates of transmis-sion, recovery, and mortality, respectively. In Figure 5, we presentthe inference results of these time-varying parameters in Italy fromFebruary 20 to June 30, 2020. This analysis provides insights intohow the values of β(t),γ(t), andμ(t)change over the specifiedtime period, reflecting the impact of interventions and other factorson the dynamics of the disease.Note that the events that have an impact on β(t)have to do withpeople’s adaption to preventive interventions and the interactionsamong individuals, whereas μ(t)relates to the availability and ef-fectiveness of health care, as well as on the resource availability inhospitals.γ(t)is known to be a disease-specific parameter (inverseof the infectious period) but is also affected by the capacity of thehealthcare system to accommodate hospitalization. As shown inFig.5 (a), the transmission rate β(t)can fit well with what would beexpected given such events. The earliest traceable first confirmedcase of COVID-19 on February 20, 2020, the authorities of Italystarted imposing the localized lockdown for certain regions on Feb-ruary 23, 2020, these control measures achieved a certain success, asdemonstrated by a significant reduction in transmission rates β(t).As far asγ(t)andμ(t), hospitals’ ability particularly emergencyrooms had a considerable impact. In the context of COVID-19, hos-pitals are at full capacity in the first months of the outbreak, andas months went by, healthcare professionals learned more aboutpossible treatments to treat the disease’s symptoms and effects.This usually results in a decrease in the proportion of individualsthat died from the disease (decrease of μ(t)) and in a decrease inthe recovery time (an increase of γ(t)). As shown in Fig.5 (b) andFig.5 (c), in qualitative terms, was an increasing trend in γ(t)and adecreasing trend in μ(t).The effective reproduction number is a crucial parameter in theSEIRD model that helps to predict the spread of infectious diseases.Rtless than 1 indicates that the transmission of the infectiousdisease will gradually disappear. By monitoring changes in Rtovertime, public health officials can make informed decisions aboutinterventions to control the spread of the disease. Fig. 6 (a) showsthe evolution of Rt=ε·β(t)α+β(t)γ(t)+μ(t)in the proposed SEIRDcompartmental model from February 20 to June 30, 2020. In the firstseveral days of the outbreak, the effective reproduction numberRtwas greater than 8, which resulted in a substantial outbreak.On February 25, Rtgradually decreased as localized lockdown forcertain regions and the awareness of the epidemic. 
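For reference, Eq. (4) can be evaluated pointwise once the parameters have been estimated; the sketch below uses placeholder rate values of the same order as Fig. 5 rather than the fitted curves.

```python
import numpy as np

def effective_reproduction_number(beta_t, gamma_t, mu_t, alpha=5.8, eps=0.99):
    """R_t = eps * beta(t) * alpha + beta(t) / (gamma(t) + mu(t))   (Eq. 4)."""
    return eps * beta_t * alpha + beta_t / (gamma_t + mu_t)

# Placeholder trajectories (early outbreak -> after lockdown); not the fitted values.
beta_t  = np.array([0.55, 0.30, 0.10, 0.05])
gamma_t = np.array([0.03, 0.04, 0.06, 0.08])
mu_t    = np.array([0.07, 0.05, 0.02, 0.01])
print(effective_reproduction_number(beta_t, gamma_t, mu_t))   # falls from ~8.7 to ~0.8
```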
However, Rtwasstill greater than 1, which may be due to the partially incompletelockdown, or the movement of people from northern to southernItaly when the country-wide lockdown was announced but not yetenforced. When the national lockdown was fully operational andstrictly enforced, Rtkeeps decreasing and finally reached below 1.Moreover,Rtsteadily declined at the end of March due to a widertesting campaign that identified more mildly symptomatic infectedindividuals. Since June 15, Rtshows a growing trend due to DPCMdeclaring that general opening was in effect, social distancing, andother measures remained. Additionally, to validate the estimated Rt,a serial Bayesian model was implemented to produce the Rtof ItalyConference acronym ’XX, June 03–05, 2023, Woodstock, NY Trovato and Tobin, et al.02-20 02-25 03-01 03-06 03-11 03-16 03-21 03-26 03-31 04-05 04-10 04-15 04-20 04-25 04-30 05-05 05-10 05-15 05-20 05-25 05-30 06-04 06-09 06-14 06-19 06-24 06-29(a)0.00.10.20.30.40.50.6(t)transmission rate02-20 02-25 03-01 03-06 03-11 03-16 03-21 03-26 03-31 04-05 04-10 04-15 04-20 04-25 04-30 05-05 05-10 05-15 05-20 05-25 05-30 06-04 06-09 06-14 06-19 06-24 06-29(b)0.020.040.060.080.10(t)recovery rate02-20 02-25 03-01 03-06 03-11 03-16 03-21 03-26 03-31 04-05 04-10 04-15 04-20 04-25 04-30 05-05 05-10 05-15 05-20 05-25 05-30 06-04 06-09 06-14 06-19 06-24 06-29(c)0.000.010.020.030.040.050.060.070.08(t)Death rateTime-varying parameters (t), (t), (t)Figure 5: The time-varying transmission rate of SEIRD modelbased on PINNs approach on Italy data from February 20 toJune 30, 2020. (a): transmission rate β(t). (b): recovery rateγ(t). (c): death rate μ(t)at the same time period [ 5], as shown in Fig. 6 (b). Parameters forthe serial interval distribution in the model were set according tothe published literature (mean = 7.5 d; SD = 3.4 d) [ 18,23]. As shownin 6, theRtestimated by the proposed PINNs approach is essentiallythe same as that estimated by the Bayesian model. Besides, the resultof the proposed approach provides a more detailed and accuratecapture of the dynamics.3.3.3 Forecasting. Modeling results can provide reliable feedbackinformation for the authorities to make future decisions. The ODEs-based compartmental model requires determined initial conditionsand model parameters to make predictions. To test the performanceof the proposed PINNs approach, we performed predictions for theearly outbreak of COVID-19 in Italy at one-month, two-month, andthree-month, respectively. As the initial conditions can be obtainedfrom the training data and the model parameters are already cali-brated, we can forecast the epidemic dynamics by performing theforward process. In the prediction part, the value of β(t),γ(t)andμ(t)are assumed to be their final value of the training time window.Fig. 7 displays the one-week prediction and corresponding obser-vations for three time periods produced by using the SEIRD modelwith the estimated parameters. Note that the number of recoveredand death states in the SEIRD model are terminal states, which02-20 02-25 03-01 03-06 03-11 03-16 03-21 03-26 03-31 04-05 04-10 04-15 04-20 04-25 04-30 05-05 05-10 05-15 05-20 05-25 05-30 06-04 06-09 06-14 06-19 06-24 06-29Date (February 20, 2020 to June 30, 2020)0246810RtEffective reproduction numberFigure 6:Rtin Italy from February 24 to June 30, 2020. (a.)Rt estimated by proposed PINNs approach for SEIRD model.(b.)Rtestimated by serial Bayesian model.means that the changes in the number of recovered and death peo-ple are always non-decreasing. 
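A minimal sketch of this forward step (not the authors' code): freeze the rates at their last fitted values and integrate Eqs. (3) forward from the last observed state. All numerical values below are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seird_rhs(t, u, beta, gamma, mu, alpha, eps, n_pop):
    """Right-hand side of Eqs. (3) with the rates frozen at constants."""
    S, E, I, R, D = u
    inc = beta * S * (eps * E + I) / n_pop
    return [-inc, inc - E / alpha, E / alpha - (gamma + mu) * I, gamma * I, mu * I]

n_pop = 60_360_000                                    # illustrative population size
u_last = [n_pop - 3.0e5, 4.0e4, 6.5e4, 1.5e5, 3.2e4]  # placeholder state at the end of training
args = (0.05, 0.08, 0.01, 5.8, 0.99, n_pop)           # beta, gamma, mu, alpha, eps (placeholders)

sol = solve_ivp(seird_rhs, (0.0, 7.0), u_last, t_eval=np.arange(8), args=args)
S, E, I, R, D = sol.y                                 # 7-day forecast of each compartment
```

Forecast accuracy can then be summarized against the held-out observations with the MAE, RMSE, and MAPE defined below (Eqs. 10–12).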
In turn, the infected people may seeperiods of increase and decrease due to it being a state of transition.Fig.7 (a) displays the one-week prediction based on the reporteddata from February 20 to March 20, 2020, Fig.7 (b) displays the one-week prediction based on the reported data from February 20 toApril 19, 2020, and Fig.7 (c) displays the one-week prediction basedon the reported data from February 20 to May 19, 2020. The perfectmatch between the predictions and the observations demonstratesthe parameters inferred by the learned network are very plausible,as well as the generalization ability of the model.Furthermore, to quantitatively test the prediction performanceof the proposed approach, We use three evaluation metrics to makefair and effective comparisons. They are mean absolute error (MAE),root mean square error (RMSE), and mean absolute percentage error(MAPE). The calculation method is shown in Eq. (10)(12)(11).MAE =1nn∑︁i=1|ˆyi−yi|, (10)RMSE =vt1nn∑︁i=1(ˆyi−yi)2, (11)MAPE =1nn∑︁i=1|ˆyi−yi|ˆyi∗100%, (12)Interventions to control COVID-19 keep adjusting, which mayresult in uncertainty, experimental results as represented in Table1show the highly accurate forecasting capability of the proposedapproach.Physics-informed neural networks integrating compartmental model for analyzing COVID-19 transmission dynamics Conference acronym ’XX, June 03–05, 2023, Woodstock, NYTable 1: The forecasting performance in 3-day, 5-day and 7-day.MetricsAfter March 20, 2020 After April 19, 2020 After May 19, 20203-day 5-day 7-day 3-day 5-day 7-day 3-day 5-day 7-dayMAE(I) 5411 5790 6419 2503 3258 2792 1352 2170 3046RMSE(I) 5431 5819 6519 3705 2618 3275 1567 2515 3514MAPE(I) 11.60% 11.52% 11.78% 2.32% 3.04% 2.61% 2.20% 3.70% 5.41%MAE(R) 813 1728 2944 2934 5704 9001 1643 2700 4170RMSE(R) 959 2128 3706 3321 6821 10936 1880 3151 4972MAPE(R) 11.93% 20.07% 31.04% 5.57% 10.00% 14.83% 1.23% 1.96% 2.97%MAE(D) 423 543 927 330 235 318 147 109 95RMSE(D) 527 637 1151 349 279 379 147 122 109MAPE(D) 8.36% 8.98% 12.64% 1.35% 0.95% 1.24% 0.45% 0.34% 0.30%03-20 03-21 03-22 03-23 03-24 03-25 03-26 03-27Date35000400004500050000550006000065000No. of infectitioni(t)-predictioni(t)-observation03-20 03-21 03-22 03-23 03-24 03-25 03-26 03-27Date400060008000100001200014000160001800020000No. of recoveredr(t)-predictionr(t)-observation03-20 03-21 03-22 03-23 03-24 03-25 03-26 03-27Date3000400050006000700080009000100001100012000No. of deathsd(t)-predictiond(t)-observation04-19 04-20 04-21 04-22 04-23 04-24 04-25 04-26Date9500097500100000102500105000107500110000112500115000No. of infectition04-19 04-20 04-21 04-22 04-23 04-24 04-25 04-26Date400005000060000700008000090000No. of recovered04-19 04-20 04-21 04-22 04-23 04-24 04-25 04-26Date230002400025000260002700028000No. of deaths05-19 05-20 05-21 05-22 05-23 05-24 05-25 05-26Date450005000055000600006500070000No. of infectition05-19 05-20 05-21 05-22 05-23 05-24 05-25 05-26Date120000125000130000135000140000145000150000155000160000No. of recovered05-19 05-20 05-21 05-22 05-23 05-24 05-25 05-26Date320003220032400326003280033000No. of deaths7-day forecastingFigure 7: Forecasting results of the SEIRD models based onestimated parameters. In the first column are plotted the pre-dicted current infections, in the second column are plottedthe predicted cumulative recovered, in the third column areplotted the predicted cumulative deaths, and the dotted boxesrepresent the corresponding observations. a. 7-day forecast-ing results based on the February 20 to March 20, 2020 timewindow. b. 
7-day forecasting results based on the February20 to April 19, 2020 time window. c. 7-day forecasting resultsbased on the February 20 to May 19, 2020 time window.4 DISCUSSIONTransmission modelling is increasingly being used to support publichealth decision-making in the control of infectious diseases. In thispaper, a modified SEIRD compartmental model with time-varyingparameters is introduced to describe and predict the dynamics ofCOVID-19 transmission in Italy.Estimating the unknown parameters of this model is a complexinverse problem, for the solution of which we proposed a domain-specific PINNs approach.The proposed approach has been applied to modelling the COVID-19 transmission in Italy, the estimated parameters resulted effectivein fitting the COVID-19 contagion data and in providing accuratepredictions of the evolution. Besides, these results, the proposedPINNs approach allows us to have a more detailed understandingof the contagion mechanism.In Fig. 5 (a) is that the control measures imposed by the authori-ties seem to have been effective in reducing the key transmissionrate parameter β(t). Fig. 5 (b) and (c) show that the recovery ratetends to increase with time and the death rate to decrease. Thisphenomenon, which seems not directly related to the lockdown,can be attributed to different causes, among which a better un-derstanding of the disease and consequent improvement in theeffusiveness of the response from the national health system, andpossibly a change in the nature, virulence, and lethality of the virus.Furthermore, we evaluate how the estimated parameters fit theSEIRD compartmental model by comparing the results of previouspublications. We compare our results to those obtained using themethodology of the rolling regression framework [ 4], where theorder of magnitude of the time-varying parameters β(t),γ(t)andμ(t)is in agreement and the trend is almost identical. A compre-hensive meta-analysis demonstrated that the median incubationperiod for general transmissions in early outbreaks was 5.8 days[95% confidence interval (95% CI): 5.3, 6.2] [ 25]. Li et al. analyzeddata on the first 425 confirmed cases in Wuhan to determine the epi-demiologic characteristics of NCIP, the results show that the meanincubation period was 5.2 days (95% confidence interval [CI], 4.1 to7.0) [ 14]. Yang et al. collected contact tracing data in a municipalityin Hubei province during a full outbreak period to estimate theincubation period and serial interval of COVID-19, the estimatedmedian incubation period of COVID-19 is 5.4 days (bootstrapped95% confidence interval (CI) 4.8–6.0) [ 26]. The estimated αby theproposed PINNs approach is 5.8, which is consistent with the re-sults of the above research. The estimated εby the proposed PINNsapproach is 0.99, which means that the transmission capacity ofexposed and onset populations are nearly identical [ 9]. Numer-ous related studies demonstrate that the incubation period and theinfection period carry almost the same capacity for transmission[6, 22].Conference acronym ’XX, June 03–05, 2023, Woodstock, NY Trovato and Tobin, et al.The goal of modeling the transmission dynamics of an infec-tious disease is to capture the mechanisms of a host passing onthe infection to other individuals. Once the information is clear,a model can be used as a sort of experimental system to simu-late what would happen to the evolution of disease with differentinterventions implemented. 
While the proposed PINNs approachindeed offers many advantages, it does have some limitations. Oneof the main limitations is that PINNs architecture requires priorknowledge of the physical laws and constraints that govern theproblem being solved. The structure of compartmental models maychange depending on the question of interest and impact their ac-curacy. That means if the underlying epidemiological laws are notwell understood or if the available data is not consistent with theknown epidemiological laws, the model may not work well. But itshould be noted that the emphasis on infectious disease models ison the application of public health, not the mathematics of thesemodels. As world-renowned Statistician George E. P. Box made thefollowing statement. "All models are wrong, but some are useful."5 CONCLUSIONSIn this paper, we proposed a novel PINNs approach to estimatethe unknown parameters (including time-varying and constantparameters) for the ODEs-based compartmental model to depictthe dynamic of the COVID-19 transmission. The experiment resultwith real-world report data reveals that the proposed COVID-19modeling approach enables to yield of epidemiological models thatcan describe the real-time dynamics of the contagion, providingreliable predictions and valuable insight into the contagion mech-anisms. We have provided a completed workflow for analyzinginfectious disease transmission systems described by a system ofODEs produced compartmental model. We emphasize that the pro-posed PINNs approach can easily be implemented without anybackground knowledge about numerical analysis (for example, sta-bility conditions) but about some libraries for implementing neuralnetworks. For a given scenario that we consider, the proposedPINNs approach can be effective for simulating different epidemicscenarios, testing various hypotheses, and for designing suitablecontrol measures.6 ACKNOWLEDGMENTSThe study was supported by the National Natural Science Founda-tion of China (82041024 to Feng Chen and 81973142 to YongyueWei). This study was also partially supported by the Bill & MelindaGates Foundation (INV-006371).REFERENCES[1]Toheeb A Biala and AQM Khaliq. 2021. A fractional-order compartmental modelfor the spread of the COVID-19 pandemic. Communications in Nonlinear Scienceand Numerical Simulation 98 (2021), 105764.[2]Fred Brauer. 2008. Compartmental models in epidemiology. Mathematicalepidemiology (2008), 19–79.[3]Min Cai, George Em Karniadakis, and Changpin Li. 2022. Fractional SEIR modeland data-driven predictions of COVID-19 dynamics of Omicron variant. Chaos:An Interdisciplinary Journal of Nonlinear Science 32, 7 (2022), 071101.[4]Giuseppe C Calafiore, Carlo Novara, and Corrado Possieri. 2020. A time-varyingSIRD model for the COVID-19 contagion in Italy. Annual reviews in control 50(2020), 361–372.[5]Anne Cori, Simon Cauhemez, Neil Fergunson, Christophe Freiser, ElizabethDahlqwist, Alex Demarsh, Thibaut Jombart, Zhian Kamvar, Justin Lessler, ShikunLi, et al .2020. Estimate time varying reproduction numbers from epidemic curves.R Project for Statistical Computing. R package version 2, 4 (2020).[6]Fabio Della Rossa, Davide Salzano, Anna Di Meglio, Francesco De Lellis, MarcoCoraggio, Carmela Calabrese, Agostino Guarino, Ricardo Cardona-Rivera, PietroDe Lellis, Davide Liuzza, et al .2020. A network model of Italy shows thatintermittent regional strategies can alleviate the COVID-19 epidemic. Naturecommunications 11, 1 (2020), 5106.[7]Odo Diekmann, JAP Heesterbeek, and Michael G Roberts. 2010. 
The constructionof next-generation matrices for compartmental epidemic models. Journal of theroyal society interface 7, 47 (2010), 873–885.[8]Giulia Giordano, Franco Blanchini, Raffaele Bruno, Patrizio Colaneri, AlessandroDi Filippo, Angela Di Matteo, and Marta Colaneri. 2020. Modelling the COVID-19epidemic and implementation of population-wide interventions in Italy. Naturemedicine 26, 6 (2020), 855–860.[9]Malú Grave, Alex Viguerie, Gabriel F Barros, Alessandro Reali, and Alvaro LGACoutinho. 2021. Assessing the spatio-temporal spread of COVID-19 via compart-mental models with diffusion in Italy, USA, and Brazil. Archives of ComputationalMethods in Engineering 28 (2021), 4205–4223.[10] Charles W Groetsch and CW Groetsch. 1993. Inverse problems in the mathematicalsciences . Vol. 52. Springer.[11] George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, SifanWang, and Liu Yang. 2021. Physics-informed machine learning. Nature ReviewsPhysics 3, 6 (2021), 422–440.[12] William Ogilvy Kermack and Anderson G McKendrick. 1927. A contribution tothe mathematical theory of epidemics. Proceedings of the royal society of london.Series A, Containing papers of a mathematical and physical character 115, 772(1927), 700–721.[13] Ehsan Kharazmi, Min Cai, Xiaoning Zheng, Zhen Zhang, Guang Lin, andGeorge Em Karniadakis. 2021. Identifiability and predictability of integer-andfractional-order epidemiological models using physics-informed neural networks.Nature Computational Science 1, 11 (2021), 744–753.[14] Qun Li, Xuhua Guan, Peng Wu, Xiaoye Wang, Lei Zhou, Yeqing Tong, Ruiqi Ren,Kathy SM Leung, Eric HY Lau, Jessica Y Wong, et al .2020. Early transmissiondynamics in Wuhan, China, of novel coronavirus–infected pneumonia. NewEngland journal of medicine (2020).[15] Jie Long, AQM Khaliq, and Khaled M Furati. 2021. Identification and predictionof time-varying parameters of COVID-19 model: a data-driven deep learningapproach. International Journal of Computer Mathematics 98, 8 (2021), 1617–1632.[16] Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Daniel Fink,Olivier Francon, Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy,et al.2019. Evolving deep neural networks. In Artificial intelligence in the age ofneural networks and brain computing . Elsevier, 293–312.[17] Renato G Nascimento, Kajetan Fricke, and Felipe AC Viana. 2020. A tutorialon solving ordinary differential equations using Python and hybrid physics-informed neural network. Engineering Applications of Artificial Intelligence 96(2020), 103996.[18] World Health Organization et al .2020. Coronavirus disease 2019 (COVID-19):situation report, 73. (2020).[19] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficultyof training recurrent neural networks. In International conference on machinelearning . PMLR, 1310–1318.[20] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang,Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer.2017. Automatic differentiation in pytorch. (2017).[21] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, GregoryChanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al .2019.Pytorch: An imperative style, high-performance deep learning library. Advancesin neural information processing systems 32 (2019).[22] Sebastian Stockmaier, Nathalie Stroeymeyt, Eric C Shattuck, Dana M Hawley,Lauren Ancel Meyers, and Daniel I Bolnick. 2021. Infectious diseases and socialdistancing in nature. 
Science 371, 6533 (2021), eabc8881.[23] Biao Tang, Xia Wang, Qian Li, Nicola Luigi Bragazzi, Sanyi Tang, Yanni Xiao,and Jianhong Wu. 2020. Estimation of the transmission risk of the 2019-nCoVand its implication for public health interventions. Journal of clinical medicine 9,2 (2020), 462.[24] Yongyue Wei, Feng Sha, Yang Zhao, Qingwu Jiang, Yuantao Hao, and Feng Chen.2021. Better modelling of infectious diseases: lessons from covid-19 in China.bmj375 (2021).[25] Yongyue Wei, Liangmin Wei, Yihan Liu, Lihong Huang, Sipeng Shen, RuyangZhang, Jiajin Chen, Yang Zhao, Hongbing Shen, and Feng Chen. 2022. Compre-hensive estimation for the length and dispersion of COVID-19 incubation period:a systematic review and meta-analysis. Infection 50, 4 (2022), 803–813.[26] Lin Yang, Jingyi Dai, Jun Zhao, Yunfu Wang, Pingji Deng, and Jing Wang. 2020.Estimation of incubation period and serial interval of COVID-19: analysis of 178cases and 131 transmission chains in Hubei province, China. Epidemiology &Infection 148 (2020). |
Unyf3QsNmx | Hierarchical Clustering and Multivariate Forecasting for HealthEconometricsAtika Rahman Paddoapaddo@iu.eduIndiana University Purdue UniversityIndianapolisIndianapolis, Indiana, USASadia Afreenfnsadia@iu.eduIndiana University Purdue UniversityIndianapolisIndianapolis, Indiana, USASaptarshi Purkayasthasaptpurk@iupui.eduIndiana University Purdue UniversityIndianapolisIndianapolis, Indiana, USAABSTRACTData science approaches in Health Econometrics and Public Healthresearch are limited, with a lack of exploration of state-of-the-artcomputational methods. Recent studies have shown that neuralnetworks and machine learning methods outperform traditional sta-tistical methods in forecasting and time-series analysis. In this study,we demonstrate the use of unsupervised and supervised machinelearning approaches to create "what-if" scenarios for forecasting thelong-term impact of changes in socio-economic indicators on healthindicators. These indicators include basic sanitation services, im-munization, population ages, life expectancy, and domestic healthexpenditure. To begin, we utilized Hierarchical Cluster Analysisto group 131 countries into 9 clusters based on various indicatorsfrom the World Bank Health Statistics and Nutrition dataset. Thisstep allowed us to create clusters of countries. In order to showcasethe feasibility of our approach, we performed a time series analysisusing multivariate prophet on the most significant features froma cluster consisting of Bahrain, Kuwait, Oman, Qatar, and SaudiArabia. The study developed robust models ( R2=0.93+) capableof forecasting 11 health indicators up to 10 years into the future.By employing these "what-if" scenarios and forecasting models,policymakers and healthcare practitioners can make informed deci-sions and effectively implement targeted interventions to addresshealth-related challenges.CCS CONCEPTS•Computing methodologies →Modeling methodologies ;•Applied computing →Health informatics ;•Informationsystems→Clustering ;Information systems applications .KEYWORDSClustering, forecasting, health econometrics, data scienceACM Reference Format:Atika Rahman Paddo, Sadia Afreen, and Saptarshi Purkayastha. 2023. Hier-archical Clustering and Multivariate Forecasting for Health Econometrics.InProceedings of epiDAMIK @ SIGKDD Workshop. ACM, New York, NY,USA, 8 pages. https://doi.org/XXXXXXX.XXXXXXXPermission to make digital or hard copies of all or part of this work for personal orclassroom use is granted without fee provided that copies are not made or distributedfor profit or commercial advantage and that copies bear this notice and the full citationon the first page. Copyrights for components of this work owned by others than ACMmust be honored. Abstracting with credit is permitted. To copy otherwise, or republish,to post on servers or to redistribute to lists, requires prior specific permission and/or afee. Request permissions from permissions@acm.org.epiDAMIK @ SIGKDD Workshop, 2023©2023 Association for Computing Machinery.ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. . . $15.00https://doi.org/XXXXXXX.XXXXXXX1 INTRODUCTIONHealth econometrics is a multidisciplinary field that combines eco-nomics and statistics to study various aspects of healthcare systems,policies, and outcomes. Traditionally, econometric methods havebeen employed to analyze healthcare data, including regressionmodels, panel data analysis, and instrumental variable techniques[20, 7]. 
However, there is a growing recognition of the potentialbenefits of incorporating these advanced techniques into healtheconometrics research.In today’s interconnected society, understanding the factors thataffect health outcomes is crucial for effective policymaking andhealthcare treatments. With the availability of extensive healthdata, advanced analysis methods can provide valuable insights tosupport evidence-based decision-making. The World Bank’s HealthStatistics collection offers a wealth of data on various health in-dices across nations [26]. In this study, we aim to develop a betterunderstanding of the predefined Gulf Cooperation Council (GCC)countries, which share similar economies and development goals[15]. By utilizing a clustering algorithm, we have identified simi-larities in their health statistics [34]. However, this study does notinclude one of the GCC countries, the United Arab Emirates (UAE).Katoue et al. argued that the health issues faced in the MiddleEast and North Africa regions must be highlighted, as these coun-tries still face challenges in providing equitable and high-qualityhealthcare services. Limited literature supports evidence of im-provements in these areas [13]. To address the health challengesin the GCC countries, including Bahrain, Kuwait, Oman, Qatar,Saudi Arabia, and the UAE, innovative strategies are necessary toimprove the overall health status of the Middle Eastern countries[15, 19]. A United Nations report highlights disparities and com-monalities in health factors among different regions in the Arabworld [31]. While the report suggests that the GCC countries havemade progress in maintaining sanitation and safe drinking water,it is unclear whether all countries in the region will continue withthe same policies in the future [31].This study aims to identify any disparities between countries re-garding uniform healthcare provision. The 2015 World Bank reportemphasizes the impact of health outcomes on health policies andexpenditure in the GCC countries [28]. Changes in health outcomes,such as non-communicable diseases and life expectancy, coupledwith inflation, may create disparities in health expenditure amongthese countries [2].It remains uncertain which countries can improve overall health-care and which may lag behind in developing uniform health poli-cies [8]. Additionally, our research study focuses on populationwell-being, particularly in different age groups, and factors suchepiDAMIK @ SIGKDD Workshop, 2023 Paddo, Afreen and Purkayasthaas expenditure, immunization, and survival rates. Understandingthe association between age and other health factors is crucialfor targeting "age-specific" policies in healthcare management anddisease prevention [9]. This is significant in terms of healthcaremanagement and disease prevention.This research paper combines cluster analysis, feature impor-tance analysis, and multivariate time series modeling to uncover theunderlying factors influencing health outcomes within a selectedcluster comprising five GCC countries: Bahrain, Kuwait, Oman,Qatar, and Saudi Arabia. The findings contribute to a deeper under-standing of the complex dynamics of health indicators and provideactionable insights for policymakers and healthcare professionals.2 RELATED WORKSBalçik et al. [5] conducted a study on clustering algorithms thatis similar to ours. They focused on the hierarchical clustering ofEuropean Union countries based on preselected features to analyzehealthcare development. 
Their clustering results were evaluatedusing statistical differences between indicator values. Similarly,Raheem et al. [29] approached their objective using the silhouettescore, providing a clearer context for distinguishing clusters. Whileboth approaches seemed reasonable, we opted to use the silhouettescore in our study to understand the distinctiveness of our clusters,which yielded high accuracy in identifying cluster formation.Several studies have been conducted on a national level usingclustering approaches to determine differences in health indicatorsand gain insights into various countries. Proksch et al. [27] analyzedthe clustering of 30 OECD countries to identify the varying aspectsof health that differentiate these clusters. Muldoon et al. [23] andLefèvre et al. [17] explored similarities among countries and theircontributions to health factors. The former focused on mortalitysignificance, while the latter employed a multivariate clusteringapproach to identify patterns in population and healthcare systems.In contrast to these studies, our research includes a forecastingapproach, which provides predictive conclusions for policymakers,analysts, and health practitioners.Levantesi et al. [18] also utilized a multivariate forecasting ap-proach to develop a predictive understanding of healthcare, albeitnot aligned with the Prophet model. Khan & Noor [14] explored theapplication of the Prophet time series approach to visualize futurehealth outcomes, but their study employed a univariate Prophet ap-proach. In our study, we employed a multivariate Prophet approach,which offered a unique perspective by determining the relationshipbetween changes in one indicator and another more accurately.Ahmed et al. [1] and Ampofo & Boateng [4] also adopted interest-ing approaches using multivariate Prophet, focusing specifically oncardiovascular and diabetes health sectors, respectively.Therefore, our research aims to establish a comprehensive as-sociation among predicted population well-being, which can beutilized to advance our understanding of healthcare outcomes.3 METHODOLOGYThe methodology utilized in this research paper followed a se-quential process to analyze health data. Firstly, the data underwentpreprocessing. Next, a dendrogram was constructed using the Wardmethod to identify clusters. A threshold was applied using the ’fclus-ter’ function to determine the number of clusters. Afterward, theimportant features for each cluster were identified using a thresholdof 0.615. We employed the multivariate Prophet method for time se-ries forecasting and predicting future trends. Finally, statistical testswere conducted on the features to identify significant differencesin the upcoming years.3.1 Data CollectionWe obtained the Health Statistics and Nutrition dataset from TheWorld Bank, which offers comprehensive health indicators for vari-ous countries from 1960 to 2021.3.2 Data Preprocessing3.2.1 Data Cleaning. Initially, the original dataset contained in-formation for 266 countries/regions and 255 indicators. To focuson a specific midway time shot, we selected data from 2000. Weexcluded regional aggregations from the dataset (EU, AFRO, etc.)and countries with significant missing values for most indicators(e.g., United Arab Emirates, Aruba, Afghanistan, Poland, Barbados,Guinea). Additionally, we removed indicators with extensive nullvalues across countries. Any remaining null values for a countrywere imputed using the median of that column. 
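To make this pipeline concrete, the preprocessing just described, together with the scaling and clustering steps detailed in Sections 3.2.2 to 3.3.4 below, can be sketched roughly as follows. This is an illustrative sketch only, not the authors' code (their repository is linked in Section 3.6.2): the file name, the column layout of the World Bank extract, and the completeness cut-off are assumptions.

import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import silhouette_score, silhouette_samples
from scipy.cluster.hierarchy import linkage, fcluster

# Assumed layout: one row per country, one column per indicator (year 2000 values).
df = pd.read_csv("health_nutrition_2000.csv", index_col="country")   # hypothetical file

# Drop indicators that are mostly missing, then impute remaining gaps with the column median.
df = df.dropna(axis=1, thresh=int(0.7 * len(df)))   # the 70% completeness cut-off is an assumption
df = df.fillna(df.median(numeric_only=True))

# Scale every indicator to [0, 1] so that no single feature dominates the distances.
X = MinMaxScaler().fit_transform(df.values)

# Ward linkage and a flat cut at distance 5, as in Section 3.3.3.
Z = linkage(X, method="ward")
labels = fcluster(Z, t=5, criterion="distance")

# Overall and per-cluster silhouette scores (cf. Equation 1).
print("overall silhouette:", silhouette_score(X, labels))
per_sample = silhouette_samples(X, labels)
for c in np.unique(labels):
    print(f"cluster {c}: mean silhouette = {per_sample[labels == c].mean():.3f}")

With a distance threshold of 5, a cut of this kind is what yields the nine clusters reported in Section 4.1.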
After cleaning, thedataset comprised 134 countries and 128 variables.3.2.2 Data Scaling using Min-Max Scaler. To ensure consistencyand prevent any single feature from dominating the analysis, wescaled the data using the Min-Max Scaler [6]. This scaling techniquetransformed the data to a predefined range of 0 to 1 by subtract-ing the minimum value and dividing by the range. This processnormalized the data within the [0, 1] range.3.3 Clustering3.3.1 Linkage Matrix. Next, we computed the linkage matrix usingthe linkage function from the scipy.cluster.hierarchy module. Thelinkage matrix represents the hierarchical clustering structure ofthe data based on pairwise distance calculations.3.3.2 Creating a Dendrogram using Ward’s Method. We employedWard’s method to construct a dendrogram, which visually displaysthe hierarchical relationships among the data points [24]. Ward’smethod minimizes the total within-cluster variance at each step ofdendrogram creation. The resulting dendrogram exhibited hierar-chical clustering patterns from a distance scale of 0 to 27, aiding inunderstanding the grouping patterns within the data (see Fig. 1).3.3.3 Determining the Number of Clusters using fcluster. The num-ber of clusters was determined by assigning data points to clustersbased on a given threshold using the fcluster function. A thresholdvalue of 5 was chosen to define the clusters within the dataset. Thefcluster function, with the specified threshold, provided the clusterassignments for each data point. The above threshold resulted in 9clusters.3.3.4 Evaluation Metrics for Each Cluster: To assess the qualityof the clustering results and evaluate the fit of each data point toits assigned cluster, we calculated the Silhouette score for eachcluster. The Silhouette score measures both the cohesion withinHierarchical Clustering and Multivariate Forecasting for Health Econometrics epiDAMIK @ SIGKDD Workshop, 2023Figure 1: Linkage matrix of nine clusters for the countries in a dendrogrameach cluster and the separation between clusters [32, 25]. The scorewas calculated using equation 1.Silhoutte =Í bi−aimax(ai,bi)n(1)where,aiis the average distance between each sample for i=1,2,3,...n and all other points in its cluster. For each other cluster inthe dataset, the average distance between the sample and all pointsin that cluster is noted and the minimum of these distances is b.nis the total number of samples. To calculate per cluster Silhouettescore,arepresents the average distance between the data pointand other data points within the same cluster and brepresents theaverage distance between the data point and the data points in thenearest neighboring cluster.The Silhouette score ranges from -1 to 1, with a higher scoreindicating better clustering results. A score close to 1 signifies well-separated clusters, while a score close to -1 suggests overlapping orincorrectly assigned clusters. The average silhouette score of all datapoints within the cluster was calculated to obtain the silhouettescore for each cluster. Based on the silhouette score and moreattainable count of the cluster, cluster-8 was chosen for furtheranalysis of time series forecasting.3.3.5 Using hierarchical clustering over other clustering methods:We chose hierarchical clustering using Ward’s method for our anal-ysis of the health statistics and nutrition dataset. Hierarchical clus-tering allows us to explore the data in a hierarchical structure,capturing both global and local patterns of similarity. 
It is well-suited for datasets with arbitrary cluster shapes and sizes, makingit suitable for analyzing health indicators across countries.3.4 Feature SelectionFollowing the clustering of the countries, our focus shifted to pin-pointing the most crucial characteristics. We accomplished this byimplementing the sklearn library to perform feature selection. Weevaluated 26 key features within the selected cluster, which rankedwithin the top percentile (Table 1).3.4.1 Feature Importance Analysis for Each Cluster. Centroids, orrepresentative data points for each cluster, were determined byaveraging the scaled data. The significance of each feature wasascertained by arranging the feature values in descending order.A threshold of 0.815 yielded fewer features and did not provide acomprehensive outlook for health predictions. As a result, we optedfor a threshold of 0.615, which allowed us to conduct a time seriesforecast with a broader feature set.3.5 Statistical TestsOur reference timeframe was set to the year 2000 for initiating thetime series forecast, and we examined the data for each indicatorwithin the clustered countries. The Kruskal Wallis non-parametrictest served as an effective method for determining value signifi-cance [36]. We utilized this test to discern statistically significantdiscrepancies among the indicators’ values across different coun-tries. After projecting the values for the next decade (2022-2031), werepeated the statistical test on these forecasted values to highlightsignificant differences between countries.3.6 Time-Series Forecasting3.6.1 Data Processing for Time-Series Analysis. Several factors wereconsidered when preparing this data for modeling.Selection of Time Frame: To forecast future health statistics forthe clustered countries, we opted for the most recent data to trainthe multivariate Prophet model. Our dataset encompassed healthdata from 1960 to 2021, but for our purposes, we narrowed thetimeframe to 2000 to 2021. This eliminated the need for imputingdata from distant years.Reduction of Features: The initial feature importance analysisidentified 26 features for the study. However, two features (Cause ofdeath, by non-communicable diseases (% of total) and Internationalmigrant stock (% of population)) had a high percentage of missingvalues across all clustered countries, accounting for up to 81.82%of total data. That is why excluded these indicators and kept 24features.Imputation of Time-series Data: We identified missing valueswithin our set of 26 features, necessitating imputation for a completetime-series dataset. We used Naïve forecasting to fill in the missingdata for the years from 2000 to 2021. 
If a specific year's data was missing for a particular country's indicator, we filled the gap using the preceding year's data for that same indicator. This resulted in a complete time-series dataset with 24 features for five countries.

Table 1: FEATURE IMPORTANCE FOR CLUSTER
(# | Indicator Name | Indicator Code | Feature Importance Value)
1 | People using at least basic sanitation services (% of population)§ | SH.STA.BASS.ZS | 0.9743
2 | Immunization, measles (% of children ages 12-23 months) | SH.IMM.MEAS | 0.9606
3 | People using at least basic drinking water services (% of population) | SH.H2O.BASW.ZS | 0.9585
4 | Immunization, DPT (% of children ages 12-23 months)§ | SH.IMM.IDPT | 0.9257
5 | Survival to age 65, male (% of cohort) | SP.DYN.TO65.MA.ZS | 0.8753
6 | Survival to age 65, female (% of cohort)† | SP.DYN.TO65.FE.ZS | 0.8752
7 | Population ages 25-29, male (% of male population) | SP.POP.2529.MA.5Y | 0.8583
8 | Population ages 20-24, female (% of female population) | SP.POP.2024.FE.5Y | 0.8437
9 | Life expectancy at birth, total (years)† | SP.DYN.LE00.IN | 0.8227
10 | Population ages 25-29, female (% of female population)† | SP.POP.2529.FE.5Y | 0.8216
11 | Life expectancy at birth, female (years)† | SP.DYN.LE00.FE.IN | 0.7954
12 | Population ages 30-34, male (% of male population) | SP.POP.3034.MA.5Y | 0.7651
13 | Cause of death, by non-communicable diseases (% of total)∗ | SH.DTH.NCOM.ZS | 0.7567
14 | Population ages 20-24, male (% of male population)§ | SP.POP.2024.MA.5Y | 0.7527
15 | Population ages 30-34, female (% of female population) | SP.POP.3034.FE.5Y | 0.7318
16 | Population ages 15-64, male (% of male population) | SP.POP.1564.MA.ZS | 0.722
17 | Population ages 15-64 (% of total population)† | SP.POP.1564.TO.ZS | 0.7077
18 | Domestic general government health expenditure (% of current health expenditure) | SH.XPD.GHED.CH.ZS | 0.7007
19 | Population ages 35-39, male (% of male population) | SP.POP.3539.MA.5Y | 0.6914
20 | Population growth (annual %)¶ | SP.POP.GROW | 0.689
21 | International migrant stock (% of population)∗ | SM.POP.TOTL.ZS | 0.6842
22 | Population ages 05-09, female (% of female population) | SP.POP.0509.FE.5Y | 0.6734
23 | Population ages 10-14, female (% of female population)† | SP.POP.1014.FE.5Y | 0.6686
24 | Population ages 0-14, female (% of female population)† | SP.POP.0014.FE.ZS | 0.6615
25 | Population, male (% of total population)† | SP.POP.TOTL.MA.ZS | 0.6595
26 | Population ages 15-19, female (% of female population)† | SP.POP.1519.FE.5Y | 0.6388
∗ Removed because 81.82% of its values were missing from 2000 to 2021.
† Removed because of high correlation with other important feature(s) that ranked higher in feature importance.
§ Removed because of poor predictions from the univariate Prophet model; not used in multivariate model training.
¶ Removed because it contains negative values in some years, so the log-transform scaling could not be applied; excluded from the forecasting.

Logarithmic Scaling on Time-series Data: Prior to forecasting, we performed a logarithmic transformation for data scaling and reverted to the original values for performance measurement. Although the Min-Max scaling algorithm was used initially, we chose logarithmic scaling for the time series forecast. This decision was based on the lower error rate found with logarithmic scaling when returning to the original data [20].

3.6.2 Prophet Forecasting Model to Predict Indicator Values. Our approach to predicting the yearly indicator values for the clustered countries and important features involved multivariate modeling in Prophet.
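A minimal sketch of such a multivariate setup, using the Python prophet package's additional-regressor interface, is shown below. The file names and indicator choices are placeholders, and the future values of the regressors are assumed to come from univariate forecasts of each predictor (described later in this subsection); the authors' actual training code is in the repository linked at the end of this subsection.

import pandas as pd
from prophet import Prophet

# Assumed input: one row per year with a date column 'ds', the target indicator,
# and one column per additional indicator used as an external regressor.
train = pd.read_csv("qatar_indicators.csv", parse_dates=["ds"])      # hypothetical file

target = "SP.DYN.TO65.MA.ZS"          # e.g. survival to age 65, male
regressors = [c for c in train.columns if c not in ("ds", target)]

df = train.rename(columns={target: "y"})

m = Prophet(yearly_seasonality=False, weekly_seasonality=False, daily_seasonality=False)
for col in regressors:
    m.add_regressor(col)              # the extra f_i(t) terms of Equation (3) below
m.fit(df)

# The future frame must also contain regressor values for 2022-2031; here they are
# read from a file of univariate predictor forecasts (hypothetical).
predictor_paths = pd.read_csv("qatar_predictor_forecasts.csv", parse_dates=["ds"])
future = m.make_future_dataframe(periods=10, freq="YS")
future = future.merge(predictor_paths, on="ds", how="left")
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail(10))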
This is what enables "what-if" analysis for forecast-ing health indicators. If we simulate or forecast individual predictorindicators and guide policy, we can see the effects of those simula-tions on our final multivariate model. This is crucial to understandhow these indicators’ forecasts varied per country and whether theProphet model’s results were consistent for all clustered countries.Univariate Prophet Model. The univariate Prophet model focuseson forecasting a single time series taking into account the historicalvalues of the target variable and identifies patterns and trends tomake future predictions. The model captures seasonality ( s(t)),trend (g(t)), holiday effects ( h(t)) (if any) and error ( ε(t)) usingadditive regression components.y(t)=g(t)+s(t)+h(t)+ε(t) (2)In our work, we have used a Univariate Prophet model to forecastthe predictor values for the future. However, if existing econometricmodels of varied types are more suited for a particular indicator,then those can also be used. The univariate model for each predictorbuilt the future dataframe for the years 2022 to 2031 (10 years).Multivariate Prophet Model. The multivariate Prophet model ex-tends the univariate model by incorporating additional exogenousvariables or features as regressors that can influence the target vari-able. These additional exogenous variables ( f1(t),f2(t),...,f n(t))can be other time series data or external factors such as economicindicators. In this work, we have incorporated other indicators inthe health statistics data as regressors to predict specific indicatorsone by one. By including these variables, the model can capturetheir impact on the target variable and improve the accuracy ofpredictions.y(t)=g(t)+s(t)+h(t)+f1(t)+f2(t)+...+fn(t)+ε(t)(3)By incorporating relevant external factors, the multivariate modelcan capture additional information and dependencies that impactthe target variable. This can lead to more accurate and reliable pre-dictions. Including additional variables provides insights into theHierarchical Clustering and Multivariate Forecasting for Health Econometrics epiDAMIK @ SIGKDD Workshop, 2023factors driving the target variable’s behavior. It enables a better un-derstanding of the system’s relationships and dependencies amongdifferent variables. This also allows for customization based on thespecific requirements of the forecasting problem. But to incorpo-rate multivariate forecasting, we also found additional complexity,such as complex data preprocessing, feature selection, and potentialcorrelation considerations.The code to replicate this study can be found at:https://github.com/iupui-soic/WB-cluster-forecast.4 RESULTS4.1 ClusteringWith a distance threshold set at 5, our cluster dendrogram (Fig. 
1)presented nine (9) visually distinct clusters.The Silhouette score, a measure used to evaluate the clusters andthe countries within the nine clusters, is displayed in Table 2.Table 2: CLUSTERED COUNTRIES AND EVALUATION MET-RICCluster #ClusterSilhouetteScoreCountries1(European Countries)0.2914Bulgaria, Belarus, Czechia, Estonia,Croatia, Hungary, Lithuania,Latvia, Slovenia, Ukraine2(European,North American,Oceanian Countriesand Japan)0.4851Australia, Austria, Belgium, Canada,Switzerland, Germany, Denmark, Spain,Finland, France, United Kingdom, Greece,Ireland, Iceland, Italy, Japan, Luxembourg,Netherlands, Norway, New Zealand,Portugal, Sweden, United States3(East & West African,South Asian andOther Countries)0.4227Benin, Bangladesh, Congo, Comoros,Eritrea, Ghana, Gambia, Haiti,Cambodia, Madagascar, Mauritania,Nepal, Pakistan, Senegal, Togo, Yemen4(Southern AfricanCountries)0.2484 Botswana, Lesotho, Namibia, Eswatini5(African Countries)0.3309Burundi, Burkina Faso, Cameroon,Ethiopia, Kenya, Liberia, Mali,Mozambique, Malawi, Niger, Nigeria,Rwanda, Sierra Leone, Chad, Tanzania,Uganda, Zambia6(Ensemble ofCountries fromDifferent Regions)0.6693Albania, Argentina, Armenia, Bahamas,Bosnia and Herzegovina, Brazil, Barbados,Chile, Colombia, Costa Rica, Cuba, Cyprus,Georgia, Israel, Jamaica, Kazakhstan,Sri Lanka, Moldova, Malta, Mauritius,Panama, Singapore, Seychelles, Thailand,Uruguay7(Large EconomyCountries in Asia)0.5667 China, India8(Middle EasternCountries)0.6597 Bahrain, Kuwait, Oman, Qatar, Saudi Arabia9(Ensemble ofCountries fromDifferent Regions)0.3282Azerbaijan, Belize, Bolivia, Algeria, Ecuador,Egypt, Fiji, Guatemala, Guyana, Indonesia,Iran, Jordan, Kyrgyz Republic, Kiribati,Lebanon, Morocco, Maldives, Mexico,Myanmar, Mongolia, Malaysia, Peru,Philippines, Paraguay, Solomon Islands,El Salvador, Turkmenistan, Tonga,Tunisia, Uzbekistan, Vietnam, VanuatuFigure 2: Time-series Yearly Data and Future Forecasts for Qatarusing Univariate Prophet ModelFigure 3: Time-series Yearly Data and Future Forecasts for Qatarusing Multivariate Prophet Model4.2 Feature RelevanceWe analyzed correlations between the features. If an indicatordemonstrated a strong positive or negative correlation with anyother indicators in the dataset, we excluded it. We retained onlythose indicators that didn’t correlate highly with others. This pro-cess yielded 15 indicators out of the original 26 in Cluster-8 shownin Table 1.4.3 Time-Series ForecastingOur secondary objective was to apply a multivariate time seriesforecasting Prophet model to the significant indicators of the fivecountries within a cluster [35]. A preliminary statistical test high-lighted similarities in the indicators’ values for the year 2000.4.3.1 Outcome of Feature Reduction. 
Due to many missing values,we excluded two features identified through feature importance.We also removed nine indicators that exhibited a high correlationwith other significant features and one indicator that displayed neg-ative values, which was unsuitable for logarithmic transformation.Consequently, we proceeded with univariate forecasting for theremaining 14 indicators.epiDAMIK @ SIGKDD Workshop, 2023 Paddo, Afreen and PurkayasthaTable 3: ACCURACY METRICS FOR THE FORECASTED INDICATOR VALUES AMONG THE COUNTRIESIndicatorsRMSE MAPE R2Adjusted R2Prophet(Avg±SD)LSTM(Avg±SD)Prophet(Avg±SD)LSTM(Avg±SD)Prophet(Avg±SD)LSTM(Avg±SD)Prophet(Avg±SD)LSTM(Avg±SD)Population ages 30-34, male 0.0001 ±0.0001 0.5941±0.3203 0±0 0.0401±0.0259 1±0 0.5997±0.3132 1±0 0.5497±0.3523Population ages 30-34, female 0.0001 ±0.0001 0.2563±0.0961 0±0 0.0216±0.0109 1±0 0.6592±0.468 1±0 0.6166±0.5265Population ages 35-39, male 0.0002 ±0.0002 0.3445±0.1631 0±0 0.0259±0.0093 1±0 0.6581±0.2856 1±0 0.6154±0.3213Population ages 25-29, male 0.0059 ±0.0127 1.1566±0.7031 0.0006±0.0013 0.071±0.0374 1±0 0.6031±0.252 1±0.0001 0.5535±0.2835Population ages 20-24, female 0.0287 ±0.0637 0.4546±0.3414 0.0032±0.0072 0.0421±0.0381 0.9979±0.0046 0.5067±0.3441 0.9956±0.0097 0.445±0.3871Population ages 15-64, male 0.001 ±0.0012 1.2822±0.8086 0±0 0.0143±0.0092 1±0 0.4109±0.6021 1±0 0.3372±0.6774Population ages 05-09, female 0.0001 ±0.0001 0.5177±0.1904 0±0 0.0458±0.0212 1±0 0.0855±1.1177 1±0 -0.0288±1.2574Survival to age 65, male 0.001 ±0.0007 1.1749±0.7324 0±0 0.0125±0.0089 1±0 0.5497±0.6035 1±0 0.4935±0.6789Domestic general governmenthealth expenditure0.4999±0.498 2.3871±1.0832 0.0058±0.0059 0.0282±0.015 0.9681±0.0409 0.4775±0.2928 0.933±0.0859 0.4122±0.3294Immunization, measles 0.0009 ±0.0008 1.0328±0.6968 0±0 0.0086±0.0054 1±0 0.2123±0.2276 1±0 0.1139±0.256People using at least basicdrinking water services0.0008±0.0006 0.2138±0.3582 0±0 0.002±0.0035 0.7997±0.4471 0.5849±0.3723 0.5794±0.9388 0.533±0.41894.3.2 Statistical Testing on the Existing Indicator Values. We per-formed the Kruskal Wallis test on the values of the 15 indicators forthe countries within the clusters. The resulting p-values were allgreater than 0.05, suggesting no statistically significant differencesamong the values of the indicators within the clustered countries.Since these indicators demonstrated similar values across countries,we continued with time series forecasting.4.3.3 Univariate & Multivariate Prophet.Future Dataframe. Univariate Prophet modeling produced reli-able predictions for most indicators, yielding low RMSE & MAPEand betterR2value. However, three indicators demonstrated infe-riorR2values compared to others, leading us to exclude them fromthe multivariate models. These indicators were: Population ages20-24, male (% of male population), Immunization, DPT (% of chil-dren ages 12-23 months), and People using at least basic sanitationservices (% of the population).Future Forecasts. The multivariate Prophet model generated fore-casts for each of the 11 indicators under consideration. In eachforecast, the multivariate model included 10 additional regressorscorresponding to the other 10 indicators, serving as predictorsexcluding the target indicator. The accuracy metrics for the multi-variate models are detailed in Table 3. The univariate forecastingmodel predicted 15 indicators for a sample country (Qatar), andthe multivariate model predicted 11 indicators (see Fig. 2 and Fig.3 respectively). 
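The Kruskal-Wallis comparisons reported in Sections 4.3.2 and 4.3.4 can be reproduced along the following lines with scipy.stats.kruskal; the arrays below are dummy stand-ins for one indicator's yearly (or forecasted 2022-2031) values per country, not the study's data.

import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
# Dummy stand-ins: replace with one indicator's values for each clustered country.
samples = {c: rng.normal(loc=i, scale=0.5, size=10)
           for i, c in enumerate(["Bahrain", "Kuwait", "Oman", "Qatar", "Saudi Arabia"])}

stat, p_value = kruskal(*samples.values())
verdict = "significant" if p_value < 0.05 else "not significant"
print(f"H = {stat:.2f}, p = {p_value:.3g} -> difference across countries is {verdict}")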
Figures 2 and 3 illustrate the multivariate Prophet model's superior forecasting performance. The combined forecasts for the clustered countries (Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia) from the year 2000 to 2031 for all 11 indicators are illustrated in Fig. 4 with continuous error bar plots. The differences between the indicators in the future years can also be seen in Fig. 4.

4.3.4 Statistical Analysis on the Forecasting. The future forecasted indicator values also showed statistically significant differences (p<0.05) among the countries, highlighting that the forecasted trajectories of the countries might diverge in the future based on the already changing nature of the predictors. Using univariate forecasting, such modeling would not have been possible.

5 DISCUSSION
Health econometrics analyses have traditionally relied on cross-country surveys like the National Family Health Survey (NFHS) and the Demographic Health Survey (DHS). They often employ logistic regression and other statistical techniques for comparing countries [20, 33]. Among unsupervised statistical approaches, I-distance [11, 12] has been utilized for ranking purposes, including ranking countries based on health indicators. However, our study presents the potential of enhanced clustering machine learning techniques for managing multiple related variables, particularly for large datasets [21].

Notably, certain clusters, such as Cluster-4 and Cluster-8, display geographical and cultural similarities. The cluster linkage cutoff would need to be significantly lowered to establish more readily apparent similarities within each cluster. However, this could lead to fewer predictor indicators, affecting our features of importance. If we expand the indicators used in feature selection, we risk complicating the model and reducing its interpretability [16].

Other clustering algorithms, especially spectral clustering, while powerful in certain cases, may not always be the most appropriate choice. Spectral clustering operates on graph-theoretic principles and requires constructing a similarity matrix and computing eigenvectors, which can be computationally expensive and memory-intensive for larger datasets. Spectral clustering also contains a stochastic factor, which was avoided by using hierarchical clustering.

Given the size and nature of our dataset, hierarchical clustering with Ward's method proved to be a more scalable and efficient option. It aligns well with our goals of exploring hierarchical patterns and capturing diverse cluster shapes in the health and nutrition dataset. Hierarchical clustering also provided meaningful insights into the health indicators across countries. In addition, logarithmic scaling of the dataset yielded a lower mean squared error overall when predicting the future feature values, compared to Min-Max scaling.

While our models present robust and meaningful findings, they also highlight some challenges that need to be considered in future studies. A critical point is the trade-off between the granularity of clustering and the complexity of multivariate models. While deeper clustering might yield more nuanced insights, it can also reduce the number of predictor indicators and increase model complexity. This calls for a balanced approach to ensure the interpretability and practical utility of the models.

Additionally, our multivariate forecasting model is predicated on current and past trends.
The dynamic nature of health indicators and their susceptibility to various external factors, such as political changes, economic fluctuations, or global health crises, might alter these trends significantly. Future research must consider these potential disruptions and explore methods to account for such unpredictability.

Further, we could determine certain associations from the statistical differences among features identified after analysis and prediction with the multivariate model. Fig. 4i shows that Qatar's predicted health expenditure declines, and Fig. 4j indicates a parallel decline in immunization. Similar declines are seen in the female population in age groups of potential maternity (Fig. 4c and 4e). This supports the conclusion that our multivariate Prophet model captures the reliance of one feature on another for a country [22]. This can aid health assessment research associated with various indicators, such as the work by Amoatey et al. [3].

Recognizing these trends and connections could guide policymakers or health practitioners toward effective strategies for improving overall health outcomes. Moreover, our predictions consider various population age groups, offering a comprehensive perspective on health prospects [9]. Our study's application of multivariate forecasting allowed us to predict future health outcomes based on current trends and patterns. This model has allowed us to project possible trajectories for various health indicators in the Middle Eastern countries cluster, aiding long-term strategic health planning for the region. The associations identified between different features underline the interconnectedness of health outcomes, signaling the necessity for an integrated approach to healthcare policy.

Figure 4: Forecasts of each indicator for the five clustered countries. Panels: (a) Population ages 35-39, male (% of male population); (b) Population ages 30-34, male (% of male population); (c) Population ages 30-34, female (% of female population); (d) Population ages 25-29, male (% of male population); (e) Population ages 20-24, female (% of female population); (f) Population ages 15-64, male (% of male population); (g) Population ages 05-09, female (% of female population); (h) Survival to age 65, male (% of cohort); (i) Domestic general government health expenditure (% of current health expenditure); (j) Immunization, measles (% of children ages 12-23 months); (k) People using at least basic drinking water services (% of population). Blue forecast lines are for Bahrain, orange for Kuwait, green for Oman, red for Qatar, and purple for Saudi Arabia.

5.1 Limitations
This study has its limitations. Although we selected 26 indicators from the World Bank dataset's total of 128, not all could be incorporated into our multivariate prediction model. For example, the Population Growth indicator was excluded because it contained negative values incompatible with logarithmic transformation. However, our model's predictions could be significantly influenced by the inclusion of this indicator.
Similarly, other omitted indicators could haveoffered additional insights into overall health outcomes.5.2 Future WorkFuture work could involve constructing a more informative modelwith an expanded set of features or a larger cluster of countries.Techniques like Neural Prophet [37], DeepAR [30], or even simplermodels like Random Forest Regressor [10] could be explored. Alter-native approaches to constructing future data frames, such as AutoARIMA, could yield more reliable results.6 CONCLUSIONIn conclusion, our study has identified key factors influencing healthoutcomes in selected Gulf Cooperation Council (GCC) countries(Bahrain, Kuwait, Oman, Qatar, and Saudi Arabia). We highlightedthe importance of population wellness and age-specific strategiesin healthcare management and disease prevention. Our method in-volved data preprocessing, clustering using Ward’s method, featureselection, and time series forecasting with multivariate Prophet.This research provides a comprehensive approach to health dataanalysis, identifying crucial health outcome influencers, and deliv-ering actionable insights for policymakers and healthcare profes-sionals using machine learning and forecasting techniques.REFERENCES[1] Usman Ahmed, Jerry Chun-Wei Lin, and Gautam Srivastava. 2023. Multivariatetime-series sensor vital sign forecasting of cardiovascular and chronic respira-tory diseases. Sustainable Computing: Informatics and Systems , 38, 100868.[2] Abdelaziz Abdelmegid ALI and Mohamed Noureldin SAYED. 2020. Determi-nants of healthcare expenditures in gcc countries: a panel data analysis. TheJournal of Asian Finance, Economics and Business (JAFEB) , 7, 8, 705–714.[3] Patrick Amoatey, Hamid Omidvarborna, Mahad Said Baawain, Abdullah Al-Mamun, Aynul Bari, and Warren B Kindzierski. 2020. Association betweenhuman health and indoor air pollution in the gulf cooperation council (gcc)countries: a review. Reviews on Environmental Health , 35, 2, 157–171.[4] Ama G Ampofo and Emmanuel B Boateng. 2020. Beyond 2020: modellingobesity and diabetes prevalence. Diabetes research and clinical practice , 167,108362.[5] Pınar YALÇIN BALÇIK, Şenol DEMİRCİ, and Murat KONCA. 2021. Comparisonof european countries’ health indicators and health expenditures by clusteringanalysis. Ömer Halisdemir Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi ,14, 2, 365–377.[6] Ekaba Bisong and Ekaba Bisong. 2019. Introduction to scikit-learn. BuildingMachine Learning and Deep Learning Models on Google Cloud Platform: AComprehensive Guide for Beginners , 215–229.[7] Partha Deb, Edward C Norton, and Willard G Manning. 2017. Health economet-rics using Stata . Vol. 3. Stata press College Station, TX.[8] Tristan de Boysson, Rabih Khouri, and Gregory Garnier. [n. d.] A HealthcarePrescription for the GCC focusing on a few key areas can create winningscenarios for governments, payers, providers and patients. https://www.bain.com/insights/a-healthcare-prescription-for-the-gcc/. Accessed: 2023-06-07. ().[9] Theresa Diaz et al. 2021. A call for standardised age-disaggregated health data.The Lancet Healthy Longevity , 2, 7, e436–e443.[10] Aurelien Geron. 2022. Hands-on machine learning with Scikit-Learn, Keras, andTensorFlow . " O’Reilly Media, Inc.".[11] Branislav Ivanović. 1974. A method of establishing a list of development indi-cators. Economic Analysis , 8, 1-2, 52–64.[12] Veljko Milojko Jeremic and Zoran Radojicic. 2010. A new approach in theevaluation of team chess championships rankings. 
Journal of QuantitativeAnalysis in Sports , 6, 3.[13] Maram Gamal Katoue, Arcadio A Cerda, Leidy Y Garcıa, and Mihajlo Jakovljevic.2022. Healthcare system development in the middle east and north africa region:challenges, endeavors and prospective opportunities. Frontiers in Public Health ,10, 4937.[14] Fairuz Bilquis Khan and Antika Noor. 2021. Prediction and classification ofhuman development index using machine learning techniques. In 2021 5th In-ternational Conference on Electrical Information and Communication Technology(EICT) . IEEE, 1–6.[15] Tawfiq Khoja, Salman Rawaf, Waris Qidwai, David Rawaf, Kashmira Nanji,Aisha Hamad, and Tawfik Khoja. 2017. Health care in gulf cooperation councilcountries: a review of challenges and opportunities. Cureus , 9, 8.[16] Jennifer Koran. 2020. Indicators per factor in confirmatory factor analysis:more is not always better. Structural Equation Modeling: A MultidisciplinaryJournal , 27, 5, 765–772.[17] Thomas Lefèvre, Claire Rondet, Isabelle Parizot, and Pierre Chauvin. 2014.Applying multivariate clustering techniques to health data: the 4 types ofhealthcare utilization in the paris metropolitan area. PloS one , 9, 12, e115064.[18] Susanna Levantesi, Andrea Nigri, Gabriella Piscopo, and Alessandro Spelta.2023. Multi-country clustering-based forecasting of healthy life expectancy.Quality & Quantity , 1–27.[19] Ahmed Mandil, Monique Chaaya, and Dahlia Saab. 2013. Health status, epi-demiological profile and prospects: eastern mediterranean region. Internationaljournal of epidemiology , 42, 2, 616–626.[20] Willard G Manning and John Mullahy. 2001. Estimating log models: to trans-form or not to transform? Journal of health economics , 20, 4, 461–494.[21] Nemanja Milenkovic, Jovanka Vukmirovic, Milica Bulajic, and Zoran Radojicic.2014. A multivariate approach in measuring socio-economic development ofmena countries. Economic Modelling , 38, 604–608.[22] Ranjan Kumar Mohanty and Deepak Behera. 2020. How effective is publichealth care expenditure in improving health outcome? an empirical evidencefrom the indian states. Work Pap , 1–29.[23] Katherine A Muldoon, Lindsay P Galway, Maya Nakajima, Steve Kanters,Robert S Hogg, Eran Bendavid, and Edward J Mills. 2011. Health system deter-minants of infant, child and maternal mortality: a cross-sectional study of unmember countries. Globalization and health , 7, 1, 1–10.[24] Fionn Murtagh and Pierre Legendre. 2014. Ward’s hierarchical agglomerativeclustering method: which algorithms implement ward’s criterion? Journal ofclassification , 31, 274–295.[25] Godwin Ogbuabor and FN Ugwoke. 2018. Clustering algorithm for a healthcaredataset using silhouette score value. Int. J. Comput. Sci. Inf. Technol , 10, 2, 27–37.[26] World Health Organization. 2023. World health statistics 2023: monitoring healthfor the SDGs, sustainable development goals . World Health Organization.[27] Dorian Proksch, Julia Busch-Casler, Marcus Max Haberstroh, and AndreasPinkwart. 2019. National health innovation systems: clustering the oecd coun-tries by innovative output in healthcare using a multi indicator approach.Research Policy , 48, 1, 169–179.[28] Firas Raad. 2015. Shaping healthier societies and building higher performinghealth systems in the gcc countries. Washington DC: World Bank Group .[29] Enayetur Raheem, Jahidur Rahman Khan, and Mohammad Sorowar Hossain.2019. Regional disparities in maternal and child health indicators: cluster anal-ysis of districts in bangladesh. 
PLoS One , 14, 2, e0210697.[30] David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. 2020.Deepar: probabilistic forecasting with autoregressive recurrent networks. In-ternational Journal of Forecasting , 36, 3, 1181–1191.[31] UN ESCWA Secretariat et al. 2007. The millennium development goals in thearab region 2007: a youth lens.[32] Ketan Rajshekhar Shahapure and Charles Nicholas. 2020. Cluster quality anal-ysis using silhouette score. In 2020 IEEE 7th international conference on datascience and advanced analytics (DSAA) . IEEE, 747–748.[33] Liming Shao, Yiting Wang, Xuhui Wang, Lu Ji, and Rui Huang. 2022. Factorsassociated with health insurance ownership among women of reproductiveage: a multicountry study in sub-saharan africa. Plos one , 17, 4, e0264377.[34] Michael Sturm, Jan Strasky, Petra Adolf, and Dominik Peschel. 2008. The gulfcooperation council countries-economic structures, recent developments androle in the global economy. ECB occasional paper , 92.[35] Sean J Taylor and Benjamin Letham. 2018. Forecasting at scale. The AmericanStatistician , 72, 1, 37–45.[36] Elvar Theodorsson-Norheim. 1986. Kruskal-wallis test: basic computer pro-gram to perform nonparametric one-way analysis of variance and multiplecomparisons on ranks of several independent samples. Computer methods andprograms in biomedicine , 23, 1, 57–62.[37] Oskar Triebe, Hansika Hewamalage, Polina Pilyugina, Nikolay Laptev, ChristophBergmeir, and Ram Rajagopal. 2021. Neuralprophet: explainable forecasting atscale. arXiv preprint arXiv:2111.15397 . |
BNU_N-7EIR | Pandemic Data Collection, Management, Analysis andDecision Support:A Large Urban University RetrospectiveNamrata Banerjibanerji.8@osu.eduThe Ohio State UniversityColumbus, Ohio, USASteve Changschang@osc.eduOhio Supercomputer CenterColumbus, Ohio, USAAndrew Perraultperrault.17@osu.eduThe Ohio State UniversityColumbus, Ohio, USATanya Y. Berger-Wolfberger-wolf.1@osu.eduThe Ohio State UniversityColumbus, Ohio, USAMikkel Quamquam.7@osu.eduThe Ohio State UniversityColumbus, Ohio, USAFigure 1. Archived OSU Safe & Healthy COVID-19 Dashboard for November 2, 2020AbstractThe COVID-19 pandemic has disrupted the world. Duringthis crisis, data has emerged as a critical resource for un-derstanding, monitoring, and mitigating the impact of thedisease. We present The Ohio State University’s data-drivenframework for comprehensive monitoring of the COVID-19pandemic. We discuss the challenges associated with datacollection and investigate the roles and limitations of dataPermission to make digital or hard copies of part or all of this work forpersonal or classroom use is granted without fee provided that copies arenot made or distributed for profit or commercial advantage and that copiesbear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contactthe owner/author(s).epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA©2023 Copyright held by the owner/author(s).analysis in supporting intervention choice and implemen-tation strategies amid the complexities of the pandemic asit unfolded. Balancing privacy, consent, and transparencyand ensuring the responsible handling of sensitive infor-mation is crucial in maintaining public trust. We examineprivacy-preserving techniques, ethical frameworks, and legalregulations aimed at safeguarding individuals’ rights whileharnessing the power of data. In our experience, conscien-tious data architecture provided a foundation for meaningfulethical applications of data products, which not only helpedmitigate the current crisis, but also can provide valuable in-sights for better addressing future public health emergencies.CCS Concepts: •Information systems →Database ad-ministration ;•Applied computing →Health care infor-mation systems .Keywords: datasets, public health, data management, ethicsepiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USA Namrata Banerji, Steve Chang, Andrew Perrault, Tanya Y. Berger-Wolf, and Mikkel QuamACM Reference Format:Namrata Banerji, Steve Chang, Andrew Perrault, Tanya Y. Berger-Wolf, and Mikkel Quam. 2023. Pandemic Data Collection, Man-agement, Analysis and Decision Support: A Large Urban Univer-sity Retrospective. In epiDAMIK 2023: 6th epiDAMIK ACM SIGKDDInternational Workshop on Epidemiology meets Data Mining andKnowledge Discovery, August 7, 2023, Long Beach, CA, USA. ACM,New York, NY, USA, 8 pages.1 IntroductionThe onset of the COVID-19 pandemic in early 2020 was oneof the most significant and life changing events for everyoneon the planet, impacting everything from small businessesto entire countries. In case of educational institutions, the in-definite suspension of classes, upending of every traditionalaspect of academic and student life, and the transition tovirtual education was stressful for students, staff, and facultyalike. 
The Ohio State University (OSU), a large urban edu-cational institution, undertook a massive policy response tosupport the continuing function of the university by moni-toring and managing the dynamics of the pandemic on andaround its campuses. Putting together a coalition of epidemi-ologists, data scientists, public health policy makers wasonly the first step of what shaped up to be at least a threeyear marathon. Data was at the center of the whole process,both as the decision enabler and as the product of many ofthe contributing efforts. To make data actionable requiredthe work of many teams and several iterations of cleaning,analysis and inference, and visualization. In this paper, wepresent the overall data-focused aspects of the process, high-lighting the achievements and the hindrances, as well asthe major takeaways, so that we are better prepared for fu-ture public health emergencies or other large scale collectiveresponses. This manuscript, besides serving as a piece ofinstitutional memory, communicates in detail the variousobstacles encountered in the handling of the mammoth datafor the data science community to be aware of. Among themain takeaways we consider the effectiveness of the datadriven approaches for managing the pandemic response, theneed for an institutional data infrastructure, and the impor-tance of a well organized team of experts and professionalsworking together towards a well-defined goal.2 OverviewThe Ohio State University stood up the Comprehensive Mon-itoring Team (CMT) [ 4] to include a framework of supportfor data driven decisions for pandemic management, includ-ing robust case finding (via serial mass administration ofindividual PCR tests with rapid in-house processing), lo-cally administered isolation of cases, contact tracing andquarantine of close contacts, as well as data integration, anal-ysis, modelling, risk evaluation, policy recommendations,and intervention implementation based upon knowledge de-rived from individual case management, subsequent viral(genomic) sequencing, large scale syndromic surveillanceand evidence of environmental (wastewater and dust) shed-ding [ 6,12,14,15]. Here we present the core of the datacomponent of this system that integrated data from varioustesting centers, conducted daily analyses, and representeddata in formats usable by the leadership to support bothindividual level contact tracing and the university’s policyresponse to the public health emergency. In the coming sec-tions, we discuss the goal of setting up such a system, theimplementation pipeline, data sources and some of the chal-lenges and takeaways.3 GoalsBuilding and maintaining such a huge framework and em-ploying a whole workforce including faculty, students, health-care workers consumes university resources at a large scale.The goals were the result of several rapid iterations of con-vergent conversations between the university administrationand members of the CMT, as well as the consultations withexternal experts. The specific aims of the data componentsof the framework were as follows:•Tracking the positivity rate. Positivity rate or testingpositivity rate, defined as the percentage of tests reportedthat are positive [ 10], emerged early in the pandemic asthe agreed upon indicator of the state of the populationand the basis for comparing different populations [ 9]. 
Weused the positivity rate, throughout the monitoring processdue to a number of reasons, one of them being that thispercentage (sometimes a fraction) was the most expressiveand conveyed a more complete story than other measuressuch as absolute number of positive cases. It is true that100% of the university population was not being tested,because there were exemptions (medical and otherwise)and non-compliants, but we had the data necessary to de-termine exactly what fraction of the population was beingtested. This was the best metric that we could monitorfrom the data and information available to us at the time,and it never became a cause for concern.•Contact tracing. Removal of positive and potentially pos-itive cases from the population is key for suppressing thespread of the virus [ 8,17]. It was necessary to providecontact information for people who tested positive and toidentify and contact their close contacts in order to isolateand quarantine them, respectively.•Understanding the micro trends and risks based onevents. To understand the dynamics, the risks, and theimplications of the pandemic for various subpopulations itwas necessary to provide the ability to zoom in on specifictime intervals and subgroups in the data. Examples of thequestions asked include: How does fall break or Halloweenbehaviour change/impact infection rates? Is there an in-creased risk of students in a 4-person suite over a 2-personPandemic Data Collection, Management, Analysis and Decision Support:A Large Urban University Retrospective epiDAMIK 2023, Aug 7, 2023, Long Beach, CA, USAdorm room? How do the risks associated with in-personclasses compare with hybrid or remote classes?•Supporting daily policy decisions of a large urbanuniversity. Daily decisions supported by data includedthe choice of a testing strategy and protocol, transition tohybrid vs online only classes, occupancy in classrooms,vaccination and masking requirements, etc. Having accessto the right data was essential. The testing protocol [ 3,16] was more strict in the early days of the pandemic,requiring all students who live in residence halls or whohave at least one in-person class to test at least once everyweek. The requirements were relaxed in the subsequentsemesters. Testing mandates were also in place aroundholidays, for example, students were required to test beforea Thanksgiving break and after. The WiFi data was oftenutilized to get a sense of how many students were stillresiding in the dorms over the break, and how many wenthome.•Reducing burden in the wider population. OSU Colum-bus campus is a large urban campus with highly permeableboundary in the center of a city. In order to contain thepandemic, the infection rates needed to be controlled bothon and around campus. Moreover, the university sought tomitigate the export of infections to communities beyond itscampuses. College students mix with the city populationand visit their family over academic breaks, potentially in-creasing the risk of transmission to vulnerable communitymembers. Recommending and at times requiring testingbefore the academic breaks was one such measure takento reduce the burden on vulnerable immuno-compromisedpopulation outside the university.4 ImplementationOSU has 68,000 students, 12,000 of which reside in residencehalls during a regular year. During the pandemic, about 8,000students were in residence halls and were required to testweekly. Additional students, faculty, and staff were testingvoluntarily. 
4 Implementation

OSU has 68,000 students, 12,000 of whom reside in residence halls during a regular year. During the pandemic, about 8,000 students were in residence halls and were required to test weekly. Additional students, faculty, and staff were testing voluntarily. At its peak, more than 30,000 tests per week were processed.

Multiple teams across Information Technology support, Student Life, the Translational Data Analytics Institute (TDAI), the Infectious Disease Institute (IDI), University Medical Centers, the College of Public Health, and many more were responsible for standing up a system that would be in place for at least the next 3 years. The data environment was secure and flexible, allowing for dynamic data definition and integration of data from at least 56 sources when it was introduced. (The number of data sources grew to over 100 by the end of 2022.) Initial data sources included testing data together with administrative data: student information, residence and permanent addresses, demographics, class registration, residence layout, class and college affiliations, WiFi access point information, and much more. The pipeline is illustrated in Figure 2 and is described briefly below.

• Primary test data was transmitted into the internal secure data environment via electronic file transfer multiple times a day.
• Additional attributes from other internal OSU systems (Identity Management (IDM), Student Information Systems (SIS), Student Life, etc.) were preloaded and updated according to the system's change protocol (e.g., each semester).
• Test results and internal data were combined into a cohesive reusable dataset (AKA the "gold table").
• Analysts and dashboard builders utilized a common source for all reports and visualizations.
• Data was also sent to Helpspot/Salesforce to support case investigation and contact tracing efforts.

4.1 Data description and daily analysis

Among the 50+ tables and views that were maintained on AWS, there were 10-12 datasets, described below, that were most frequently accessed for daily analysis reports.

• 'Gold' dataset of people: This view is derived from multiple tables that contain individuals' unique identifiers, demographic information such as gender, race, ethnicity, and age, home and campus addresses, affiliation with the university, affiliation with an OSU campus, indicators of whether they are on or off campus, student housing residence, etc. There are roughly 2.5 million entries in this dataset, with updates at regular time intervals for changing affiliations, addresses, and other variables.

• 'Gold' dataset of tests: Similar to the gold person table, this is also a derived view of data on tests administered by the university that combines variables such as test provider name, test administered time, test result time, test result, type of test conducted, etc. It also contained some of the demographic information and addresses so that quick results could be obtained by running simple queries, without joining multiple tables.

• Dataset on off campus residence housing: This dataset contains information on which organizations individuals are members of, whether they are active members, whether they live in the organization's housing, etc. This was a particularly useful dataset at the beginning of the pandemic, as many outbreaks occurred in off-campus residence houses, which were analyzed for patterns [13].

• Dataset on contact tracing: Each actionable positive test result generated a ticket, which was entered into a SalesForce(TM) dataset of tickets. The metadata associated with each ticket included a unique ticket identifier, the person whose close contact this is, the person who is the close contact, both their information, the time and result of the test, whether that person had symptoms, whether that person is an OSU affiliate, etc.
This dataset was important throughout the pandemic, since these tests and contacts were the focus of most of the analyses. Also, this dataset contained data on positive tests even if they were not present in the gold test data table. This is because, while the gold table only recorded tests administered by the university, the SalesForce(TM) tickets dataset contained information on other tests, some from outside the university, as long as they were positive. This dataset was thus a good source for the absolute number of positives in the university community, but not very good for computing rates, due to the absence of a denominator.

Figure 2. Data flow in the OSU COVID-19 monitoring pipeline.

• Datasets on class enrollment: When the university reopened for the Fall after the summer of 2020, many classes were online, some were hybrid, and few were in-person. It was important to know whether there was additional risk of infection for students enrolled in classes conducted in person, and decisions had to be made to combat the risk and spread of infections. The class enrollment datasets were key in this effort.

• Datasets on vaccination: Two datasets were maintained that contained vaccination information, one for students and one for employees (including staff). Although containing essentially the same information, the two were structured differently. The table for students contained two date variables, one denoting the date a dose was received and the other indicating the date when the individual became fully vaccinated according to CDC guidelines. It also had variables corresponding to whether the individual had a vaccination exemption, whether the dose was CDC approved, the CDC code (e.g., 208 for Pfizer) [2], whether the shot was a booster, etc. The employee vaccination table, on the other hand, contained columns for the first vaccination date, second vaccination date, up to the seventh vaccination date, and the provider information for each, in addition to the exemption and booster indications. Thus, the data analysis needed to produce the same results from the two tables had to be different.

The initial daily analysis included a breakdown of test positivity rate across residence halls, demographics, majors, and campuses. This was for internal consumption, pattern identification, and insight derivation. Much of this data and the derived analysis was private and was not made public. The results that did make it to the dashboard [3], as shown in Figure 1, were aggregate and summary numbers: the reproduction number, which is a standard epidemiological metric [7], the daily number of cases, the 7-day average, etc.¹ Identification of close contacts of students residing in dorms was a large part of the daily analysis, and the gold datasets were utilized to that end to produce a list of roommates and suitemates (a minimal sketch of this step is given below). A concise description of the analysis performed was first published in an initial report [4] in October 2020 and updated in a second report [5] in March 2021 by the CMT.

¹ The dashboard was awarded an A+ rating and was selected as the best COVID-19 university dashboard by the "We Rate Covid Dashboards" panel of academics [1].
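The following is a minimal, illustrative sketch of the roommate/suitemate step described above. The housing table and its columns ("person_id", "building", "room", "suite") are assumptions for the example, not the actual schema of the gold datasets.

```python
# Minimal sketch (illustrative only): flagging dorm roommates/suitemates of
# newly positive students as potential close contacts.
import pandas as pd

def dorm_close_contacts(positives: pd.DataFrame,
                        housing: pd.DataFrame) -> pd.DataFrame:
    """Return residents who share a room or suite with a positive case."""
    pos_housing = housing.merge(positives[["person_id"]], on="person_id")
    # Roommates: same building and room as a positive case.
    roommates = housing.merge(
        pos_housing[["building", "room"]].drop_duplicates(),
        on=["building", "room"],
    )
    # Suitemates: same building and suite as a positive case.
    suitemates = housing.merge(
        pos_housing[["building", "suite"]].drop_duplicates(),
        on=["building", "suite"],
    )
    contacts = pd.concat([roommates, suitemates]).drop_duplicates("person_id")
    # Exclude the positive cases themselves from the contact list.
    return contacts[~contacts["person_id"].isin(positives["person_id"])]
```

In practice this list was handed off to the case investigation and contact tracing workflow rather than acted on automatically.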
5 Challenges

The novelty, scale, and duration of the recent and ongoing pandemic were major challenges. Data collection, management, and analysis pipelines at this scale had no modern precedent and had to be designed as they were beginning to be used. Moreover, the timelines were drastically compressed and the requirements initially changed frequently. In addition, some areas, such as close contacts or attendance of events, lacked data collection, and some critical data streams, including off-campus testing, were initially completely absent. Further, like most teams around the world, we initially lacked a full understanding of how to translate the questions into data and how to prioritize the variables and the analysis for decision support, particularly in the context of human behavior. Below are some of the issues that posed significant challenges to the team.

5.1 Data cleaning

The data was collected from numerous sources, some of which involved manual entry and consequently had unavoidable human error. For example, a table of people in the database had the OSU unique identifier (name.#) as the primary key, and the table of test results was supposed to have the same identifier as a foreign key. Typographical errors or null values in this identifier column resulted in our inability to match a test to an individual, causing a non-negligible shift in the summary statistics. Once the problem had been identified, there was a joint effort to clean it up, combining more than four data streams and reducing the number of unidentified tests to a level that would not change the inference. Yet, a few individually unidentifiable entries remained in the datasets, albeit not enough to raise concern. Minimizing manual entry into data sources can reduce such issues considerably (a simple validation sketch is given below).
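A minimal sketch of the kind of identifier validation this entailed is shown below. The column name ("name_n", standing in for the name.# identifier) and table layouts are assumptions for the example, not the actual OSU schema.

```python
# Minimal sketch (assumed column names): flagging test records whose person
# identifier is missing or fails to match the people table, so they can be
# routed for manual review.
import pandas as pd

def flag_unmatched_tests(tests: pd.DataFrame,
                         people: pd.DataFrame) -> pd.DataFrame:
    """Return test rows whose identifier is null or unknown."""
    df = tests.copy()
    # Normalize the identifier: trim whitespace and lowercase, a common
    # source of spurious mismatches in manually entered keys.
    df["name_n"] = df["name_n"].astype("string").str.strip().str.lower()
    known = set(people["name_n"].astype("string").str.strip().str.lower())
    return df[df["name_n"].isna() | ~df["name_n"].isin(known)]

def unmatched_rate(tests: pd.DataFrame, people: pd.DataFrame) -> float:
    """Share of tests that cannot be attributed to a known person."""
    return len(flag_unmatched_tests(tests, people)) / max(len(tests), 1)
```

Tracking the unmatched rate over time is one simple way to verify that cleanup efforts actually reduce the shift in summary statistics.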
A similar problem was found in the table of employee vaccination records, with clearly wrong dates of doses. While most were due to errors, in some cases employees had actually been part of vaccination trials and had received a dose before any vaccine received emergency use authorization or approval for distribution to the general public. These cases were indistinguishable from the erroneous cases without careful manual investigation and knowledge of the regulatory frameworks and timing of numerous vaccine candidates from all over the world.

One of the challenges the team immediately encountered while using demographic data was that there were a number of similar datasets, curated by different organizations at OSU and used for different operational purposes. Repurposing these for COVID-19 demographic analysis required that specific datasets and methodologies be employed for consistency. Part of the human infrastructure that was critical here were experts in the use of these legacy datasets, who could explain what nuances may have been encoded in the data and help determine the least wrong datasets and methods to use. This investigation eventually led to the creation of the "gold" datasets, so named because they were the COVID project's gold standard for the demographics associated with an individual or test. These examples illustrate the need for expert data curation, close scrutiny of the analysis outputs that consumed these data sources, efforts to minimize manual data entry, and close collaboration with domain experts at every step.

5.2 Data storage, backup, documentation, and recovery

The volume of data generated by testing mandates as well as voluntary testing required careful consideration of large, yet quickly accessible and continuously backed up, data storage. The ability to look up prior data was critical for understanding trends and their dynamics, as well as for comparing the outcomes of past decisions. For continuously changing data, such as the daily updated test data, it is necessary to maintain regular snapshots, checkpoints, and versions. This aspect was not fully appreciated initially and required significant effort to redesign the data architecture. We maintained two 'gold' datasets, one corresponding to people and demographics and one corresponding to test metadata. These derived datasets were cleaned and organized to our standards and served as the basis of further analysis. This cut down on the work of individual analysts, since the cleaning and organization steps did not need to be repeated. The 'gold' data of people, consisting of faculty, staff, students, and everyone else affiliated in some way with the university, updates significantly every semester, overwriting previous data in the database (S3 environment). We would save a snapshot of the data every semester, but unfortunately the snapshots were initially taken towards the end of the semesters, when students had already started leaving campus. As a result, when we recently wanted to reconstruct a time series of positivity rates in residence halls, it differed from the original because we no longer have the correct denominator. Recovering this information is possible, but requires integration of other data sources, demanding significant investment of resources, effort, and time. The majority of the people who supported the CMT and were responsible for setting up the system are no longer working at OSU. Moreover, early in the reopening of the university, the primary focus was on managing the pandemic and bringing down the positivity rate, and detailed documentation was not prioritized. A sketch of a simple semester snapshot routine is given below.
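The following is a minimal sketch, under assumed paths and a Parquet-capable environment, of the kind of dated, immutable snapshot routine that would have preserved the per-semester denominator; it is not the actual OSU data architecture.

```python
# Minimal sketch (hypothetical paths): saving a dated, read-only snapshot of
# a "gold" table at a fixed point each semester, so later analyses can
# recover the population denominator as it stood at that time.
from typing import Optional
import datetime as dt
import pathlib
import pandas as pd

SNAPSHOT_ROOT = pathlib.Path("snapshots/gold_people")  # assumed location

def save_semester_snapshot(gold_people: pd.DataFrame,
                           label: Optional[str] = None) -> pathlib.Path:
    """Write a timestamped Parquet snapshot and return its path."""
    stamp = label or dt.date.today().isoformat()
    out_dir = SNAPSHOT_ROOT / stamp
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / "gold_people.parquet"
    gold_people.to_parquet(path, index=False)
    return path

def load_snapshot(stamp: str) -> pd.DataFrame:
    """Read back the snapshot taken under a given date label."""
    return pd.read_parquet(SNAPSHOT_ROOT / stamp / "gold_people.parquet")
```

The key design point is timing: snapshots should be taken early in the semester, while the population they describe is still on campus, and never overwritten.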
Mid-semester migration from a homegrown case data management solution to an outside vendor was a major issue that required significant investment and retraining, and we are continuing to deal with it today from a data and analysis perspective. Roughly from August 2020 to November 2020, our positive test (case) data was ingested, and case investigation/contact tracing notes were stored, in a secured instance of a HelpSpot database, integrating in some instances with REDCap surveys and pushing out to several communication platforms. Later we shifted to a Salesforce Health Cloud build, which accommodated future testing data variations and vaccine information, as well as some automatic reminder communications. The data had in theory been migrated from the old tables to the new ones, but user-generated heterogeneity and version control issues in the HelpSpot source data meant there continued to be gaps in the data ingested by Health Cloud (Salesforce), which do not have simple workarounds for analysis of all variables.

We maintain several tables for test information storage, but there are inconsistencies across those tables. More than one table exists mainly because we derived simpler versions of tables with many columns that are not relevant for day-to-day analysis. One of the (intermediate) mother tables recently had one of its most important columns (the test specimen collection time/date) dropped from an integration during an update, and it would have been fine to simply look it up in a derived or other related testing table had there not been major differences in the number of entries across those tables.

The IT organization at OSU, then known as the Office of the CIO (OCIO), had embarked on a project prior to the COVID epidemic to move OSU enterprise data off premises and onto Amazon Web Services (AWS). AWS was the obvious choice as the data storage platform, as much of the data was already present on the platform, and tools such as Amazon Athena provided a layer of data abstraction so that disparate datasets could be queried in a consistent manner. That OCIO project to house these data consistently was fortunate; it would otherwise have added an additional layer of processing to export and synthesize data from various legacy systems. The other major consideration is that there are significant costs to using a commercial cloud service. While these were covered in part by the OCIO project, the additional data storage for COVID data and the use of AWS tools such as Athena were incurred by the COVID project.

5.3 Data governance and ethical considerations

The university has a complex set of data governance regulations, as does individuals' private health information, whether used in healthcare or public health applications. While special authorization was granted to use some of the data in the pandemic emergency, security and privacy remained strict requirements. Each team member had training in handling secure and private data.

In addition to the standard data governance issues, dealing with high resolution personal data has its own set of ethical issues. Ultimately, the main question was: what is the benefit of using a particular data source or performing a particular analysis, and would it change the decisions or the pandemic dynamics? If so, was it necessary to use individual and identifiable data for decision making, or could aggregate or coded information have similar utility? For example, while it is within the rights of the university to use WiFi access point information to "follow" an individual or to understand who is within the same room, such information has a high 'icky factor' and should be used sparingly. Moreover, while it initially seemed that WiFi data would provide a good proxy for contact tracing, it turned out that the resolution of the data did not correspond well to the physical definition of a contact. Ultimately, it was decided to use WiFi data in aggregate to assess population movements rather than individuals' proximity to other individuals. For example, WiFi data was used to estimate the number of students leaving campus over the weekend or the number of students present in an "in person" classroom. Moreover, the aggregate trends proved to be much more robust than the individual-based analysis and were significantly less time consuming (a sketch of this kind of aggregate use is given below).
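As a minimal illustration of the aggregate use described above, the sketch below counts distinct devices per building per day rather than following any individual. The column names ("timestamp", "building", "device_id") are assumptions, not the actual access point log schema.

```python
# Minimal sketch (hypothetical column names): using WiFi access point logs in
# aggregate only, approximating daily building occupancy by counting distinct
# devices, with no tracking of individual movements.
import pandas as pd

def daily_building_occupancy(wifi_logs: pd.DataFrame) -> pd.DataFrame:
    """Approximate daily occupancy as distinct devices seen per building."""
    df = wifi_logs.copy()
    df["day"] = pd.to_datetime(df["timestamp"]).dt.date
    return (
        df.groupby(["building", "day"])["device_id"]
          .nunique()
          .rename("distinct_devices")
          .reset_index()
    )
```

Aggregating at this level answers the population-movement questions (e.g., how many students left campus over a break) while avoiding the privacy concerns of individual-level tracking.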
Additionally, adherence to the applicable statutory guidelines for case investigation, subsequent case management, and/or contact tracing may require some variation depending upon an individual's occupation, travel history, personal risk factors, immunocompetence, and vaccination status, which could include certain specific preexisting conditions, medications, clinical care received, viral (variant/sub-variant) lineage, and/or disease severity. However, specific individuals' health information related to their experience with COVID-19 would largely not meaningfully determine macro-level prevention policy or interventions in the university context independently from aggregate trends and the wider public health policy guidance, which are separately informed by individuals' public health, laboratory testing, and clinical health records. Therefore, sensitive individual level data, especially health data, were collected and subsequently shared only to the extent they would have 'meaningful use' within the data user groups' spheres of control, stated goals, and purview (i.e., healthcare providers had access to information relevant for managing patient care; public health authorities had access to information relevant to determining the specific application of disease management protocols for individuals and/or groups; occupational health, workplace, and student life safety personnel had limited access to information relevant to adherence with applicable disease prevention laws and policies aimed at risk reduction, such as adherence to testing, vaccination, and isolation/quarantine requirements in some instances).

6 Takeaways

6.1 Behavior over analytics

The main takeaway of our data-supported pandemic monitoring framework is the same as the main takeaway for dealing with the COVID-19 pandemic worldwide: ultimately, the success of the system hinges on modifiable human behavior rather than on the sophistication of the analysis. No improvement in the accuracy of the analysis of the effect of masking in a given setting (i.e., library, classroom, laboratory, or healthcare setting) is meaningful if people will not (continue to) comply with an indoor mask mandate. Similar limitations became apparent with both pharmaceutical and non-pharmaceutical interventions: even as evidence increasingly substantiated their benefits and new sub-variants emerged, the population's apparent risk tolerance grew and spread.

6.2 Communication is key

When working with a team this large, with people from vastly diverse backgrounds, communication between the teams becomes an essential component. A major part of the analysis was carried out by graduate student employees, who were sometimes not aware of things like the floor structure in dorms, testing protocols, vaccination mandates, etc., which were important analysis components. Similarly, the modelling team was involved in building risk models, models for testing strategy development, and other models that relied on domain knowledge outside of mathematics or computer science. Clearly, experts in every relevant domain (epidemiology, public health, student residence life, university logistics and operations, etc.)
need to be constant partners in the analysis.

6.3 Equity considerations and singling out demographic groups

When patterns appear to be emerging within a specific group or sub-demographic, there may be an equity-oriented opportunity for targeting or strengthening an intervention, but there may also be a bias in the observed signal. One group may in fact be more often in situations involving exposure to infectious persons, or engaged in riskier behavior than others, as we occasionally discovered from data analysis. However, the available policy-level changes may not have been feasible solutions and were not always ultimately enacted. What we started to see in the data raised questions about the ethics and trustworthiness of data-enabled interventions made without context or corroboration. Some solutions aimed at addressing one group's perceived or real deficiency in access to resources, or excessive exposure, could foster stigma or the loss of other resources in unanticipated ways. After careful consideration, it was agreed that singling out a group often added too little value, or could do more harm than good. In some cases, trends observed initially in one population or group were indicative of larger trends that could be addressed by policy shifts relevant to the whole community, which would both address the observed inequity and mitigate known unintended consequences.

6.4 Micropatterns significant, but not usable in hindsight

Reflection on the decisions made over the course of three years showed that the micropatterns and microtrends observed in the data had little to no effect on those decisions. Observations that a certain subgroup engaged in activities that increased the risk of spreading the infection did not prompt the authorities to shut down those activities, in many cases because it was either not cost effective or not ethical to do so. These data nuances did provide information, but it was not actionable. In retrospect, however, the information's main utility was in showing that no single critical subgroup was the key to the solution. The scale of the phenomenon did not lend itself to a single pathway of solution or a single target group. Patterns that we learned in settings like an early long term care facility were also observed later in dorms, sorority and fraternity houses, and athletics teams, and they led to better population level responses. A good example is the limitations of certain kinds of tests for transmission suppression. The Big10 testing program involved daily testing of athletes during their competition season, given that team members were often unable to mask and physically distance in some sports. Unfortunately, when transmission started to increase rapidly in late autumn 2020 as sports teams re-started their compressed seasons, even daily testing with rapid results was insufficient to suppress transmission, largely because the particular test used did not detect all infectious individuals immediately. By the time one tests positive on an antigen test, like those in use at the time, a person may already have been infected and infectious for a few days, potentially exposing others and continuing transmission chains. Antigen tests are useful for rapid diagnosis, particularly when symptomatic, but are not always well suited for detection early enough to reduce spread in a serial testing model.
OSU opted for developing and deploying swift, minimally invasive (saliva based), highly specific, highly sensitive PCR testing, shown to be able to detect pre-symptomatic and asymptomatic infections (eventually even processing results with its own PCR testing and sequencing lab capable of thousands of tests per day). Although not as fast as antigen tests, the average turnaround time was less than 24 hours during much of the semesters' most populated periods. This was a scenario where tracking a micropattern in a particular well-observed and well-resourced group gave us very good information about what and how we should be optimizing testing resources, and about working within their limitations for the larger university population.

6.5 Data infrastructure

The overall data infrastructure consists of cyberinfrastructure (compute, storage, networking, cloud and web services), information infrastructure (data and metadata management, search, archiving, cataloging, and digital services), and analytics infrastructure (data integration, harmonization, and analysis). The large volume of data collected, the collection rate, the distributed team setting, potential errors, inconsistencies and variations in reporting standards, and changing objectives all strained and challenged the existing data infrastructure at OSU and necessitated its expansion. Moreover, COVID-19 management provided a strong case study for, and emphasis on, the fact that data infrastructure integrates cyber-, information, and data services infrastructures through human infrastructure. Building the human infrastructure is both the most critical aspect of any data infrastructure and the hardest to implement. We have seen personnel migrate out of the team, and the university, and when that happens they take institutional knowledge with them. Replacing personnel in such a fast paced environment entails rigorous training that newer team members have to complete within a very short period of time. Even after onboarding, it takes significant time to bring them up to speed, which often creates a bottleneck.

6.6 Scale

The sheer volume of COVID-19 data generated from testing and vaccination overwhelmed the existing data management systems of the university as well as the state. Scaling up data infrastructure and analytical capabilities to handle large-scale data collection and analysis proved to be a significant challenge, but one that can definitely be overcome.

7 Comparison between similar systems in place nationwide

The COVID-19 pandemic was monitored worldwide, and any attempt to track rates or contain the outbreaks had to involve systems governing huge amounts of data. Among the enormous number of research papers utilizing pandemic data, very few discuss the nuances of the data collection and storage mechanisms deployed. For example, a paper [18] from the University of Michigan describes collecting environmental surveillance data in order to estimate infection risk. This direction of research and analysis was popular in many organizations, including OSU, and was a good means of estimating the risk of infection on campus from sources like dust and sewage water [6, 14, 15]. Another paper [11] discusses digital health research and tracking in general, but in the light of the pandemic and how it impacted practices.
Their concerns are very similar to ours, but unlike their generic view, we provide a complete account of a real experience, with the series of issues faced and tackled at an urban institution.

8 Conclusion

We hope that the COVID-19 pandemic was a one-off, unique event, never to be repeated. Yet, we should be prepared to respond to a similar event by learning from our experience. We hope that the OSU CMT work presented here can serve not only as a blueprint, but as a guide for considerations, priorities, and potential pitfalls, should a response at this scale ever be needed again.

Acknowledgments

We would like to acknowledge the work of the many people who have contributed to the effort of enabling the data driven approach to monitoring and managing the COVID-19 pandemic at the Ohio State University: the entire Comprehensive Monitoring Team (CMT), the Case Investigation and Contact Tracing Team, CMT student analysts, the CMT/IDI Modeling Team, the Applied Microbiology Services Lab, the Testing Operations Team, the Student Life Isolation and Quarantine Team, Student Health Services, Employee Health Services, local and state public health authorities, the dashboard developers, and the OTDI team, including D&A data engineers, the data governance team, network administrators, and enterprise security.

References

[1] A deeper dive into Ohio State's top-rated COVID-19 testing data dashboard. https://news.osu.edu/a-deeper-dive-into-ohio-states-top-rated-covid-19-testing-data-dashboard. Accessed July 31, 2023.
[2] IIS: HL7 Standard Code Set Mapping CVX to Vaccine Groups. https://www2.cdc.gov/vaccines/iis/iisstandards/vaccines.asp.
[3] Safe and Healthy Buckeyes COVID-19 Dashboard (archived). https://safeandhealthy.osu.edu/dashboard. Accessed July 31, 2023.
[4] Safe Campus Scientific Advisory Subgroup Recommendations. https://safeandhealthy.osu.edu/sites/default/files/2020/07/safe-campus_6.30.pdf. Accessed July 31, 2023.
[5] The Ohio State University Comprehensive Monitoring Team — Report 2. March 2, 2021. https://safeandhealthy.osu.edu/sites/default/files/2021/03/the_ohio_state_university_comprehensive_monitoring_team_-_report_2.pdf. Accessed July 31, 2023.
[6] Tracking COVID-19 with dust at The Ohio State University. https://sapac.illumina.com/company/news-center/feature-articles/tracking-covid-19-with-dust-at-the-ohio-state-university.html. Accessed July 31, 2023.
[7] Achaiah, N. C., Subbarajasetty, S. B., and Shetty, R. M. R0 and Re of COVID-19: Can we predict when the pandemic outbreak will be contained? Indian Journal of Critical Care Medicine 24, 11 (Nov. 2020), 1125–1127.
[8] Centers for Disease Control and Prevention. COVID-19 Overview and Infection Prevention and Control Priorities in non-U.S. Healthcare Settings. https://www.cdc.gov/coronavirus/2019-ncov/hcp/non-us-settings/overview/index.html.
[9] Dallal, A. A., Dallal, U. A., and Dallal, J. A. Positivity rate: an indicator for the spread of COVID-19. Current Medical Research and Opinion 37, 12 (2021), 2067–2076.
[10] Doraiswamy, S., Mamtani, R., and Cheema, S. An in-depth analysis of 10 epidemiological terminologies used in the context of COVID-19. SAGE Choice 50, 6 (Dec. 2021), 819–826.
[11] Dron, L., Kalatharan, V., Gupta, A., Haggstrom, J., Zariffa, N., Morris, A. D., Arora, P., and Park, J. Data capture and sharing in the COVID-19 pandemic: a cause for concern. The Lancet Digital Health 4, 10 (Oct. 2022), E748–E756.
[12] Dusen, J. V., LeBlanc, H., Renninger, N., Nastas, N., Panescu, J., Smith, J. W., Sovic, M.
G., Williams, A., Quam, M., Faith, S., and Dannemiller, K. Identification of SARS-CoV-2 variants in indoor dust. In Association of Environmental Engineering and Science Professors Research and Education Conference 2023 (2022).
[13] Krantz, M., Bleichrodt, A., and Quam, M. Housing diversity and SARS-CoV-2 transmission in a university setting. In Quantitative Methodology Center 2022 Conference: Why Quantitative Research Matters (2022).
[14] Renninger, N., Nastasi, N., Bope, A., Cochran, S. J., Haines, S. R., Balasubrahmaniam, N., Stuart, K., Bivins, A., Bibby, K., Hull, N. M., and Dannemiller, K. C. Indoor dust as a matrix for surveillance of COVID-19. ASM Journals 6, 2 (Apr. 2021).
[15] Wascher, M., Klaus, C., Alvarado, C., Bope, A., Panescu, J., Quam, M., Dannemiller, K., and Joseph, T. A mechanistic modeling and estimation framework for environmental pathogen surveillance. In Society of Mathematical Biology Meeting, Mini-Symposium (2022).
[16] Wascher, M., Schnell, P. M., Khudabukhsh, W. R., Quam, M., Tien, J. H., and Rempała, G. A. Monitoring SARS-CoV-2 transmission and prevalence in population under repeated testing. medRxiv (2021).
[17] World Health Organization. Clinical management of COVID-19. https://www.who.int/teams/health-care-readiness/covid-19.
[18] Zhang, X., Wu, J., Smith, L. M., Li, X., Yancey, O., Franzblau, A., Dvonch, J. T., Xi, C., and Neitzel, R. L. Monitoring SARS-CoV-2 in air and on surfaces and estimating infection risk in buildings and buses on a university campus. Journal of Exposure Science and Environmental Epidemiology 32 (2022), 751–758.