,Unnamed: 0.1,TweetID,AuthorID,AuthorName,Tweets,arxiv_link,Abstract,Title,Thread_length,Tweets_coarse,year,month,tweet_length
0,145,1494147269796540418,1128095824473276418,Kai Shu,"Identifying disinformation in a new domain is harder than an existing domain? Our recent accepted @TheWebConf paper on detecting cross-domain fake news with reinforcement learning. W/ Ahmadreza Mosallanezhad, @Mansoureh_K Michelle V. Mancenido, @liuhuan ",https://arxiv.org/abs/2202.08159,"With social media being a major force in information consumption, accelerated propagation of fake news has presented new challenges for platforms to distinguish between legitimate and fake news. Effective fake news detection is a non-trivial task due to the diverse nature of news domains and expensive annotation costs. In this work, we address the limitations of existing automated fake news detection models by incorporating auxiliary information (e.g., user comments and user-news interactions) into a novel reinforcement learning-based model called \textbf{RE}inforced \textbf{A}daptive \textbf{L}earning \textbf{F}ake \textbf{N}ews \textbf{D}etection (REAL-FND). REAL-FND exploits cross-domain and within-domain knowledge that makes it robust in a target domain, despite being trained in a different source domain. Extensive experiments on real-world datasets illustrate the effectiveness of the proposed model, especially when limited labeled data is available in the target domain. ",Domain Adaptive Fake News Detection via Reinforcement Learning,1,"['Identifying disinformation in a new domain is harder than an existing domain? Our recent accepted @TheWebConf paper on detecting cross-domain fake news with reinforcement learning.\nW/ Ahmadreza Mosallanezhad, @Mansoureh_K Michelle V. Mancenido, @liuhuan \n']",22,02,260
1,47,1240845694421721089,92182169,Ian Manchester,New paper (with Ray Wang and Roland Toth) introducing virtual control contraction metrics: Extends CCM to a wider class of problems. Generalises both local (gain scheduling) and global (exact linearisation) LPV control with stronger theoretical guarantees,https://arxiv.org/abs/2003.08513,"This paper proposes a novel approach to nonlinear state-feedback control design that has three main advantages: (i) it ensures exponential stability and $ \mathcal{L}_2 $-gain performance with respect to a user-defined set of reference trajectories, and (ii) it provides constructive conditions based on convex optimization and a path-integral-based control realization, and (iii) it is less restrictive than previous similar approaches. In the proposed approach, first a virtual representation of the nonlinear dynamics is constructed for which a behavioral (parameter-varying) embedding is generated. Then, by introducing a virtual control contraction metric, a convex control synthesis formulation is derived. Finally, a control realization with a virtual reference generator is computed, which is guaranteed to achieve exponential stability and $ \mathcal{L}_2 $-gain performance for all trajectories of the targeted reference behavior. Connections with the linear-parameter-varying (LPV) theory are also explored showing that the proposed methodology is a generalization of LPV state-feedback control in two aspects. First, it is a unified generalization of the two distinct categories of LPV control approaches: global and local methods. Second, it provides rigorous stability and performance guarantees when applied to the true nonlinear system, while such properties are not guaranteed for tracking control using LPV approaches. ","Virtual Control Contraction Metrics: Convex Nonlinear Feedback Design
via Behavioral Embedding",1,['New paper (with Ray Wang and Roland Toth) introducing virtual control contraction metrics:\n\nExtends CCM to a wider class of problems. Generalises both local (gain scheduling) and global (exact linearisation) LPV control with stronger theoretical guarantees'],20,03,262
2,124,1156466481833676800,22392129,Jasmijn Bastings,We released Joey NMT 🐨 a minimalist NMT toolkit for novices. Small & easy to understand code in @PyTorch with top quality translations. Paper includes a user study! #NLProc @StatNLP_HD @AmsterdamNLP #nmt @KreutzerJulia #pytorch GitHub: ,https://arxiv.org/abs/1907.12484,"We present Joey NMT, a minimalist neural machine translation toolkit based on PyTorch that is specifically designed for novices. Joey NMT provides many popular NMT features in a small and simple code base, so that novices can easily and quickly learn to use it and adapt it to their needs. Despite its focus on simplicity, Joey NMT supports classic architectures (RNNs, transformers), fast beam search, weight tying, and more, and achieves performance comparable to more complex toolkits on standard benchmarks. We evaluate the accessibility of our toolkit in a user study where novices with general knowledge about Pytorch and NMT and experts work through a self-contained Joey NMT tutorial, showing that novices perform almost as well as experts in a subsequent code quiz. Joey NMT is available at this https URL . ",Joey NMT: A Minimalist NMT Toolkit for Novices,2,"['We released Joey NMT 🐨 a minimalist NMT toolkit for novices. Small & easy to understand code in @PyTorch with top quality translations. Paper includes a user study! #NLProc @StatNLP_HD @AmsterdamNLP #nmt @KreutzerJulia #pytorch ', 'GitHub: https://t.co/JaBanwwbRQ']",19,07,256
3,218,1417851297764626432,356676252,John Regan,"On the @arxiv today () @Fabio_Pacucci , @marmezcua_astro and I looked at the active fraction of MBHs in dwarf galaxies. We find, using a physical model of MBH accretion, that the active fraction is between 20% and 30% for the most massive dwarfs,decreasing as the stellar mass decreases. Note that these are active fractions - the occupation fraction will be (much) higher! We also give a prediction for the probability of detecting a MBH in a given dwarf galaxy based on the stellar mass and angular momentum of gas within the galaxy! @royalsociety @sfi @MUTheorPhys @MaynoothUni",https://arxiv.org/abs/2107.09069,"The population of massive black holes (MBHs) in dwarf galaxies is elusive, but fundamentally important to understand the coevolution of black holes with their hosts and the formation of the first collapsed objects in the Universe. While some progress was made in determining the X-ray detected fraction of MBHs in dwarfs, with typical values ranging from $0\%$ to $6\%$, their overall active fraction, ${\cal A}$, is still largely unconstrained. Here, we develop a theoretical model to predict the multiwavelength active fraction of MBHs in dwarf galaxies starting from first principles and based on the physical properties of the host, namely, its stellar mass and angular momentum content. We find multiwavelength active fractions for MBHs, accreting at typically low rates, ranging from $5\%$ to $22\%$, and increasing with the stellar mass of the host as ${\cal A} \sim(\log_{10}M_{\star})^{4.5}$. If dwarfs are characterized by low-metallicity environments, the active fraction may reach $\sim 30\%$ for the most massive hosts. For galaxies with stellar mass in the range $10^7",The Active Fraction of Massive Black Holes in Dwarf Galaxies,3,"['On the @arxiv today () @Fabio_Pacucci , @marmezcua_astro and I looked at the active fraction of MBHs in dwarf galaxies. We find, using a physical model of MBH accretion, that the active fraction is between 20% and 30% for the most massive dwarfs,decreasing ', 'as the stellar mass decreases. Note that these are active fractions - the occupation fraction will be (much) higher! We also give a prediction for the probability of detecting a MBH in a given dwarf galaxy based on the stellar mass and angular momentum of gas within the galaxy!', '@royalsociety @sfi @MUTheorPhys @MaynoothUni']",21,07,594
4,55,1384396211248013312,56468788,Ehsan Adeli,"Check our paper @CVPR 2021: Metadata Normalization (MDN), a new batch-level operation (end2end training) to correct the influence of metadata (#bias, #confounder, you name it) on feature distributions. W/ @drfeifei @jcniebles et al. Code will be released soon: Many thanks to the team from @StanfordAILab @StanfordSVL @StanfordMed, especially Mandy Lu and Qingyu Zhao, for this great work!",https://arxiv.org/abs/2104.09052,"Batch Normalization (BN) and its variants have delivered tremendous success in combating the covariate shift induced by the training step of deep learning methods. While these techniques normalize feature distributions by standardizing with batch statistics, they do not correct the influence on features from extraneous variables or multiple distributions. Such extra variables, referred to as metadata here, may create bias or confounding effects (e.g., race when classifying gender from face images). We introduce the Metadata Normalization (MDN) layer, a new batch-level operation which can be used end-to-end within the training framework, to correct the influence of metadata on feature distributions. MDN adopts a regression analysis technique traditionally used for preprocessing to remove (regress out) the metadata effects on model features during training. We utilize a metric based on distance correlation to quantify the distribution bias from the metadata and demonstrate that our method successfully removes metadata effects on four diverse settings: one synthetic, one 2D image, one video, and one 3D medical image dataset. ",Metadata Normalization,2,"['Check our paper @CVPR 2021: Metadata Normalization (MDN), a new batch-level operation (end2end training) to correct the influence of metadata (#bias, #confounder, you name it) on feature distributions. W/ @drfeifei @jcniebles et al.\n\n ', 'Code will be released soon: https://t.co/BSvhIFzbBg\nMany thanks to the team from @StanfordAILab @StanfordSVL @StanfordMed, especially Mandy Lu and Qingyu Zhao, for this great work!']",21,04,417
5,21,1420910037774438403,990478024188485633,Yukei Murakami,"New paper day!! We release PIPS – a new, fast, state-of-the-art Python platform for the period detection in photometric time-series. A fully automated detection algorithm can help you discover variable stars and exoplanets! Check it out on arxiv ",https://arxiv.org/abs/2107.14223,"We describe the \texttt{Period detection and Identification Pipeline Suite} (PIPS) -- a new, fast, and user-friendly platform for period detection and analysis of astrophysical time-series data. PIPS is an open-source Python package that provides various pre-implemented methods and a customisable framework for automated, robust period measurements with principled uncertainties and statistical significance calculations. In addition to detailing the general algorithm that underlies PIPS, this paper discusses one of PIPS' central and novel features, the Fourier-likelihood periodogram, and compares its performance to existing methods. The resulting improved performance implies that one can construct deeper, larger, and more reliable sets of derived properties from various observations, including all-sky surveys. We present a comprehensive validation of PIPS against artificially generated data, which demonstrates the reliable performance of our algorithm for a class of periodic variable stars (RR Lyrae stars). We further showcase an application to recently obtained data of variable stars in the globular cluster M15. ","PIPS, an advanced platform for period detection in time series -- I. Fourier-likelihood periodogram and application to RR Lyrae Stars",1,"['New paper day!! \nWe release PIPS – a new, fast, state-of-the-art Python platform for the period detection in photometric time-series. A fully automated detection algorithm can help you discover variable stars and exoplanets! Check it out on arxiv\n']",21,07,253
6,115,1437655382994956292,795620317532143616,Jonas Pfeiffer,"In our paper xGQA: Cross-Lingual Visual Question Answering We propose a new multilingual multimodal benchmark, covering 7 new typologically diverse languages đ đ @GregorGeigle @ashkamath20 @jmsteitz @stefanroth @licwu @IGurevych We further propose new adapter-based approaches to adapt multimodal transformer-based models to become multilingual, andâvice versaâmultilingual models to become multimodal. While our adapter-based architecture outperforms the SotA M3P model in cross-lingual zero-shot scenarios, the overall transfer performance remains low across the board, with an average drop of around 38 accuracy points across target languages. This demonstrates the inherent difficulty of the task, even though the corresponding questions are ar- guably simple, containing only 8.5 words on average. In few-shot scenarios we find that utilizing an increasing amount of data instances in the target language consistently improves accuracy, culminating in an improvement of up to 20 accuracy points when specializing the model with only 48 images in the target language. Our results suggest that simple cross-lingual transfer of multimodal models yields latent multilingual multimodal misalignment, which can only be partially recovered through few-shot learning, calling for more sophisticated methods for vision and multilingual language modeling. @srchvrs I agree that itâs surprising that it works well for text-only tasksâwhich has been throughly investigated in the past (i.e. @PDufter)âhowever, what surprised me even more, was this doesnât seem to translate to scenarios where we add an additional modality.",https://arxiv.org/abs/2109.06082,"Recent advances in multimodal vision and language modeling have predominantly focused on the English language, mostly due to the lack of multilingual multimodal datasets to steer modeling efforts. In this work, we address this gap and provide xGQA, a new multilingual evaluation benchmark for the visual question answering task. We extend the established English GQA dataset to 7 typologically diverse languages, enabling us to detect and explore crucial challenges in cross-lingual visual question answering. We further propose new adapter-based approaches to adapt multimodal transformer-based models to become multilingual, and -- vice versa -- multilingual models to become multimodal. Our proposed methods outperform current state-of-the-art multilingual multimodal models (e.g., M3P) in zero-shot cross-lingual settings, but the accuracy remains low across the board; a performance drop of around 38 accuracy points in target languages showcases the difficulty of zero-shot cross-lingual transfer for this task. Our results suggest that simple cross-lingual transfer of multimodal models yields latent multilingual multimodal misalignment, calling for more sophisticated methods for vision and multilingual language modeling. ",xGQA: Cross-Lingual Visual Question Answering,7,"['In our paper\n\nxGQA: Cross-Lingual Visual Question Answering\n\nWe propose a new multilingual multimodal benchmark, covering 7 new typologically diverse languages\nđ \nđ \n@GregorGeigle @ashkamath20 @jmsteitz @stefanroth @licwu @IGurevych ', 'We further propose new adapter-based approaches to adapt multimodal transformer-based models to become multilingual, andâvice versaâmultilingual models to become multimodal. 
https://t.co/omuxSK3E17', 'While our adapter-based architecture outperforms the SotA M3P model in cross-lingual zero-shot scenarios, the overall transfer performance remains low across the board, with an average drop of around 38 accuracy points across target languages. https://t.co/EIjLKsQ01e', 'This demonstrates the inherent difficulty of the task, even though the corresponding questions are ar- guably simple, containing only 8.5 words on average.', 'In few-shot scenarios we find that utilizing an increasing amount of data instances in the target language consistently improves accuracy, culminating in an improvement of up to 20 accuracy points when specializing the model with only 48 images in the target language. https://t.co/PL68ttBlgT', 'Our results suggest that simple cross-lingual transfer of multimodal models yields latent multilingual multimodal misalignment, which can only be partially recovered through few-shot learning, calling for more sophisticated methods for vision and multilingual language modeling.', '@srchvrs I agree that itâs surprising that it works well for text-only tasksâwhich has been throughly investigated in the past (i.e. https://t.co/gDLr8gFZeK @PDufter)âhowever, what surprised me even more, was this doesnât seem to translate to scenarios where we add an additional modality.']",21,09,1665
7,34,1496049149212606483,1069045039,daniele marinazzo,"New paper! Frontal effective connectivity increases with task demands and time on task: a DCM study on electrocorticogram in macaque monkeys by @katwegner, in collaboration with @crewilson , E. Procyk, @Frederikvds_ , K. Friston, @dimitrispp ",https://arxiv.org/abs/2202.10021,"In this paper, we provide a computational account of changes in synaptic connectivity within two regions of the fronto-parietal network, the dorsolateral prefrontal cortex and the pre-supplementary motor area, applying Dynamic Causal Models to electrocorticogram recordings from two macaque monkeys performing a problem-solving task that engages working memory, and induces time-on-task effects. We find that forward connections between the two regions increased in strength when task demands were increased, and as the experimental session progressed. Similarities in the effects of task demands and time on task allow us to interpret changes in frontal connectivity in terms of increased effort allocation that compensates cognitive fatigue. ","Frontal effective connectivity increases with task demands and time on
task: a DCM study on electrocorticogram in macaque monkeys",1,"['New paper!\n\nFrontal effective connectivity increases with task demands and time on task: a DCM study on electrocorticogram in macaque monkeys\n\n\n\nby @katwegner, in collaboration with @crewilson , E. Procyk, @Frederikvds_ , K. Friston, @dimitrispp ']",22,02,255
8,26,1288833841478750208,752184524121993216,Menelaos Kanakis,"Our paper ""Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference"" has been accepted to #ECCV2020. We reparameterize the convs to eliminate task interference and allow for the incremental learning of new tasks. Paper: ",https://arxiv.org/abs/2007.12540,"Multi-task networks are commonly utilized to alleviate the need for a large number of highly specialized single-task networks. However, two common challenges in developing multi-task models are often overlooked in literature. First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning). Second, eliminating adverse interactions amongst tasks, which has been shown to significantly degrade the single-task performance in a multi-task setup (task interference). In this paper, we show that both can be achieved simply by reparameterizing the convolutions of standard neural network architectures into a non-trainable shared part (filter bank) and task-specific parts (modulators), where each modulator has a fraction of the filter bank parameters. Thus, our reparameterization enables the model to learn new tasks without adversely affecting the performance of existing ones. The results of our ablation study attest the efficacy of the proposed reparameterization. Moreover, our method achieves state-of-the-art on two challenging multi-task learning benchmarks, PASCAL-Context and NYUD, and also demonstrates superior incremental learning capability as compared to its close competitors. ","Reparameterizing Convolutions for Incremental Multi-Task Learning
without Task Interference",1,"['Our paper ""Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference"" has been accepted to #ECCV2020. We reparameterize the convs to eliminate task interference and allow for the incremental learning of new tasks.\nPaper: ']",20,07,261
9,92,1273628976112533505,438692340,"Donald Martin, Jr.","Understanding societal systems is more important than ever. Iâm super-excited to announce that my new paper with @vinodkpg, @jillkuhlberg, and @andrewthesmart on extending the ML design abstraction boundary to incorporate societal context is available: Building off of work by @aselbst, we leverage complex adaptive systems theory to propose a taxonomic model of societal context and introduce the concept of collaborative causal theory formulation (CCTF) as a key discipline for discovering and considering societal context. This paper was the basis for our Participatory Problem Formulation paper presented at #ICLR2020. ... and + @wsisaac out other key co-author. Sorry about that William!",https://arxiv.org/abs/2006.09663,"Machine learning (ML) fairness research tends to focus primarily on mathematically-based interventions on often opaque algorithms or models and/or their immediate inputs and outputs. Such oversimplified mathematical models abstract away the underlying societal context where ML models are conceived, developed, and ultimately deployed. As fairness itself is a socially constructed concept that originates from that societal context along with the model inputs and the models themselves, a lack of an in-depth understanding of societal context can easily undermine the pursuit of ML fairness. In this paper, we outline three new tools to improve the comprehension, identification and representation of societal context. First, we propose a complex adaptive systems (CAS) based model and definition of societal context that will help researchers and product developers to expand the abstraction boundary of ML fairness work to include societal context. Second, we introduce collaborative causal theory formation (CCTF) as a key capability for establishing a sociotechnical frame that incorporates diverse mental models and associated causal theories in modeling the problem and solution space for ML-based products. Finally, we identify community based system dynamics (CBSD) as a powerful, transparent and rigorous approach for practicing CCTF during all phases of the ML product development process. We conclude with a discussion of how these systems theoretic approaches to understand the societal context within which sociotechnical systems are embedded can improve the development of fair and inclusive ML-based products. ","Extending the Machine Learning Abstraction Boundary: A Complex Systems
Approach to Incorporate Societal Context",4,"['Understanding societal systems is more important than ever. Iâm super-excited to announce that my new paper with @vinodkpg, @jillkuhlberg, and @andrewthesmart on extending the ML design abstraction boundary to incorporate societal context is available: ', 'Building off of work by @aselbst, we leverage complex adaptive systems theory to propose a taxonomic model of societal context and introduce the concept of collaborative causal theory formulation (CCTF) as a key discipline for discovering and considering societal context. https://t.co/XNLMbdg0rg', 'This paper was the basis for our Participatory Problem Formulation paper presented at #ICLR2020. https://t.co/9SpVA223q5', '... and + @wsisaac out other key co-author. Sorry about that William!']",20,06,713
10,21,1011549383471575041,797888987675365377,Tom Rainforth,"Check out our new paper ""Inference Trees: Adaptive Inference with Exploration"" . We introduce a completely new class of adaptive inference algorithms that uses ideas from Monte Carlo tree search and carries out target exploration of the parameter space @jwvdm @yeewhye @frankdonaldwood @hyang144",https://arxiv.org/abs/1806.09550,"We introduce inference trees (ITs), a new class of inference methods that build on ideas from Monte Carlo tree search to perform adaptive sampling in a manner that balances exploration with exploitation, ensures consistency, and alleviates pathologies in existing adaptive methods. ITs adaptively sample from hierarchical partitions of the parameter space, while simultaneously learning these partitions in an online manner. This enables ITs to not only identify regions of high posterior mass, but also maintain uncertainty estimates to track regions where significant posterior mass may have been missed. ITs can be based on any inference method that provides a consistent estimate of the marginal likelihood. They are particularly effective when combined with sequential Monte Carlo, where they capture long-range dependencies and yield improvements beyond proposal adaptation alone. ",Inference Trees: Adaptive Inference with Exploration,2,"['Check out our new paper ""Inference Trees: Adaptive Inference with Exploration"" . We introduce a completely new class of adaptive inference algorithms that uses ideas from Monte Carlo tree search and carries out target exploration of the parameter space ', '@jwvdm @yeewhye @frankdonaldwood @hyang144']",18,06,308
11,254,1403330996041363459,1159548392839753729,Hugo Yeche,"Can contrastive learning help in applications where similar samples don't share downstream task labels? In our #ICML2021 paper, we propose a simple framework for such problems and apply it to the monitoring of patients in the ICU Paper: Inspired by the similarity existing between contiguous states of a patient, we introduce a new contrastive objective. It allows a trade-off between preserving or discarding predefined attributes and this independently of data-augmentation. We call samples sharing these chosen attributes “neighbors” and define it as a binary function. It allows unifying prior works, unsupervised and supervised, under a common framework. This is my first work as a Ph.D. student at @CSatETH and I’m very grateful to @gideoknite, our collaborators @FrancescoLocat8 and Matthias Hüser, and my supervisor @gxr for their help and guidance throughout this process!",https://arxiv.org/abs/2106.05142,"Intensive care units (ICU) are increasingly looking towards machine learning for methods to provide online monitoring of critically ill patients. In machine learning, online monitoring is often formulated as a supervised learning problem. Recently, contrastive learning approaches have demonstrated promising improvements over competitive supervised benchmarks. These methods rely on well-understood data augmentation techniques developed for image data which do not apply to online monitoring. In this work, we overcome this limitation by supplementing time-series data augmentation techniques with a novel contrastive learning objective which we call neighborhood contrastive learning (NCL). Our objective explicitly groups together contiguous time segments from each patient while maintaining state-specific information. Our experiments demonstrate a marked improvement over existing work applying contrastive methods to medical time-series. ",Neighborhood Contrastive Learning Applied to Online Patient Monitoring,4,"[""Can contrastive learning help in applications where similar samples don't share downstream task labels? \n\nIn our #ICML2021 paper, we propose a simple framework for such problems and apply it to the monitoring of patients in the ICU \n\nPaper: "", 'Inspired by the similarity existing between contiguous states of a patient, we introduce a new contrastive objective. It allows a trade-off between preserving or discarding predefined attributes and this independently of data-augmentation. https://t.co/92hlkptBGM', 'We call samples sharing these chosen attributes “neighbors” and define it as a binary function. It allows unifying prior works, unsupervised and supervised, under a common framework. https://t.co/supfRvc8ZR', 'This is my first work as a Ph.D. student at @CSatETH and I’m very grateful to @gideoknite, our collaborators @FrancescoLocat8 and Matthias Hüser, and my supervisor @gxr for their help and guidance throughout this process!']",21,06,911
12,229,1407123342428184582,2415164311,Mufan (Bill) Li,"There has been a flurry of work beyond the infinite-width limit. We study the infinite DEPTH-AND-WIDTH limit of ReLU nets with residual connections and see remarkable (!) agreement with STANDARD finite networks. Joint work w/ @MihaiCNica @roydanroy How is the infinite depth-and-width limit different? In short, each layer of width (n) carries an error term of size O(1/n), and increasing depth (d) compounds the error exponentially. At the heart of the analysis is the following ""dichotomy"": As a result, the infinite depth-and-width limit is not Gaussian. This work extends results for fully connected MLPs where the analysis is much simpler. See @MihaiCNicaâs youtube video for an introduction. However, networks with skip connections introduce correlations between layers, which complicates the analysis. Surprising observation: with residual connections, the population of neurons is HYPOactivated, i.e., fewer than half of the ReLU units are active. Our main result is a precise description of the distribution of the output of the network in the infinite depth-and-width limit. One key observation: the magnitude contains a log-Gaussian factor. The exact constants and Gaussian parameters can be found in the paper. We believe this result can be extended to non-Gaussian weights. See an earlier universality result for MLPs by @BorisHanin and @MihaiCNica ",https://arxiv.org/abs/2106.04013,"Theoretical results show that neural networks can be approximated by Gaussian processes in the infinite-width limit. However, for fully connected networks, it has been previously shown that for any fixed network width, $n$, the Gaussian approximation gets worse as the network depth, $d$, increases. Given that modern networks are deep, this raises the question of how well modern architectures, like ResNets, are captured by the infinite-width limit. To provide a better approximation, we study ReLU ResNets in the infinite-depth-and-width limit, where both depth and width tend to infinity as their ratio, $d/n$, remains constant. In contrast to the Gaussian infinite-width limit, we show theoretically that the network exhibits log-Gaussian behaviour at initialization in the infinite-depth-and-width limit, with parameters depending on the ratio $d/n$. Using Monte Carlo simulations, we demonstrate that even basic properties of standard ResNet architectures are poorly captured by the Gaussian limit, but remarkably well captured by our log-Gaussian limit. Moreover, our analysis reveals that ReLU ResNets at initialization are hypoactivated: fewer than half of the ReLUs are activated. Additionally, we calculate the interlayer correlations, which have the effect of exponentially increasing the variance of the network output. Based on our analysis, we introduce Balanced ResNets, a simple architecture modification, which eliminates hypoactivation and interlayer correlations and is more amenable to theoretical analysis. ","The Future is Log-Gaussian: ResNets and Their Infinite-Depth-and-Width
Limit at Initialization",6,"['There has been a flurry of work beyond the infinite-width limit. We study the infinite DEPTH-AND-WIDTH limit of ReLU nets with residual connections and see remarkable (!) agreement with STANDARD finite networks. Joint work w/ @MihaiCNica @roydanroy ', 'How is the infinite depth-and-width limit different? In short, each layer of width (n) carries an error term of size O(1/n), and increasing depth (d) compounds the error exponentially. At the heart of the analysis is the following ""dichotomy"": https://t.co/YtVj4Gfw0Z', 'As a result, the infinite depth-and-width limit is not Gaussian. This work extends results for fully connected MLPs where the analysis is much simpler. See @MihaiCNicaâs youtube video for an introduction. https://t.co/6u0z32SwW0', 'However, networks with skip connections introduce correlations between layers, which complicates the analysis. Surprising observation: with residual connections, the population of neurons is HYPOactivated, i.e., fewer than half of the ReLU units are active. https://t.co/gTGZf2NCWO', 'Our main result is a precise description of the distribution of the output of the network in the infinite depth-and-width limit. One key observation: the magnitude contains a log-Gaussian factor. The exact constants and Gaussian parameters can be found in the paper. https://t.co/gTlmcLc8wF', 'We believe this result can be extended to non-Gaussian weights. See an earlier universality result for MLPs by @BorisHanin and @MihaiCNica https://t.co/HRbunQ7og3']",21,06,1410
13,140,1186710627655413761,633942876,Kevin L. Keys,"Feeling both proud and nepotistic today as I announce a new paper from my little brother! If you love stochastic differential equations, quantum mechanics, and Sooper Precise Notation, then take a peek here: This is the same baby bro whose first publication landed the cover of the (paywalled) journal Solar Physics. ref: Keys, D., Kholikov, S., Pevtsov, A.A. 2015, Solar Physics, 290, 659 blog: preprint: I cheekily informed him that it’s hard to read the discussion in his recent paper about Fock spaces and Wiener processes without giggling like a teenager. ",https://arxiv.org/abs/1910.08649,"The paper studies a class of quantum stochastic differential equations, modeling an interaction of a system with its environment in the quantum noise approximation. The space representing quantum noise is the symmetric Fock space over L^2(R_+). Using the isomorphism of this space with the space of square-integrable functionals of the Poisson process, the equations can be represented as classical stochastic differential equations, driven by Poisson processes. This leads to a discontinuous dynamical state reduction which we compare to the Ghirardi-Rimini-Weber model. A purely quantum object, the norm process, is found which plays the role of an observer (in the sense of Everett [H. Everett III, Reviews of modern physics, 29.3, 454, (1957)]), encoding all events occurring in the system space. An algorithm introduced by Dalibard et al [J. Dalibard, Y. Castin, and K. M{\o}lmer, Physical review letters, 68.5, 580 (1992)] to numerically solve quantum master equations is interpreted in the context of unravellings and the trajectories of expected values of system observables are calculated. ","Poisson stochastic master equation unravellings and the measurement problem: a quantum stochastic calculus perspective",3,"['Feeling both proud and nepotistic today as I announce a new paper\n\nfrom my little brother! \n\nIf you love stochastic differential equations, quantum mechanics, and Sooper Precise Notation, then take a peek here:\n\n', 'This is the same baby bro whose first publication landed the cover of the (paywalled) journal Solar Physics.\n\nref: Keys, D., Kholikov, S., Pevtsov, A.A. 2015, Solar Physics, 290, 659\n\nblog: https://t.co/E8BS00mcCu\n\npreprint: https://t.co/DD3YUpAY9B', 'I cheekily informed him that it’s hard to read the discussion in his recent paper about Fock spaces and Wiener processes without giggling like a teenager.']",19,10,587
14,165,1316321913132679168,1140222123006472194,Kasper Elm Heintz,"Happy to announce that our new paper: was posted today on ArXiv! In this, we basically ask: can you efficiently identify quasars purely as point-like sources that doesnât move on the sky? Turns out: Yes! As long as its far away from the Galactic plane This has been a rollercoaster of a project, starting with the release of @ESAGaia-DR2 more than 2 years ago, and with several non-succesful observing runs due to clouds, ice, etc. When we *finally* got spectra of our targets, we have had several first-year students at @DAWNCopenhagen/@uni_copenhagen help us classify them as either stars âïž or quasars đ„ (some many billion of lightyears away) â and for many of them, this was their first academic paper! Briefly, what we found was: it is possible to efficiently select quasars (~70% of the total number of targets) purely as sources with zero proper motions and parallaxes as measured by @ESAGaia-DR2. This is, however, only feasible to reach at high Galactic latitudes, i.e. far away from the Galactic plane. Moving closer, we would be completely overwhelmed by stars that were apparently non-moving on the sky (at least not in the transverse direction). This *complete* sample of quasars allowed us to examine the intrinsic properties of the quasar population, and specifically how large the fraction of âredâ quasars is (anyway between 10% and 40% depending on the brightness limit). In this survey we also discovered several very interesting quasars â here is one example that shines right through the disk of a nearby (z~0.02) galaxy! This allowed us to study the dust and metals in this disk, that was imposed on the background quasar spectrum. Finally, we could also assess how complete other, more commonly used photometric quasar surveys were at similar brightness â turns out, they only manage to find 85-90% of the quasars we do! This just cements the fact that this survey is the only way to select quasars, or extragalactic point-sources in general, without assumptions of their intrinsic emission (i.e. radio-loud, blue UV colors, etc) â so basically, quasars that donât look like typical quasars! @dr_guangtou No apparently not! As far as I understand, extended sources will not trigger a detection on Gaiaâs CCD â however, for bright AGN (and this spectacular case) Gaia will be sensitive enough to disentangle and accurately measure the central (or outlying) point-source.",https://arxiv.org/abs/2010.05934,"Here we explore the efficiency and fidelity of a purely astrometric selection of quasars as point sources with zero proper motions in the {\it Gaia} data release 2 (DR2). We have built a complete candidate sample including 104 Gaia-DR2 point sources brighter than $G<20$ mag within one degree of the north Galactic pole (NGP), all with proper motions consistent with zero within 2$\sigma$ uncertainty. In addition to pre-existing spectra, we have secured long-slit spectroscopy of all the remaining candidates and find that all 104 stationary point sources in the field can be classified as either quasars (63) or stars (41). The selection efficiency of the zero-proper-motion criterion at high Galactic latitudes is thus $\approx 60\%$. Based on this complete quasar sample we examine the basic properties of the underlying quasar population within the imposed limiting magnitude. 
We find that the surface density of quasars is 20 deg$^{-2}$, the redshift distribution peaks at $z\sim1.5$, and that only eight systems ($13^{+5}_{-3}\%$) show significant dust reddening. We then explore the selection efficiency of commonly used optical, near- and mid-infrared quasar identification techniques and find that they are all complete at the $85-90\%$ level compared to the astrometric selection. Finally, we discuss how the astrometric selection can be improved to an efficiency of $\approx70\%$ by including an additional cut requiring parallaxes of the candidates to be consistent with zero within 2$\sigma$. The selection efficiency will further increase with the release of future, more sensitive astrometric measurement from the Gaia mission. This type of selection, purely based on the astrometry of the quasar candidates, is unbiased in terms of colours and emission mechanisms of the quasars and thus provides the most complete census of the quasar population within the limiting magnitude of Gaia. ","Spectroscopic classification of a complete sample of
astrometrically-selected quasar candidates using Gaia DR2",10,"['Happy to announce that our new paper: was posted today on ArXiv! \nIn this, we basically ask: can you efficiently identify quasars purely as point-like sources that doesnât move on the sky? \nTurns out: Yes! \nAs long as its far away from the Galactic plane', 'This has been a rollercoaster of a project, starting with the release of @ESAGaia-DR2 more than 2 years ago, and with several non-succesful observing runs due to clouds, ice, etc. https://t.co/6lmzM4HrAQ', 'When we *finally* got spectra of our targets, we have had several first-year students at @DAWNCopenhagen/@uni_copenhagen help us classify them as either stars âïž or quasars đ„ (some many billion of lightyears away) â and for many of them, this was their first academic paper!', 'Briefly, what we found was: it is possible to efficiently select quasars (~70% of the total number of targets) purely as sources with zero proper motions and parallaxes as measured by @ESAGaia-DR2. https://t.co/AbzczXmkCI', 'This is, however, only feasible to reach at high Galactic latitudes, i.e. far away from the Galactic plane. Moving closer, we would be completely overwhelmed by stars that were apparently non-moving on the sky (at least not in the transverse direction). https://t.co/nN77blhLfA', 'This *complete* sample of quasars allowed us to examine the intrinsic properties of the quasar population, and specifically how large the fraction of âredâ quasars is (anyway between 10% and 40% depending on the brightness limit). https://t.co/B9huHsfbo5', 'In this survey we also discovered several very interesting quasars â here is one example that shines right through the disk of a nearby (z~0.02) galaxy! This allowed us to study the dust and metals in this disk, that was imposed on the background quasar spectrum. https://t.co/MY7pw0cXH7', 'Finally, we could also assess how complete other, more commonly used photometric quasar surveys were at similar brightness â turns out, they only manage to find 85-90% of the quasars we do! https://t.co/eEtbUjJTcS', 'This just cements the fact that this survey is the only way to select quasars, or extragalactic point-sources in general, without assumptions of their intrinsic emission (i.e. radio-loud, blue UV colors, etc) â so basically, quasars that donât look like typical quasars!', '@dr_guangtou No apparently not! As far as I understand, extended sources will not trigger a detection on Gaiaâs CCD â however, for bright AGN (and this spectacular case) Gaia will be sensitive enough to disentangle and accurately measure the central (or outlying) point-source.']",20,10,2441
15,94,1063100286686781441,14975979,Abhimat Gautam,"Hereâs a really brief summary thread of my work and results. Check out the paper for full details behind these results and graphs: We developed some new techniques to get precise stellar photometry from adaptive optics images of the Galactic center! (1/4) With the precise photometry, we found that roughly half of all stars in our sample are variable over the 11.5 years of our experiment! (2/4) We recovered the two known young eclipsing binary star systems at the Galactic center, and put a tight lower limit on the eclipsing binary fraction at the Galactic center. (3/4) We discovered a new periodic variable at the Galactic center, with a roughly 39 day period! Figuring out what is causing this variability still requires a little more work! (4/4) ",https://arxiv.org/abs/1811.04898,"We present a $\approx 11.5$ year adaptive optics (AO) study of stellar variability and search for eclipsing binaries in the central $\sim 0.4$ pc ($\sim 10''$) of the Milky Way nuclear star cluster. We measure the photometry of 563 stars using the Keck II NIRC2 imager ($K'$-band, $\lambda_0 = 2.124 \text{ } \mu \text{m}$). We achieve a photometric uncertainty floor of $\Delta m_{K'} \sim 0.03$ ($\approx 3\%$), comparable to the highest precision achieved in other AO studies. Approximately half of our sample ($50 \pm 2 \%$) shows variability. $52 \pm 5\%$ of known early-type young stars and $43 \pm 4 \%$ of known late-type giants are variable. These variability fractions are higher than those of other young, massive star populations or late-type giants in globular clusters, and can be largely explained by two factors. First, our experiment time baseline is sensitive to long-term intrinsic stellar variability. Second, the proper motion of stars behind spatial inhomogeneities in the foreground extinction screen can lead to variability. We recover the two known Galactic center eclipsing binary systems: IRS 16SW and S4-258 (E60). We constrain the Galactic center eclipsing binary fraction of known early-type stars to be at least $2.4 \pm 1.7\%$. We find no evidence of an eclipsing binary among the young S-stars nor among the young stellar disk members. These results are consistent with the local OB eclipsing binary fraction. We identify a new periodic variable, S2-36, with a 39.43 day period. Further observations are necessary to determine the nature of this source. ",An Adaptive Optics Survey of Stellar Variability at the Galactic Center,4,"['Hereâs a really brief summary thread of my work and results. Check out the paper for full details behind these results and graphs: \n\nWe developed some new techniques to get precise stellar photometry from adaptive optics images of the Galactic center!\n(1/4) ', 'With the precise photometry, we found that roughly half of all stars in our sample are variable over the 11.5 years of our experiment!\n(2/4) https://t.co/ZVCU7S4NnI', 'We recovered the two known young eclipsing binary star systems at the Galactic center, and put a tight lower limit on the eclipsing binary fraction at the Galactic center.\n(3/4) https://t.co/kzHKOuO81e', 'We discovered a new periodic variable at the Galactic center, with a roughly 39 day period! Figuring out what is causing this variability still requires a little more work!\n(4/4) https://t.co/DaoCW3yV3G']",18,11,788
16,47,1187107907688529920,348637346,Diana Powell,"Inhomogeneous clouds can be observed in transmission even with limited wavelength coverage, uncertainty on limb darkening coefficients, and imprecise transit times in a way that is statistically robust! See my new paper to learn why and how: Both the east and west limb often form clouds, but that the different properties of these clouds enhances the limb to limb differences compared to the clear atmosphere case. Using JWST it should be possible to detect the presence of cloud inhomogeneities by comparing the shape of the transit lightcurve at multiple wavelengths because inhomogeneous clouds impart a characteristic, wavelength dependent signature Probing limb inhomogeneity on hot Jupiters due to clouds can be statistically robust even with limited wavelength coverage, uncertainty on limb darkening coefficients, and imprecise transit times! The signatures due to clouds in these models are key in observing inhomogeneity! Check out the paper for predictions about cloud properties and their impact on the observed spectra! Also don't forget to consider cloud particle size distributions! Using the area or mass weighted particle size significantly alters the relative strength of the cloud spectral features compared to using the predicted size distribution. In short, we can use cloud physics to probe fundamental atmospheric properties like limb inhomogeneity! This work was done in collaboration with @TomLouden_b, @lkreidberg, Xi Zhang, @PlanetaryGao, and @V_Parmentier #dreamteam #cloudsarefriends @henrifdrake thank you!!! @Of_FallingStars @TomLouden_b @lkreidberg @PlanetaryGao @V_Parmentier thank you!!!",https://arxiv.org/abs/1910.07527,"We determine the observability in transmission of inhomogeneous cloud cover on the limbs of hot Jupiters through post processing a general circulation model to include cloud distributions computed using a cloud microphysics model. We find that both the east and west limb often form clouds, but that the different properties of these clouds enhances the limb to limb differences compared to the clear case. Using JWST it should be possible to detect the presence of cloud inhomogeneities by comparing the shape of the transit lightcurve at multiple wavelengths because inhomogeneous clouds impart a characteristic, wavelength dependent signature. This method is statistically robust even with limited wavelength coverage, uncertainty on limb darkening coefficients, and imprecise transit times. We predict that the short wavelength slope varies strongly with temperature. The hot limb of the hottest planets form higher altitude clouds composed of smaller particles leading to a strong rayleigh slope. The near infrared spectral features of clouds are almost always detectable, even when no spectral slope is visible in the optical. In some of our models a spectral window between 5 and 9 microns can be used to probe through the clouds and detect chemical spectral features. Our cloud particle size distributions are not log-normal and differ from species to species. Using the area or mass weighted particle size significantly alters the relative strength of the cloud spectral features compared to using the predicted size distribution. Finally, the cloud content of a given planet is sensitive to a species' desorption energy and contact angle, two parameters that could be constrained experimentally in the future. ","Transit Signatures of Inhomogeneous Clouds on Hot Jupiters: Insights
From Microphysical Cloud Modeling",9,"['Inhomogeneous clouds can be observed in transmission even with limited wavelength coverage, uncertainty on limb darkening coefficients, and imprecise transit times in a way that is statistically robust! See my new paper to learn why and how: ', 'Both the east and west limb often form clouds, but that the different properties of these clouds enhances the limb to limb differences compared to the clear atmosphere case. https://t.co/x3XRvqZnDx', 'Using JWST it should be possible to detect the presence of cloud inhomogeneities by comparing the shape of the transit lightcurve at multiple wavelengths because inhomogeneous clouds impart a characteristic, wavelength dependent signature https://t.co/MUIodCPjrZ', 'Probing limb inhomogeneity on hot Jupiters due to clouds can be statistically robust even with limited wavelength coverage, uncertainty on limb darkening coefficients, and imprecise transit times! https://t.co/gQW2OUDYg2', 'The signatures due to clouds in these models are key in observing inhomogeneity! Check out the paper for predictions about cloud properties and their impact on the observed spectra!', ""Also don't forget to consider cloud particle size distributions! Using the area or mass weighted particle size significantly alters the relative strength of the cloud spectral features compared to using the predicted size distribution. https://t.co/R0MTSaVyW1"", 'In short, we can use cloud physics to probe fundamental atmospheric properties like limb inhomogeneity! This work was done in collaboration with @TomLouden_b, @lkreidberg, Xi Zhang, @PlanetaryGao, and @V_Parmentier \n\n#dreamteam #cloudsarefriends\nhttps://t.co/GqurZ0jzHb', '@henrifdrake thank you!!!', '@Of_FallingStars @TomLouden_b @lkreidberg @PlanetaryGao @V_Parmentier thank you!!!']",19,10,1665
17,139,1499791903432261634,972440922070888449,Jason Parisi,"We just put out a new paper on electron temperature gradient turbulence in the edge of tokamak fusion reactors The highly 'shaped' magnetic geometry in the tokamak edge causes turbulence that has a different character to the tokamak core. We argue that 'topographies' of important physics effects generated by the magnetic geometry offer routes to build reactors that can reduce turbulence. The tokamak edge is also a really interesting place for multiscale turbulence physics, because the scale separation between electron and ion physics is broken. We need to understand better what the consequences are for turbulence in reactors.",https://arxiv.org/abs/2203.00831,"Nonlinear multiscale gyrokinetic simulations of a Joint European Torus edge pedestal are used to show that electron-temperature-gradient (ETG) turbulence has a rich three-dimensional structure, varying strongly according to the local magnetic-field configuration. In the plane normal to the magnetic field, the steep pedestal electron temperature gradient gives rise to anisotropic turbulence with a radial (normal) wavelength much shorter than in the binormal direction. In the parallel direction, the location and parallel extent of the turbulence are determined by the variation in the magnetic drifts and finite-Larmor-radius (FLR) effects. The magnetic drift and FLR topographies have a perpendicular-wavelength dependence, which permits turbulence intensity maxima near the flux-surface top and bottom at longer binormal scales, but constrains turbulence to the outboard midplane at shorter electron-gyroradius binormal scales. Our simulations show that long-wavelength ETG turbulence does not transport heat efficiently, and significantly decreases overall ETG transport -- in our case by $\sim$40 \% -- through multiscale interactions. ","Three-Dimensional Inhomogeneity of Electron-Temperature-Gradient
Turbulence in the Edge of Tokamak Plasmas",3,"['We just put out a new paper on electron temperature gradient turbulence in the edge of tokamak fusion reactors ', ""The highly 'shaped' magnetic geometry in the tokamak edge causes turbulence that has a different character to the tokamak core. We argue that 'topographies' of important physics effects generated by the magnetic geometry offer routes to build reactors that can reduce turbulence."", 'The tokamak edge is also a really interesting place for multiscale turbulence physics, because the scale separation between electron and ion physics is broken. We need to understand better what the consequences are for turbulence in reactors.']",22,03,640
18,336,1319628325380378624,182730982,Andreas Rücklé,"Check out our paper ""AdapterDrop""! We find that Adapters can train 60% faster than full fine-tuning. With AdapterDrop we increase inference speed by up to 36% for 8 parallel tasks. GGeigle @Maxxx216 @devnull90 @PfeiffJo NReimers IGurevych @AdapterHub ",https://arxiv.org/abs/2010.11918,"Massively pre-trained transformer models are computationally expensive to fine-tune, slow for inference, and have large storage requirements. Recent approaches tackle these shortcomings by training smaller models, dynamically reducing the model size, and by training light-weight adapters. In this paper, we propose AdapterDrop, removing adapters from lower transformer layers during training and inference, which incorporates concepts from all three directions. We show that AdapterDrop can dynamically reduce the computational overhead when performing inference over multiple tasks simultaneously, with minimal decrease in task performances. We further prune adapters from AdapterFusion, which improves the inference efficiency while maintaining the task performances entirely. ",AdapterDrop: On the Efficiency of Adapters in Transformers,4,"['Check out our paper ""AdapterDrop""!\n \nWe find that Adapters can train 60% faster than full fine-tuning. With AdapterDrop we increase inference speed by up to 36% for 8 parallel tasks.\n \nGGeigle @Maxxx216 @devnull90 @PfeiffJo NReimers IGurevych @AdapterHub\n ', 'With our *robust* AdapterDrop layers, we improve the efficiency of adapter models during training and inference. By removing adapters from lower layers, based on available computational resources, we enable a dynamic trade-off between inference speed and task performance. https://t.co/oTiMSmEPpn', 'Interestingly, we also find that we can share the adapter weights across *all* layers without losing much performance. This drastically reduces the storage space required for each task even more. https://t.co/7cmEk6jRt1', 'We also reveal significant potential for boosting the efficiency of AdapterFusion. We can drop the majority of adapters after transfer learning from the Fusion model, while maintaining task performance entirely. https://t.co/b4eHQTrs5G']",20,10,968
19,110,1490714222740885519,1953104810,"Kayse Lee Maass, PhD",Update! The preprint is now available too! Check out @YarenBilgeKaya’s first authored paper “Improving Access to Housing and Supportive Services for Runaway and Homeless Youth: Reducing Vulnerability to Human Trafficking in New York City”! ,https://arxiv.org/abs/2202.00138,"Recent estimates indicate that there are over 1 million runaway and homeless youth and young adults (RHY) in the United States (US). Exposure to trauma, violence, and substance abuse, coupled with a lack of community support services, puts homeless youth at high risk of being exploited and trafficked. Although access to safe housing and supportive services such as physical and mental healthcare is an effective response to youth's vulnerability towards being trafficked, the number of youth experiencing homelessness exceeds the capacity of available housing resources in most US communities. We undertake an informed, systematic, and data-driven approach to project the collective capacity required by service providers to adequately meet the needs of homeless youth in New York City, including those most at risk of being trafficked. Our approach involves an integer linear programming model that extends the multiple multidimensional knapsack problem and is informed by partnerships with key stakeholders. The mathematical model allows for time-dependent allocation and capacity expansion, while incorporating stochastic youth arrivals and length of stays, services provided in periodic fashion and service delivery time windows. Our RHY and service provider-centered approach is an important step toward meeting the actual, rather than presumed, survival needs of vulnerable youth, particularly those at-risk of being trafficked. ","Improving Access to Housing and Supportive Services for Runaway and Homeless Youth: Reducing Vulnerability to Human Trafficking in New York City",1,['Update! The preprint is now available too! \n\nCheck out @YarenBilgeKaya’s first authored paper “Improving Access to Housing and Supportive Services for Runaway and Homeless Youth: Reducing Vulnerability to Human Trafficking in New York City”!\n\n '],22,02,254
20,41,1442097175454908417,226785003,Raghav Kunnawalkam Elayavalli,"New paper time from @RHIC_STAR! The preliminary results have been public for a few years now so i thought i would just highlight the main takeaways from the paper :) 1/x This is essentially the first measurement that looks at a canonical energy loss observable (dijet asymmetry) as a function of an actual resolution scale (subjet opening angle) in the medium 2/x There have been other measurements of energy loss (such as RAA) vs jet mass or mass/pT (handle of virtuality) and the Softdrop groomed jet radius but its not a scale in the medium 3/x thats where this measurements comes into play. we highlight the importance of an angle observable thats sensitive to physics but also simultaneously robust to the heavy ion underlying event 4/x subjets to the rescue! using a formation time argument, we come to our interpretation of the data is that a shorter medium path length and the dijet selection bias at RHIC energies, leads to ... 5/x an observation of jet quenching of a single color charge! this is the QCD analogue of the LPM effect in action! Like i said before this is our understanding of the data and i would love for our community to use this data and study different hypothesis 6/x my view - these types of differential measurements are how we will understand the QGP's transport properties. isolate specific populations of jet topology and systematically explore modifications in the emissions phase space! this will lead us to space-time evolution 7/x",https://arxiv.org/abs/2109.09793,"The STAR collaboration presents jet substructure measurements related to both the momentum fraction and the opening angle within jets in \pp and \AuAu collisions at \sqrtsn $= 200$ GeV. The substructure observables include SoftDrop groomed momentum fraction (\zg), groomed jet radius (\rg), and subjet momentum fraction (\zsj) and opening angle (\tsj). The latter observable is introduced for the first time. Fully corrected subjet measurements are presented for \pp collisions and are compared to leading order Monte Carlo models. The subjet \tsj~distributions reflect the jets leading opening angle and are utilized as a proxy for the resolution scale of the medium in \AuAu collisions. We compare data from \AuAu collisions to those from \pp which are embedded in minimum-bias \AuAu events in order to include the effects of detector smearing and the heavy-ion collision underlying event. The subjet observables are shown to be more robust to the background than \zg~and \rg. We observe no significant modifications of the subjet observables within the two highest-energy, back-to-back jets, resulting in a distribution of opening angles and the splittings that are vacuum-like. We also report measurements of the differential di-jet momentum imbalance ($A_{\rm{J}}$) for jets of varying \tsj. We find no qualitative differences in energy loss signatures for varying angular scales in the range $0.1 < $ \tsj $ < 0.3$, leading to the possible interpretation that energy loss in this population of high momentum di-jet pairs, is due to soft medium-induced gluon radiation from a single color-charge as it traverses the medium. ","Differential measurements of jet substructure and partonic energy loss
in Au$+$Au collisions at $\sqrt{s_{\rm{NN}}} =200$ GeV",7,"['New paper time from @RHIC_STAR! The preliminary results have been public for a few years now so i thought i would just highlight the main takeaways from the paper :) 1/x', 'This is essentially the first measurement that looks at a canonical energy loss observable (dijet asymmetry) as a function of an actual resolution scale (subjet opening angle) in the medium 2/x', 'There have been other measurements of energy loss (such as RAA) vs jet mass or mass/pT (handle of virtuality) and the Softdrop groomed jet radius but its not a scale in the medium 3/x', 'thats where this measurements comes into play. we highlight the importance of an angle observable thats sensitive to physics but also simultaneously robust to the heavy ion underlying event 4/x', 'subjets to the rescue! using a formation time argument, we come to our interpretation of the data is that a shorter medium path length and the dijet selection bias at RHIC energies, leads to ... 5/x', 'an observation of jet quenching of a single color charge! this is the QCD analogue of the LPM effect in action! Like i said before this is our understanding of the data and i would love for our community to use this data and study different hypothesis 6/x', ""my view - these types of differential measurements are how we will understand the QGP's transport properties. isolate specific populations of jet topology and systematically explore modifications in the emissions phase space! this will lead us to space-time evolution 7/x""]",21,09,1475
21,91,1337064147645816832,1238925341747343361,Aaron Barth,"New paper today by Ben Boizelle, on black hole mass measurements in radio galaxies NGC 315 and NGC 4261 using ALMA data: NGC 315 is one of the best ALMA targets we've found so far, with a clear detection of high-velocity rotation from gas inside the sphere of influence of a 2 billion solar mass black hole.",https://arxiv.org/abs/2012.04669,"We present Atacama Large Millimeter/submillimeter Array (ALMA) Cycle 5 and Cycle 6 observations of CO(2$-$1) and CO(3$-$2) emission at 0.2''$-$0.3'' resolution in two radio-bright, brightest group/cluster early-type galaxies, NGC 315 and NGC 4261. The data resolve CO emission that extends within their black hole (BH) spheres of influence ($r_\mathrm{g}$), tracing regular Keplerian rotation down to just tens of parsecs from the BHs. The projected molecular gas speeds in the highly inclined ($i>60^\circ$) disks rises at least 500 km s$^{-1}$ near their galaxy centers. We fit dynamical models of thin-disk rotation directly to the ALMA data cubes, and account for the extended stellar mass distributions by constructing galaxy surface brightness profiles corrected for a range of plausible dust extinction values. The best-fit models yield $(M_\mathrm{BH}/10^9\,M_\odot)=2.08\pm0.01(\mathrm{stat})^{+0.32}_{-0.14}(\mathrm{sys})$ for NGC 315 and $(M_\mathrm{BH}/10^9\,M_\odot)=1.67\pm0.10(\mathrm{stat})^{+0.39}_{-0.24}(\mathrm{sys})$ for NGC 4261, the latter of which is larger than previous estimates by a factor of $\sim$3. The BH masses are broadly consistent with the relations between BH masses and host galaxy properties. These are among the first ALMA observations to map dynamically cold gas kinematics well within the BH-dominated regions of radio galaxies, resolving the respective $r_\mathrm{g}$ by factors of $\sim$5$-$10. The observations demonstrate ALMA's ability to precisely measure BH masses in active galaxies, which will enable more confident probes of accretion physics for the most massive galaxies. ","Black Hole Mass Measurements of Radio Galaxies NGC 315 and NGC 4261
Using ALMA CO Observations",2,"['New paper today by Ben Boizelle, on black hole mass measurements in radio galaxies NGC 315 and NGC 4261 using ALMA data:\n\n ', ""NGC 315 is one of the best ALMA targets we've found so far, with a clear detection of high-velocity rotation from gas inside the sphere of influence of a 2 billion solar mass black hole.""]",20,12,321
22,93,1273627434005471233,931179613178515459,Adilson E. Motter,"Our Science Advances paper “Spontaneous Oscillations and Negative-Conductance Transitions in Microfluidic Networks” is now in the arXiv: It is all about networks designed to exhibit nonlinear behaviors that enable new built-in flow control capabilities. These networks are shown to exhibit oscillatory output patterns, bistable flow states, hysteresis, signal amplification, and negative-conductance transitions, all without reliance on dedicated external control hardware, movable parts, flexible components, OR oscillatory inputs!",https://arxiv.org/abs/2006.09400,"The tendency for flows in microfluidic systems to behave linearly poses a challenge for designing integrated flow control schemes to carry out complex fluid processing tasks. This hindrance has led to the use of numerous external control devices to manipulate flows, thereby thwarting the potential scalability and portability of lab-on-a-chip technology. Here, we devise a microfluidic network exhibiting nonlinear flow dynamics that enable new mechanisms for on-chip flow control. This network is shown to exhibit oscillatory output patterns, bistable flow states, hysteresis, signal amplification, and negative-conductance transitions, all without reliance on dedicated external control hardware, movable parts, flexible components, or oscillatory inputs. These dynamics arise from nonlinear fluid inertia effects in laminar flows that we amplify and harness through the design of the network geometry. We suggest that these results, which are supported by fluid dynamical simulations and theoretical modeling, have the potential to inspire development of new built-in control capabilities, such as on-chip timing and synchronized flow patterns. ","Spontaneous oscillations and negative-conductance transitions in
microfluidic networks",2,"['Our Science Advances paper “Spontaneous Oscillations and Negative-Conductance Transitions in Microfluidic Networks” is now in the arXiv: \n\nIt is all about networks designed to exhibit nonlinear behaviors that enable new built-in flow control capabilities. ', 'These networks are shown to exhibit oscillatory output patterns, bistable flow states, hysteresis, signal amplification, and negative-conductance transitions, all without reliance on dedicated external control hardware, movable parts, flexible components, OR oscillatory inputs!']",20,06,552
23,54,1429482848495546372,1291064871510069249,Calvin McPhail-Snyder,"New paper up on the arXiv! I'm a bit belated tweeting about it because I was at the lake last week when it went up This paper works out the details of a construction of Kashaev and Reshetikhin. The idea is to improve ordinary quantum invariants of topological objects by ""twisting"" them with geometric data, in this case, reps of π_1 into SL_2(C) My thesis was on a similar construction, which has considerably more technical issues. This paper is nice because we can avoid them and do some explicit, algebraic computations. I am hopeful that this sort of thing will prove useful for geometric topology, especially hyperbolic knot theory. If you're interested in doing this sort of thing, please let me know!",https://arxiv.org/abs/2108.06561,"Kashaev and Reshetikhin previously described a way to define holonomy invariants of knots using quantum $\mathfrak{sl}_2$ at a root of unity. These are generalized quantum invariants that depend both on a knot $K$ and a representation of the fundamental group of its complement into $\mathrm{SL}_2(\mathbb{C})$; equivalently, we can think of $\mathrm{KR}(K)$ as associating to each knot a function on (a slight generalization of) its character variety. In this paper we clarify some details of their construction. In particular, we show that for $K$ a hyperbolic knot $\mathrm{KaRe}(K)$ can be viewed as a function on the geometric component of the $A$-polynomial curve of $K$. We compute some examples at a third root of unity. ",Kashaev--Reshetikhin Invariants of Links,4,"[""New paper up on the arXiv! I'm a bit belated tweeting about it because I was at the lake last week when it went up "", 'This paper works out the details of a construction of Kashaev and Reshetikhin. The idea is to improve ordinary quantum invariants of topological objects by ""twisting"" them with geometric data, in this case, reps of π_1 into SL_2(C)', 'My thesis was on a similar construction, which has considerably more technical issues. This paper is nice because we can avoid them and do some explicit, algebraic computations.', ""I am hopeful that this sort of thing will prove useful for geometric topology, especially hyperbolic knot theory. If you're interested in doing this sort of thing, please let me know!""]",21,08,717
24,63,1286195018890387456,1163883824,Shivangee Rathi,"New paper on dwarf galaxies! (and a first for me) in collaboration with Michele Mastropietro (@mic_mas), Sven De Rijcke, Carmen Gallart, Edouard Bernard and Robbert Verbeke (@Rbhfd): ""Observations"" of simulated dwarf galaxies (). In this paper, we observationally analyze a set of realistically simulated MoRIA dwarf galaxies (from the work of @Rbhfd), to look for any systematic bias in comparison of simulations with observations. In particular, we use the synthetic color-magnitude diagram (CMD) technique to reconstruct the star formation history (SFH) of dwarf galaxies. We construct CMDs of simulated dwarfs from simulation star particle data and add observational errors to mimic real observations. On comparing the reconstructed SFH from the synthetic CMD method with the ground truth from the simulation star particle data, we overall find a good agreement. Our paper also explores: 1) the effect of dust extinction on the CMD, and hence on the reconstructed SFH, and 2) the dependence of the SFH on the aperture used. We also analyze infrared CMDs, in view of the next generation astronomical facilities.",https://arxiv.org/abs/2007.11413,"Apparent deviations between properties of dwarf galaxies from observations and simulations are known to exist, such as the ""Missing Dwarfs"" problem, the too-big-to-fail problem, and the cusp-core problem, to name a few. Recent studies have shown that these issues can at least be partially resolved by taking into account the systematic differences between simulations and observations. This work aims to investigate and address any systematic differences affecting the comparison of simulations with observations. To this aim, we analyzed a set of 24 realistically simulated MoRIA (Models of Realistic dwarfs In Action) dwarf galaxies in an observationally motivated way. We first constructed ""observed"" color-magnitude diagrams (CMDs) of the simulated dwarf galaxies in the typically used V- and I-bands. Then we used the CMD-fitting method to recover their star-formation histories (SFHs) from their observed CMDs. These solved SFHs were then directly compared to the true SFHs from the simulation star-particle data, mainly in terms of the star-formation rate (SFR) and the age-metallicity relation (AMR). We applied a dust extinction prescription to the simulation data to produce observed CMDs affected by dust in star-formation regions. Since future facilities, such as the JWST and E-ELT will focus on the near IR rather than the optical, we also constructed and analyzed CMDs using the I- and H-bands. We find a very good agreement between the recovered and the true SFHs of all the simulated dwarf galaxies in our sample, from the synthetic CMD analysis of their V-I versus I as well as the I-H versus H CMDs. Dust leads to an underestimation of the SFR during the last few hundred million years. Overall, our analysis indicates that quantities like SFR and AMR derived from the photometric observations of galaxies are directly comparable to their simulated counterparts. ","""Observations"" of simulated dwarf galaxies: Star-formation histories
from color-magnitude diagrams",5,"['New paper on dwarf galaxies! (and a first for me) in collaboration with Michele Mastropietro (@mic_mas), Sven De Rijcke, Carmen Gallart, Edouard Bernard and Robbert Verbeke (@Rbhfd): ""Observations"" of simulated dwarf galaxies ().', 'In this paper, we observationally analyze a set of realistically simulated MoRIA dwarf galaxies (from the work of @Rbhfd), to look for any systematic bias in comparison of simulations with observations.', 'In particular, we use the synthetic color-magnitude diagram (CMD) technique to reconstruct the star formation history (SFH) of dwarf galaxies. We construct CMDs of simulated dwarfs from simulation star particle data and add observational errors to mimic real observations.', 'On comparing the reconstructed SFH from the synthetic CMD method with the ground truth from the simulation star particle data, we overall find a good agreement.', 'Our paper also explores:\n1) the effect of dust extinction on the CMD, and hence on the reconstructed SFH, and\n2) the dependence of the SFH on the aperture used.\n\nWe also analyze infrared CMDs, in view of the next generation astronomical facilities.']",20,07,1119
25,124,1334432619979792384,772809603046334464,Jonathan Mackey,"It was fun to work on this new paper led by @LucaGrassitelli from @UniBonn, trying to understand and simulate 10-year cyclic variations in Luminous Blue Variables. Plus North-South collaboration with our friends @ArmaghPlanet, @AstroAndreas and @jorick73. ",https://arxiv.org/abs/2012.00023,"Luminous blue variables (LBVs) are hot, very luminous massive stars displaying large quasi-periodic variations in brightness, radius,and photospheric temperature, on timescales of years to decades. The physical origin of this variability, called S Doradus cycle after its prototype, has remained elusive. Here, we study the feedback of stellar wind mass-loss on the envelope structure in stars near the Eddington limit. We perform a time-dependent hydrodynamic stellar evolutionary calculation, applying a stellar wind mass-loss prescription with a temperature-dependence inspired by the predicted systematic increase in mass-loss rates below 25 kK. We find that when the wind mass-loss rate crosses a well-defined threshold, a discontinuous change in the wind base conditions leads to a restructuring of the stellar envelope. The induced drastic radius and temperature changes, which occur on the thermal timescale of the inflated envelope, impose in turn mass-loss variations that reverse the initial changes, leading to a cycle that lacks a stationary equilibrium configuration. Our proof-of-concept model broadly reproduces the typical observational phenomenology of the S Doradus variability. We identify three key physical ingredients needed to trigger the instability: inflated envelopes in close proximity to the Eddington limit, a temperature range where decreasing opacities do not lead to an accelerating outflow, and a mass-loss rate that increases with decreasing temperature, crossing a critical threshold value within this temperature range. Our scenario and model provide testable predictions, and open the door for a consistent theoretical treatment of the LBV phase in stellar evolution, with consequences for their further evolution as single stars or in binary systems. ","Wind-envelope interaction as the origin of the slow cyclic brightness
variations of luminous blue variables",1,"['It was fun to work on this new paper led by @LucaGrassitelli from @UniBonn, trying to understand and simulate 10-year cyclic variations in Luminous Blue Variables. Plus North-South collaboration with our friends @ArmaghPlanet, @AstroAndreas and @jorick73.\n']",20,12,262
26,114,1456318555599687682,955103299,Marina Radulaski,The 4th figure of #Rlab new perspective paper is a masterpiece assembled by @sridhar_majety reviewing 3 generations of Quantum Repeaters. Silicon carbide color centers have superior coherence and spectral properties for these key quantum network elements ,https://arxiv.org/abs/2111.00136,"Color centers in wide band gap semiconductors are prominent candidates for solid-state quantum technologies due to their attractive properties including optical interfacing, long coherence times, spin-photon and spin-spin entanglement, as well as the potential for scalability. Silicon carbide color centers integrated into photonic devices span a wide range of applications in quantum information processing, in a material platform with quantum-grade wafer availability and advanced processing capabilities. Recent progress in emitter generation and characterization, nanofabrication, device design, and quantum optical studies have amplified the scientific interest in this platform. We provide a conceptual and quantitative analysis of the role of silicon carbide integrated photonics in three key application areas: quantum networking, simulation, and computing. ",Quantum Information Processing With Integrated Silicon Carbide Photonics,1,['The 4th figure of #Rlab new perspective paper is a masterpiece assembled by @sridhar_majety reviewing 3 generations of Quantum Repeaters. Silicon carbide color centers have superior coherence and spectral properties for these key quantum network elements \n\n '],21,11,276
27,106,1468642899285925892,979379437069271043,Pedro Machado,"New paper with @RyanPlestid, Brdar and de Gouvêa. We predict observable resonant nu-electron scattering events at @FASERexperiment! All standard physics. Having collaborators like that makes life much easier ;-) Check out this excellent thread by Ryan! ",https://arxiv.org/abs/2112.03283,"We consider the resonant production and detection of charged mesons in existing and near-future neutrino scattering experiments with $E_\nu \lesssim 1$ TeV, characteristic of high-energy atmospheric neutrinos or collider-sourced neutrino beams. The most promising candidate is the reaction $\bar{\nu}_e e^-\rightarrow \rho^-\rightarrow \pi^- \pi^0$. We discuss detection prospects at the LHC's forward physics facility with nuclear emulsion (FASER$\nu$) and liquid argon detectors (FLArE) and estimate the number of expected resonance-mediated events in the existing data set of IceCube. We also outline possible detection strategies for the different experimental environments. We predict dozens of events at the forward physics facility and identify cuts with order one signal efficiency that could potentially suppress backgrounds at FASER$\nu$, yielding a signal-to-background ratio larger than 1. Antineutrino-induced $s$-channel meson resonances are yet unobserved Standard Model scattering processes which offer a realistic target for near-term experiments. ",Resonances in $\bar\nu_e-e^-$ scattering below a TeV,1,"['New paper with @RyanPlestid, Brdar and de Gouvêa. We predict observable resonant nu-electron scattering events at @FASERexperiment! All standard physics. Having collaborators like that makes life much easier ;-)\n\nCheck out this excellent thread by Ryan!\n ']",21,12,266
28,17,1012126279871627265,2902658140,Sander Dieleman,Stacking WaveNet autoencoders on top of each other leads to raw audio models that can capture long-range structure in music. Check out our new paper: Listen to some minute-long piano music samples: more unconditional samples and reconstructions are available here: ,https://arxiv.org/abs/1806.10474,"Realistic music generation is a challenging task. When building generative models of music that are learnt from data, typically high-level representations such as scores or MIDI are used that abstract away the idiosyncrasies of a particular performance. But these nuances are very important for our perception of musicality and realism, so in this work we embark on modelling music in the raw audio domain. It has been shown that autoregressive models excel at generating raw audio waveforms of speech, but when applied to music, we find them biased towards capturing local signal structure at the expense of modelling long-range correlations. This is problematic because music exhibits structure at many different timescales. In this work, we explore autoregressive discrete autoencoders (ADAs) as a means to enable autoregressive models to capture long-range correlations in waveforms. We find that they allow us to unconditionally generate piano music directly in the raw audio domain, which shows stylistic consistency across tens of seconds. ","The challenge of realistic music generation: modelling raw audio at
scale",2,"['Stacking WaveNet autoencoders on top of each other leads to raw audio models that can capture long-range structure in music. Check out our new paper: \n\nListen to some minute-long piano music samples: ', 'more unconditional samples and reconstructions are available here: https://t.co/Mu6Cp11LKw']",18,06,292
29,71,1427210696685756419,1425340957,Alex Haber,"We (Mark Alford, Ziyuan Yhang, Steven Harris and me) recently published a new paper on beta equilibration in neutron star mergers: . What is it about? 1/n In a neutron star, weak interactions lead to various nucleonic reactions like neutron decay, electron capture and so on. These reactions change the chemical composition of the star, like the proton fraction (yes neutron stars do have some other particles, not only neutrons!) 2/n If you wait long enough, these reactions will balance and the chemical composition stays constant. Traditionally, we have always studied that at T=0. Although neutron stars are quite hot by traditional standards (10^6 K), in a particle physics context they are actually very cold. At T=0, the reactions balance each other when the chemical potential of the neutron equals the sum of the proton and electron chemical potential. But neutron star mergers require us to reexamine a lot of our prior assumptions and findings. 4/n Mergers are much hotter than isolated, old neutron stars. We find that in these conditions, the traditional beta equilibrium condition needs to be modified. The correction term reach magnitudes of more than 10 MeV. 5/n The corrected nucleonic rates change by up to an order of magnitude. How our results influence cooling or the outcome of neutron star mergers remains to be seen, but there is interesting work from @ProfNilsAnd and collaborators coming 6/6",https://arxiv.org/abs/2108.03324,"We calculate the nonzero-temperature correction to the beta equilibrium condition in nuclear matter under neutron star merger conditions, in the temperature range $1\,$MeV$ < T \lesssim 5\,$MeV. We improve on previous work by using a consistent description of nuclear matter based on the IUF and SFHo relativistic mean field models. This includes using relativistic dispersion relations for the nucleons, which we show is essential in these models. We find that the nonzero-temperature correction can be of order $10$ to $20\,$MeV, and plays an important role in the correct calculation of Urca rates, which can be wrong by factors of $10$ or more if it is neglected. ",Beta equilibrium under neutron star merger conditions,6,"['We (Mark Alford, Ziyuan Yhang, Steven Harris and me) recently published a new paper on beta equilibration in neutron star mergers: . What is it about? 1/n', 'In a neutron star, weak interactions lead to various nucleonic reactions like neutron decay, electron capture and so on. These reactions change the chemical composition of the star, like the proton fraction (yes neutron stars do have some other particles, not only neutrons!) 2/n', 'If you wait long enough, these reactions will balance and the chemical composition stays constant. Traditionally, we have always studied that at T=0. Although neutron stars are quite hot by traditional standards (10^6 K), in a particle physics context they are actually very cold.', 'At T=0, the reactions balance each other when the chemical potential of the neutron equals the sum of the proton and electron chemical potential. But neutron star mergers require us to reexamine a lot of our prior assumptions and findings. 4/n', 'Mergers are much hotter than isolated, old neutron stars. We find that in these conditions, the traditional beta equilibrium condition needs to be modified. The correction term reach magnitudes of more than 10 MeV. 5/n', 'The corrected nucleonic rates change by up to an order of magnitude. 
How our results influence cooling or the outcome of neutron star mergers remains to be seen, but there is interesting work from @ProfNilsAnd and collaborators coming 6/6']",21,08,1424
30,125,1318356831878537216,661613,"Alex Hanna, Ph.D., NREMT","New paper: @teeepain and I wrote a short piece for the recent #CSCW2020 workshop on Reconsidering Scale and Scaling, wherein we try to map the dimensions of ""scale thinking"" in Valley culture and map out resistances in mutual aid This owes a lot to the intervention that @gleemie has made on ""design thinking"" and @ruha9's discussion of design, and was initially inspired by Bed-Stuy mutual aid organizers (@bedstuystrong) who wanted to build tech that ""didn't scale."" @EZanichkows Thank you, Elizabeth ❤️ I mostly yell at engineers about data. Can't wait to have a thanksgiving with you in the near future! @kaytwo @gleemie @ruha9 @bedstuystrong Oh wow! I'd love to talk to you more about that sometime.",https://arxiv.org/abs/2010.08850,"At the heart of what drives the bulk of innovation and activity in Silicon Valley and elsewhere is scalability. This unwavering commitment to scalability -- to identify strategies for efficient growth -- is at the heart of what we refer to as ""scale thinking."" Whether people are aware of it or not, scale thinking is all-encompassing. It is not just an attribute of one's product, service, or company, but frames how one thinks about the world (what constitutes it and how it can be observed and measured), its problems (what is a problem worth solving versus not), and the possible technological fixes for those problems. This paper examines different facets of scale thinking and its implication on how we view technology and collaborative work. We argue that technological solutions grounded in scale thinking are unlikely to be as liberatory or effective at deep, systemic change as their purveyors imagine. Rather, solutions which resist scale thinking are necessary to undo the social structures which lie at the heart of social inequality. We draw on recent work on mutual aid networks and propose questions to ask of collaborative work systems as a means to evaluate technological solutions and guide designers in identifying sites of resistance to scale thinking. ",Against Scale: Provocations and Resistances to Scale Thinking,4,"['New paper: @teeepain and I wrote a short piece for the recent #CSCW2020 workshop on Reconsidering Scale and Scaling, wherein we try to map the dimensions of ""scale thinking"" in Valley culture and map out resistances in mutual aid ', 'This owes a lot to the intervention that @gleemie has made on ""design thinking"" \nand @ruha9\'s discussion of design, and was initially inspired by Bed-Stuy mutual aid organizers (@bedstuystrong) who wanted to build tech that ""didn\'t scale."" \n\nhttps://t.co/froDW042Pi', ""@EZanichkows Thank you, Elizabeth ❤️ I mostly yell at engineers about data. Can't wait to have a thanksgiving with you in the near future!"", ""@kaytwo @gleemie @ruha9 @bedstuystrong Oh wow! I'd love to talk to you more about that sometime.""]",20,10,719
31,120,1356995005135601664,352507474,Gorka Azkune,"Check out our new paper entitled ""Inferring spatial relations from textual descriptions of images"" with @Aitzole @oierldl @IgnacioArganda @Aitor57 and @eagirre where we show that NNs learn prototypical spatial relations btw entities The paper has been published by the Pattern Recognition journal We share our code and the used dataset publicly (specially created for this task and called REC-COCO) ",https://arxiv.org/abs/2102.00997,"Generating an image from its textual description requires both a certain level of language understanding and common sense knowledge about the spatial relations of the physical entities being described. In this work, we focus on inferring the spatial relation between entities, a key step in the process of composing scenes based on text. More specifically, given a caption containing a mention to a subject and the location and size of the bounding box of that subject, our goal is to predict the location and size of an object mentioned in the caption. Previous work did not use the caption text information, but a manually provided relation holding between the subject and the object. In fact, the used evaluation datasets contain manually annotated ontological triplets but no captions, making the exercise unrealistic: a manual step was required; and systems did not leverage the richer information in captions. Here we present a system that uses the full caption, and Relations in Captions (REC-COCO), a dataset derived from MS-COCO which allows to evaluate spatial relation inference from captions directly. Our experiments show that: (1) it is possible to infer the size and location of an object with respect to a given subject directly from the caption; (2) the use of full text allows to place the object better than using a manually annotated relation. Our work paves the way for systems that, given a caption, decide which entities need to be depicted and their respective location and sizes, in order to then generate the final image. ",Inferring spatial relations from textual descriptions of images,2,"['Check out our new paper entitled ""Inferring spatial relations from textual descriptions of images"" with @Aitzole @oierldl @IgnacioArganda @Aitor57 and @eagirre where we show that NNs learn prototypical spatial relations btw entities ', 'The paper has been published by the Pattern Recognition journal https://t.co/gJb9Gvq1EH We share our code and the used dataset publicly (specially created for this task and called REC-COCO) https://t.co/j4s7SfzCwt']",21,02,419
32,169,1509849211533201412,584142796,Carl Rodriguez,"Paper day (not an April Fool's joke, pun notwithstanding)! The second in our Great Balls of FIRE series, where we (me, @MikeGrudic, @ZachHafen, Astrid Lambert +) study the formation and evolution of globular clusters in a realistic MHD galaxy simulation : First paper in the series () identified all the clusters that formed over the history of the MHD m12i FIRE-2 galaxy, and using detailed simulations of collapsing molecular clouds, mapped them to a catalog of young cluster masses, radii, metallicities, etc: This paper () takes the next step: integrating ~1000 of the most massive clusters (10^5 to 10^7 stars) in the catalog forward with a Hénon-style N-body code to the present day, with the initial conditions and tidal fields taken directly from the galaxy sim. This is, to my knowledge, the first study of its kind using a star-by-star N-body approach that can integrate clusters massive enough to become the old globular clusters we see in the MW and other galaxies. The model predicts about 148 GCs in the m12i galaxy at z=0: Because we're using realistic star cluster models, we can extract v-band surface brightness profiles, fit them to King 1966 models like observers do, and directly compare them to the properties of GCs in the MW and M31. Here, we show the masses and radii of our clusters. Our clusters are typically smaller (in both core radius and mass) than GCs in the MW and M31. This turns out to be related to the specific formation history of the host galaxy. It turns out that clusters born later tend to lose mass more quickly. At the same time, forming GCs later means they form at higher metallicities, and retain fewer stellar mass black holes due to natal kicks. This means they undergo core collapse FASTER than older, metal poor GCs. There is a LOT more information and analysis in the paper, mapping out how exactly the history of this galaxy shapes its z=0 GC population, so I encourage you to take a look! Damnit, Lamberts, not Lambert! @evbauer_astro @MikeGrudic @ZachHafen Yeah I completely forgot about April fools. Course one could argue that posting a paper with an April Fools-like title but real content is the greatest form of the April fools paper…",https://arxiv.org/abs/2203.16547,"The current generation of galaxy simulations can resolve individual giant molecular clouds, the progenitors of dense star clusters. But the evolutionary fate of these young massive clusters (YMCs), and whether they can become the old globular clusters (GCs) observed in many galaxies, is determined by a complex interplay of internal dynamical processes and external galactic effects. We present the first star-by-star $N$-body models of massive ($N\sim10^5-10^7$) star clusters formed in a FIRE-2 MHD simulation of a Milky Way-mass galaxy, with all of the relevant initial conditions and galactic tidal effects extracted directly from the cosmological simulation. We randomly select 895 ($\sim 30\%$) of the YMCs with $ > 6\times10^4M_{\odot}$ from Grudi\'c et al. 2022 and integrate them to the present day using the Cluster Monte Carlo Code, CMC. This procedure predicts a MW-like system with 148 GCs, most of which were formed during the early, bursty mode of star formation in the galaxy. Our GCs are younger, less massive, and more core collapsed than clusters in the Milky Way or M31. 
This is a direct result of the assembly history and age-metallicity relationship of the GCs' host galaxy: younger clusters are preferentially born in stronger galactic tidal fields and initially retain fewer stellar-mass black holes, causing them to lose mass faster and reach core collapse sooner than their older counterparts. Our results suggest that the masses and core/half-light radii of GCs are shaped not only by internal dynamical processes, but by the specific evolutionary history of their host galaxies as well. These results emphasize that $N$-body studies with realistic stellar physics are crucial to understanding the evolution and present-day properties of galactic GC systems. ","Great Balls of FIRE II: The evolution and destruction of star clusters
across cosmic time in a Milky Way-mass galaxy",10,"[""Paper day (not an April Fool's joke, pun notwithstanding)! The second in our Great Balls of FIRE series, where we (me, @MikeGrudic, @ZachHafen, Astrid Lambert +) study the formation and evolution of globular clusters in a realistic MHD galaxy simulation : "", 'First paper in the series (https://t.co/nCis74DJqX) identified all the clusters that formed over the history of the MHD m12i FIRE-2 galaxy, and using detailed simulations of collapsing molecular clouds, mapped them to a catalog of young cluster masses, radii, metallicities, etc:', 'This paper (https://t.co/pIF78XsZOH) takes the next step: integrating ~1000 of the most massive clusters (10^5 to 10^7 stars) in the catalog forward with a Hénon-style N-body code to the present day, with the initial conditions and tidal fields taken directly from the galaxy sim.', ""This is, to my knowledge, the first study of its kind using a star-by-star N-body approach that can integrate clusters massive enough to become the old globular clusters we see in the MW and other galaxies. The model predicts about 148 GCs in the m12i galaxy at z=0: https://t.co/4GugJzHoVB"", ""Because we're using realistic star cluster models, we can extract v-band surface brightness profiles, fit them to King 1966 models like observers do, and directly compare them to the properties of GCs in the MW and M31. Here, we show the masses and radii of our clusters. https://t.co/PlUXAT0ecj"", 'Our clusters are typically smaller (in both core radius and mass) than GCs in the MW and M31. This turns out to be related to the specific formation history of the host galaxy. It turns out that clusters born later tend to lose mass more quickly.', 'At the same time, forming GCs later means they form at higher metallicities, and retain fewer stellar mass black holes due to natal kicks. This means they undergo core collapse FASTER than older, metal poor GCs.', 'There is a LOT more information and analysis in the paper, mapping out how exactly the history of this galaxy shapes its z=0 GC population, so I encourage you to take a look!', 'Damnit, Lamberts, not Lambert!', '@evbauer_astro @MikeGrudic @ZachHafen Yeah I completely forgot about April fools. Course one could argue that posting a paper with an April Fools-like title but real content is the greatest form of the April fools paper…']",22,03,2235
33,48,1331270116290523136,891766035250122757,Rosie Talbot,"Very excited to share my first paper which is out on arXiv today! I describe a new numerical model for AGN jets that are powered by the spin of the black hole and test it in simulations of the central regions of a Seyfert galaxy. Find it here: @gusbeane I just interpolated the sim output onto a regular grid. If the grid is sufficiently fine this picks out AREPO’s voronoi cells :)",https://arxiv.org/abs/2011.10580,"Jets launched by active galactic nuclei (AGN) are believed to play a significant role in shaping the properties of galaxies and provide an energetically viable mechanism through which galaxies can become quenched. Here we present a novel AGN feedback model, which we have incorporated into the AREPO code, that evolves the black hole mass and spin as the accretion flow proceeds through a thin $\alpha$-disc which we self-consistently couple to a Blandford-Znajek jet. We apply our model to the central region of a typical radio-loud Seyfert galaxy embedded in a hot circumgalactic medium (CGM). We find that jets launched into high pressure environments thermalise efficiently due to the formation of recollimation shocks and the vigorous instabilities that these shocks excite increase the efficiency of the mixing of CGM and jet material. The beams of more overpressured jets, however, are not as readily disrupted by instabilities so the majority of the momentum flux at the jet base is retained out to the head, where the jet terminates in a reverse shock. All jets entrain a significant amount of cold circumnuclear disc material which, while energetically insignificant, dominates the lobe mass together with the hot, entrained CGM material. The jet power evolves significantly due to effective self-regulation by the black hole, fed by secularly-driven, intermittent mass flows. The direction of jets launched directly into the circumnuclear disc changes considerably due to effective Bardeen-Petterson torquing. Interestingly, these jets obliterate the innermost regions of the disc and drive large-scale, multi-phase, turbulent, bipolar outflows. ","Blandford-Znajek jets in galaxy formation simulations: method and
implementation",2,"['Very excited to share my first paper which is out on arXiv today! \nI describe a new numerical model for AGN jets that are powered by the spin of the black hole and test it in simulations of the central regions of a Seyfert galaxy. Find it here: ', '@gusbeane I just interpolated the sim output onto a regular grid. If the grid is sufficiently fine this picks out AREPO’s voronoi cells :)']",20,11,396
34,27,1265413436403412993,3105498192,Julie Clutterbuck,"New paper: the first two eigenvalues of a convex domain in hyperbolic space fail to socially distance Wait, that title was vetoed. Real title: the vanishing of the fundamental gap of convex domains in hyperbolic space With these fun people .. Guofang Wei, Alina Stancu, Hien Nguyen, Theodora Bourni, and Valentina Wheeler (pictured working while also scoffing pastries)",https://arxiv.org/abs/2005.11784,"For the Laplace operator with Dirichlet boundary conditions on convex domains in $\mathbb H^n$, $n\geq 2$, we prove that the product of the fundamental gap with the square of the diameter can be arbitrarily small for domains of any diameter. ",The vanishing of the fundamental gap of convex domains in $\mathbb H^n$,4,"['New paper: the first two eigenvalues of a convex domain in hyperbolic space fail to socially distance ', 'Wait, that title was vetoed. Real title: the vanishing of the fundamental gap of convex domains in hyperbolic space', 'With these fun people https://t.co/ODlta2S2e2', '.. Guofang Wei, Alina Stancu, Hien Nguyen, Theodora Bourni, and Valentina Wheeler (pictured working while also scoffing pastries)']",20,05,383
35,179,1494676841126416393,1327363823817412608,Maura Lally,"Browsing arXiv? Check out my first first-author paper, with @amvanderburg: we reassess the claimed detection of atmospheric variability on exoplanet HAT-P-7b, & find that the apparent variations could instead be caused by supergranulation on the host star. ",https://arxiv.org/abs/2202.08279,"We reassess the claimed detection of variability in the atmosphere of the hot Jupiter HAT-P-7 b, reported by Armstrong et al. (2016). Although astronomers expect hot Jupiters to have changing atmospheres, variability is challenging to detect. We looked for time variation in the phase curves of HAT-P-7 b in Kepler data using similar methods to Armstrong et al. (2016), and identified apparently significant variations similar to what they found. Numerous tests show the variations to be mostly robust to different analysis strategies. However, when we injected unchanging phase curve signals into the light curves of other stars and searched for variability, we often saw similar levels of variations as in the HAT-P-7 light curve. Fourier analysis of the HAT-P-7 light curve revealed background red noise from stellar supergranulation on timescales similar to the planet's orbital period. Tests of simulated light curves with the same level of noise as HAT-P-7's supergranulation show that this effect alone can cause the amplitude and phase offset variability we detect for HAT-P-7 b. Therefore, the apparent variations in HAT-P-7 b's atmosphere could instead be caused by non-planetary sources, most likely photometric variability due to supergranulation on the host star. ","Reassessing the Evidence for Time Variability in the Atmosphere of the
Exoplanet HAT-P-7 b",1,"['Browsing arXiv? Check out my first first-author paper, with @amvanderburg: we reassess the claimed detection of atmospheric variability on exoplanet HAT-P-7b, & find that the apparent variations could instead be caused by supergranulation on the host star. ']",22,02,263
36,72,1328967361265856513,1392935011,Ole-Chr. Granmo,"New paper from @cairuia's talented PhD student Bimal Bhattarai! ""Measuring the Novelty of Natural Language Text Using the Conjunctive Clauses of a #TsetlinMachine Text Classifier"" Bimal Bhattarai, Ole-Christoffer Granmo, Lei Jiao #MachineLearning ",https://arxiv.org/abs/2011.08755,"Most supervised text classification approaches assume a closed world, counting on all classes being present in the data at training time. This assumption can lead to unpredictable behaviour during operation, whenever novel, previously unseen, classes appear. Although deep learning-based methods have recently been used for novelty detection, they are challenging to interpret due to their black-box nature. This paper addresses \emph{interpretable} open-world text classification, where the trained classifier must deal with novel classes during operation. To this end, we extend the recently introduced Tsetlin machine (TM) with a novelty scoring mechanism. The mechanism uses the conjunctive clauses of the TM to measure to what degree a text matches the classes covered by the training data. We demonstrate that the clauses provide a succinct interpretable description of known topics, and that our scoring mechanism makes it possible to discern novel topics from the known ones. Empirically, our TM-based approach outperforms seven other novelty detection schemes on three out of five datasets, and performs second and third best on the remaining, with the added benefit of an interpretable propositional logic-based representation. ","Measuring the Novelty of Natural Language Text Using the Conjunctive
Clauses of a Tsetlin Machine Text Classifier",1,"['New paper from @cairuia\'s talented PhD student Bimal Bhattarai!\n""Measuring the Novelty of Natural Language Text Using the Conjunctive Clauses of a #TsetlinMachine Text Classifier""\nBimal Bhattarai, Ole-Christoffer Granmo, Lei Jiao \n #MachineLearning ']",20,11,260
37,102,1273460843548565504,96779364,Arnab Bhattacharyya,"New paper: ""Efficient Statistics for Sparse Graphical Models from Truncated Samples"" with Rathin Desai, @SaiGaneshNagar1, and Yannis Panageas. A common issue in statistical inference is *selection bias*. This occurs when a dataset contains missing samples, and whether a sample is missing or not depends preferentially on its value. E.g., poor people may not want to report their income in a poll. In general, inference is impossible when there's selection bias. E.g., only millionaires may report their earnings skewing our mean income estimate. But what if we make parametric assumptions? Like, suppose the underlying distribution is gaussian. We look at truncated samples: a sample x is seen only when it satisfies some fixed property P and not seen otherwise. The property P can be arbitrary, but we assume that we can efficiently check whether an item satisfies P. The first result in our paper gives improved sample complexity bounds for learning a Gaussian graphical model from truncated samples. The # of samples scales with the sparsity of the precision matrix (unlike the previous work, ). The second result in our paper looks at high-dimensional sparse linear regression with truncated samples. Here, we make stronger assumptions to show that you only need O(k^2 log(d)) samples to recover the parameters upto bounded *entrywise* error (k=sparsity, d=dimension). For untruncated samples, the typical approach for both problems is to do l1-regularization, i.e. Lasso. We do the same. The main technical challenge is to show that the Lasso optimum falls in a region that is locally strongly convex. The big open question here is to make our results computationally effective. For gradient descent-type algorithms, it's not clear how to choose the initial point inside the strongly convex region. Another interesting direction is to remove the assumption that we can check the property P. In the low-dimensional setting, there has been exciting work along this line here: . Of late, my work has been following a pattern :) The general problem is impossible, @yudapearl, @eliasbareinboim and company () give conditions for statistical identifiability, and we show stronger conditions that imply finite sample guarantees.",https://arxiv.org/abs/2006.09735,"In this paper, we study high-dimensional estimation from truncated samples. We focus on two fundamental and classical problems: (i) inference of sparse Gaussian graphical models and (ii) support recovery of sparse linear models. (i) For Gaussian graphical models, suppose $d$-dimensional samples ${\bf x}$ are generated from a Gaussian $N(\mu,\Sigma)$ and observed only if they belong to a subset $S \subseteq \mathbb{R}^d$. We show that ${\mu}$ and ${\Sigma}$ can be estimated with error $\epsilon$ in the Frobenius norm, using $\tilde{O}\left(\frac{\textrm{nz}({\Sigma}^{-1})}{\epsilon^2}\right)$ samples from a truncated $\mathcal{N}({\mu},{\Sigma})$ and having access to a membership oracle for $S$. The set $S$ is assumed to have non-trivial measure under the unknown distribution but is otherwise arbitrary. (ii) For sparse linear regression, suppose samples $({\bf x},y)$ are generated where $y = {\bf x}^\top{{\Omega}^*} + \mathcal{N}(0,1)$ and $({\bf x}, y)$ is seen only if $y$ belongs to a truncation set $S \subseteq \mathbb{R}$. We consider the case that ${\Omega}^*$ is sparse with a support set of size $k$. 
Our main result is to establish precise conditions on the problem dimension $d$, the support size $k$, the number of observations $n$, and properties of the samples and the truncation that are sufficient to recover the support of ${\Omega}^*$. Specifically, we show that under some mild assumptions, only $O(k^2 \log d)$ samples are needed to estimate ${\Omega}^*$ in the $\ell_\infty$-norm up to a bounded error. For both problems, our estimator minimizes the sum of the finite population negative log-likelihood function and an $\ell_1$-regularization term. ",Efficient Statistics for Sparse Graphical Models from Truncated Samples,10,"['New paper: ""Efficient Statistics for Sparse Graphical Models from Truncated Samples"" with Rathin Desai, @SaiGaneshNagar1, and Yannis Panageas. ', 'A common issue in statistical inference is *selection bias*. This occurs when a dataset contains missing samples, and whether a sample is missing or not depends preferentially on its value. E.g., poor people may not want to report their income in a poll.', ""In general, inference is impossible when there's selection bias. E.g., only millionaires may report their earnings skewing our mean income estimate. But what if we make parametric assumptions? Like, suppose the underlying distribution is gaussian."", 'We look at truncated samples: a sample x is seen only when it satisfies some fixed property P and not seen otherwise. The property P can be arbitrary, but we assume that we can efficiently check whether an item satisfies P.', 'The first result in our paper gives improved sample complexity bounds for learning a Gaussian graphical model from truncated samples. The # of samples scales with the sparsity of the precision matrix (unlike the previous work, https://t.co/mwvngTJjeY).', 'The second result in our paper looks at high-dimensional sparse linear regression with truncated samples. Here, we make stronger assumptions to show that you only need O(k^2 log(d)) samples to recover the parameters upto bounded *entrywise* error (k=sparsity, d=dimension).', 'For untruncated samples, the typical approach for both problems is to do l1-regularization, i.e. Lasso. We do the same. The main technical challenge is to show that the Lasso optimum falls in a region that is locally strongly convex.', ""The big open question here is to make our results computationally effective. For gradient descent-type algorithms, it's not clear how to choose the initial point inside the strongly convex region."", 'Another interesting direction is to remove the assumption that we can check the property P. In the low-dimensional setting, there has been exciting work along this line here: https://t.co/3XsEQhSA9m.', 'Of late, my work has been following a pattern :) The general problem is impossible, @yudapearl, @eliasbareinboim and company (https://t.co/puAVNWltqX) give conditions for statistical identifiability, and we show stronger conditions that imply finite sample guarantees.']",20,06,2252
38,126,1488938966485250050,226366834,Hattie Zhou,"[New paper 🚨] You’ve heard of catastrophic forgetting, but have you heard of *fortuitous* forgetting? In this paper, we show how forgetting can be a friend of learning in artificial neural nets. With @AnkitKVani, @hugo_larochelle, @AaronCourville (1/6) We introduce “forget-and-relearn” as a powerful paradigm for steering the learning trajectory of NNs. In this process, the forgetting step removes undesirable information and the relearning step amplifies features that are useful under different learning conditions. (2/6) We show that many iterative training algos (e.g. LT pruning, knowledge evolution, iterated learning) can be seen as instances of forget-and-relearn, and that their success is proportional to how well the forgetting operations can selectively remove undesirable information (3/6) Thus, by designing more targeted forgetting methods, we can significantly improve the performance of these algorithms. Our analyses also illustrate how the relearning stage amplifies features that are consistently useful, and how this corresponds to better generalization. (4/6) Forget-and-relearn suggests a shift in thinking from “learning desirable information” to “forgetting undesirable information”. It’s often easier to suppress unwanted behavior than to delineate good behavior, and we hope this work will inspire new forgetting-based algorithms (5/6) Lastly, I’m really grateful for my collaborators who made this project super fun to work on, as well as for the friends who offered valuable feedback throughout this work ❤️ Excited to present this at #ICLR2022 🥳 Feel free to reach out if you want to discuss these ideas! (6/6)",http://arxiv.org/abs/2202.00155,"Forgetting is often seen as an unwanted characteristic in both human and machine learning. However, we propose that forgetting can in fact be favorable to learning. We introduce ""forget-and-relearn"" as a powerful paradigm for shaping the learning trajectories of artificial neural networks. In this process, the forgetting step selectively removes undesirable information from the model, and the relearning step reinforces features that are consistently useful under different conditions. The forget-and-relearn framework unifies many existing iterative training algorithms in the image classification and language emergence literature, and allows us to understand the success of these algorithms in terms of the disproportionate forgetting of undesirable information. We leverage this understanding to improve upon existing algorithms by designing more targeted forgetting operations. Insights from our analysis provide a coherent view on the dynamics of iterative training in neural networks and offer a clear path towards performance improvements. ",Fortuitous Forgetting in Connectionist Networks,6,"['[New paper 🚨] You’ve heard of catastrophic forgetting, but have you heard of *fortuitous* forgetting? \n\nIn this paper, we show how forgetting can be a friend of learning in artificial neural nets. With @AnkitKVani, @hugo_larochelle, @AaronCourville (1/6)\n\n', 'We introduce “forget-and-relearn” as a powerful paradigm for steering the learning trajectory of NNs. \n\nIn this process, the forgetting step removes undesirable information and the relearning step amplifies features that are useful under different learning conditions. (2/6)', 'We show that many iterative training algos (e.g. 
LT pruning, knowledge evolution, iterated learning) can be seen as instances of forget-and-relearn, and that their success is proportional to how well the forgetting operations can selectively remove undesirable information\n(3/6)', 'Thus, by designing more targeted forgetting methods, we can significantly improve the performance of these algorithms.\n\nOur analyses also illustrate how the relearning stage amplifies features that are consistently useful, and how this corresponds to better generalization. (4/6)', 'Forget-and-relearn suggests a shift in thinking from “learning desirable information” to “forgetting undesirable information”. It’s often easier to suppress unwanted behavior than to delineate good behavior, and we hope this work will inspire new forgetting-based algorithms (5/6)', 'Lastly, I’m really grateful for my collaborators who made this project super fun to work on, as well as for the friends who offered valuable feedback throughout this work ❤️\n\nExcited to present this at #ICLR2022 🥳\n\nFeel free to reach out if you want to discuss these ideas! (6/6)']",22,02,1651
39,19,1333516537467002881,1291134968907698176,Jennifer J. Sun,"Our framework, using self- and programmatic supervision, improves data efficiency for behavioral experiments by up to 10x. New preprint with @Antihebbiann, Eric Zhan, @yisongyue & Pietro Perona Paper: Code release in the works :) @mikemcdannald @Antihebbiann @yisongyue Thank you! :)",https://arxiv.org/abs/2011.13917,"Specialized domain knowledge is often necessary to accurately annotate training sets for in-depth analysis, but can be burdensome and time-consuming to acquire from domain experts. This issue arises prominently in automated behavior analysis, in which agent movements or actions of interest are detected from video tracking data. To reduce annotation effort, we present TREBA: a method to learn annotation-sample efficient trajectory embedding for behavior analysis, based on multi-task self-supervised learning. The tasks in our method can be efficiently engineered by domain experts through a process we call ""task programming"", which uses programs to explicitly encode structured knowledge from domain experts. Total domain expert effort can be reduced by exchanging data annotation time for the construction of a small number of programmed tasks. We evaluate this trade-off using data from behavioral neuroscience, in which specialized domain knowledge is used to identify behaviors. We present experimental results in three datasets across two domains: mice and fruit flies. Using embeddings from TREBA, we reduce annotation burden by up to a factor of 10 without compromising accuracy compared to state-of-the-art features. Our results thus suggest that task programming and self-supervision can be an effective way to reduce annotation effort for domain experts. ",Task Programming: Learning Data Efficient Behavior Representations,2,"['Our framework, using self- and programmatic supervision, improves data efficiency for behavioral experiments by up to 10x.\n\nNew preprint with @Antihebbiann, Eric Zhan, @yisongyue & Pietro Perona\n\nPaper: \nCode release in the works :) ', '@mikemcdannald @Antihebbiann @yisongyue Thank you! :)']",20,11,297
40,136,1281486695515529216,994520594489204737,Jack Hare,"New arXiv paper, ""An Imaging Refractometer for Density Fluctuation Measurements in High Energy Density Plasmas"" . This new diagnostic analyses a laser beam passing through #turbulent #Plasma, recording ray locations and ray deflections in orthogonal axes. The idea is that the spectrum of ray deflections relates to the spectrum of density fluctuations within the turbulent plasma. These fluctuations were studied with digital Fourier transforms of shadowgraphy and schlieren images, but our new technique resolves much smaller scales. We compare our diagnostic to existing methods using ray tracing techniques, and present data from an experiment which shows the exquisite detail we can capture. Next up: going beyond the power spectrum and looking for signatures of intermittency in magnetohydrodynamic turbulence. Thanks to everyone who has helped with this paper, it's been a long journey. The data is from 2016, so it's taken a pandemic to finally analyse it. Along the way we've developed a really nice ray-tracing/ray-transfer-matrix code which we can use for all our optical diagnostics.",https://arxiv.org/abs/2007.04682,"We report on a recently developed laser-probing diagnostic which allows direct measurements of ray-deflection angles in one axis, whilst retaining imaging capabilities in the other axis. This allows us to measure the spectrum of angular deflections from a laser beam which passes though a turbulent high-energy-density plasma. This spectrum contains information about the density fluctuations within the plasma, which deflect the probing laser over a range of angles. %The principle of this diagnostic is described, along with our specific experimental realisation. We create synthetic diagnostics using ray-tracing to compare this new diagnostic with standard shadowgraphy and schlieren imaging approaches, which demonstrates the enhanced sensitivity of this new diagnostic over standard techniques. We present experimental data from turbulence behind a reverse shock in a plasma and demonstrate that this technique can measure angular deflections between 0.06 and 34 mrad, corresponding to a dynamic range of over 500. ","An Imaging Refractometer for Density Fluctuation Measurements in High
Energy Density Plasmas",4,"['New arXiv paper, ""An Imaging Refractometer for Density Fluctuation Measurements in High Energy Density Plasmas"" . This new diagnostic analyses a laser beam passing through #turbulent #Plasma, recording ray locations and ray deflections in orthogonal axes. ', 'The idea is that the spectrum of ray deflections relates to the spectrum of density fluctuations within the turbulent plasma. These fluctuations were studied with digital Fourier transforms of shadowgraphy and schlieren images, but our new technique resolves much smaller scales. https://t.co/CoHkahN2Cb', 'We compare our diagnostic to existing methods using ray tracing techniques, and present data from an experiment which shows the exquisite detail we can capture. Next up: going beyond the power spectrum and looking for signatures of intermittency in magnetohydrodynamic turbulence. https://t.co/uvxT9JaJW1', ""Thanks to everyone who has helped with this paper, it's been a long journey. The data is from 2016, so it's taken a pandemic to finally analyse it. Along the way we've developed a really nice ray-tracing/ray-transfer-matrix code which we can use for all our optical diagnostics.""]",20,07,1122
41,37,1453021100946952196,1109974447954382848,ăAlexandres Lazară,"New paper day! The IMF of the first stars is a crucial quantity of early cosmic history, but it is presently unknown. Our paper proposes a way to constrain its form using the brightest high-redshift transients (i.e., the first stars most violent deaths): ",https://arxiv.org/abs/2110.11956,"The emergence of the first, so-called Population III (Pop III), stars shaped early cosmic history in ways that crucially depends on their initial mass function (IMF). However, because of the absence of direct observational constraints, the detailed IMF remains elusive. Nevertheless, numerical simulations agree in broad terms that the first stars were typically massive and should often end their lives in violent, explosive deaths. These fates include extremely luminous pair-instability supernovae (PISNe) and bright gamma-ray bursts (GRBs), the latter arising from the collapse of rapidly rotating progenitor stars into black holes. These high-redshift transients are expected to be within the detection limits of upcoming space telescope missions, allowing to place effective constraints on the shape of the primordial IMF that is not easily accessible with other probes. This paper presents a framework to probe the Pop III IMF, utilizing the cosmological source densities of high-redshift PISNe and GRBs. Considering these transients separately could provide useful constraints on the Pop III IMF, but tighter bounds are obtainable by combining PISN and GRB counts. This combined diagnostic is more robust as it is independent of the underlying Pop III star formation rate density, an unknown prior. Future surveys promise to capture most high-redshift GRBs across the entire sky, but high-redshift PISN searches with future telescopes, e.g. Roman Space Telescope, will likely be substantially incomplete. Nevertheless, we demonstrate that even such lower bounds on the PISN count will be able to provide key constraints on the primordial IMF, in particular, if it is top-heavy or not. ",Probing the initial mass function of the first stars with transients,1,"['New paper day! The IMF of the first stars is a crucial quantity of early cosmic history, but it is presently unknown. Our paper proposes a way to constrain its form using the brightest high-redshift transients (i.e., the first stars most violent deaths): ']",21,10,268
42,41,1009729620231573506,23000769,Christopher Conselice,"On astro-ph today we posted a paper where we argue that individual galaxy halo masses can be measured using a new hybrid theory/observational approach. One result is the following -- the stellar mass to halo mass ratio is relatively constant up to z~3 which has implications for how galaxies acquire their dark and baryonic matter. Another is that we find a third parameter in the relation between stellar and halo mass which is the time the bulk of the mass of a halo is formed. The earlier the formation, the higher the stellar to halo mass ratio. #DarkMatter #galaxies",https://arxiv.org/abs/1806.07752,"We use a hybrid observational/theoretical approach to study the relation between galaxy kinematics and the derived stellar and halo masses of galaxies up to z=3 as a function of stellar mass, redshift and morphology. Our observational sample consists of a concatenation of 1125 galaxies with kinematic measurements at 0.4', 'One result is the following -- the stellar mass to halo mass ratio is relatively constant up to z~3 which has implications for how galaxies acquire their dark and baryonic matter. https://t.co/OPnPLvnDx4', 'Another is that we find a third parameter in the relation between stellar and halo mass which is the time the bulk of the mass of a halo is formed. The earlier the formation, the higher the stellar to halo mass ratio. #DarkMatter #galaxies']",18,06,586
43,126,1257603797519675392,84888361,Miriam Rengel,Very proud of this new paper led by my DFG postdoc Denis Shulyak. It studies disequilibrium chemistry on the simulated spectra of Hot Jupiter atmospheres. Disequilibrium chemistry could be robustly detected with future missions like JWST and ARIEL. ,https://arxiv.org/abs/2005.01470,"In this work we study the effect of disequilibrium processes on mixing ratio profiles of neutral species and on the simulated spectra of a hot Jupiter exoplanet that orbits stars of different spectral types. We also address the impact of stellar activity that should be present to a different degree in all stars with convective envelopes. We used the VULCAN chemical kinetic code to compute number densities of species. The temperature-pressure profile of the atmosphere was computed with the HELIOS code. We also utilized the $\tau$-ReX forward model to predict the spectra of planets in primary and secondary eclipses. In order to account for the stellar activity we made use of the observed solar XUV spectrum taken from Virtual Planetary Laboratory (VPL) as a proxy for an active sun-like star. We find large changes in mixing ratios of most chemical species in planets orbiting A-type stars that radiate strong XUV flux inducing a very effective photodissociation. For some species, these changes can propagate very deep into the planetary atmosphere to pressures of around 1 bar. To observe disequilibrium chemistry we favor hot Jupiters with temperatures Teq=1000 K and ultra-hot Jupiters with Teq=3000$ K that also have temperature inversion in their atmospheres. On the other hand, disequilibrium calculations predict little changes in spectra of planets with intermediate temperatures. We also show that stellar activity similar to the one of the modern Sun drives important changes in mixing ratio profiles of atmospheric species. However, these changes take place at very high atmospheric altitudes and thus do not affect predicted spectra. We estimate that the effect of disequilibrium chemistry in planets orbiting nearby bright stars could be robustly detected and studied with future missions with spectroscopic capabilities in infrared such as, e.g., JWST and ARIEL. ","Stellar impact on disequilibrium chemistry and on observed spectra of
hot Jupiter atmospheres",1,['Very proud of this new paper led by my DFG postdoc Denis Shulyak. It studies disequilibrium chemistry on the simulated spectra of Hot Jupiter atmospheres. Disequilibrium chemistry could be robustly detected with future missions like JWST and ARIEL. \n '],20,05,262
44,240,1434894801086910473,1014454910346285057,Daniel Coelho de Castro,"Our new study demonstrates noisy labels impacts training, eval & selection of ML models. Reannotating large datasets requires lots of expert time (eg in healthcare), so we also show data-driven strategies enable fixing more labels using fewer resources: Great collab with Melanie Bernhardt, @RyutaroT92, Anton Schwaighofer, Kerem Tezcan, @mabmonteiro, Shruthi Bannur, @mattlungrenMD, Aditya Nori, @GlockerBen, @alvarezvalle, @ozanoktay__! @MSFTResearch That should be @mel_bernhardt! Somehow Twitter struggled to find you đ",https://arxiv.org/abs/2109.00574,"Imperfections in data annotation, known as label noise, are detrimental to the training of machine learning models and have an often-overlooked confounding effect on the assessment of model performance. Nevertheless, employing experts to remove label noise by fully re-annotating large datasets is infeasible in resource-constrained settings, such as healthcare. This work advocates for a data-driven approach to prioritising samples for re-annotation - which we term ""active label cleaning"". We propose to rank instances according to estimated label correctness and labelling difficulty of each sample, and introduce a simulation framework to evaluate relabelling efficacy. Our experiments on natural images and on a new medical imaging benchmark show that cleaning noisy labels mitigates their negative impact on model training, evaluation, and selection. Crucially, the proposed active label cleaning enables correcting labels up to 4 times more effectively than typical random selection in realistic conditions, making better use of experts' valuable time for improving dataset quality. ","Active label cleaning for improved dataset quality under resource
constraints",3,"['Our new study demonstrates noisy labels impacts training, eval & selection of ML models. Reannotating large datasets requires lots of expert time (eg in healthcare), so we also show data-driven strategies enable fixing more labels using fewer resources: ', 'Great collab with Melanie Bernhardt, @RyutaroT92, Anton Schwaighofer, Kerem Tezcan, @mabmonteiro, Shruthi Bannur, @mattlungrenMD, Aditya Nori, @GlockerBen, @alvarezvalle, @ozanoktay__! @MSFTResearch', 'That should be @mel_bernhardt! Somehow Twitter struggled to find you đ']",21,09,530
45,105,1273418141243228161,20703003,Peter B Denton,"New neutrino paper, this time with Rebekah Pestes a @VTPhysics grad student visiting BNL on the SCGSR program. We discuss what deltaCP for neutrino oscillations really means and highlight the impact of how it's not actually a fundamental parameter. 1/7 If we do the rotations in the PMNS matrix in a different order, what changes? This figure shows that even if deltaPDG in the usual parameterization is constrained (that is, we ignore the T2K app data), delta in other parameterizations is constrained to within 150-210 degrees! 2/7 (The shaded regions show the effect of varying the osc params within their 3 σ regions.) While the premise might have been expected by some neutrino theorists, how tightly delta in other parameterizations is constrained was surprising to me anyway. 3/7 The reason is because |Ue3| is small (that is, theta13 is small). In other parameterizations this looks like sqrt(A+B*cos(delta)) so cos(delta) has to be near -1 for the cancellation to work. It's a bit subtle but we have useful approximations in the paper. 4/7 The above figure also implies that precision in delta doesn't mean much of anything fundamental. We show that here where we plot what a precision on deltaPDG of 15deg looks like in different parameterizations; it could be as small as 1deg! 5/7 So what's the right answer? Within the context of oscillations it's always the Jarlskog (pictured here). This quantity is the closest to what is measured and doesn't require as much input from solar parameters to extract unlike using delta. 6/7 While some aspect of these concepts has been discussed in the past, we focus on the situation in light of current measurements. As we zero in on the remaining parameters, keeping straight what is/isn't fundamental is more important than ever! 7/7 *this should say ""in the usual parameterization is UNconstrained""",https://arxiv.org/abs/2006.09384,"CP violation in the lepton mass matrix will be probed with good precision in upcoming experiments. The amount of CP violation present in oscillations can be quantified in numerous ways and is typically parameterized by the complex phase $\delta_{\rm PDG}$ in the standard PDG definition of the lepton mixing matrix. There are additional parameterizations of the lepton mixing matrix as well. Through various examples, we explore how, given the current data, different parameterizations can lead to different conclusions when working with parameterization dependent variables, such as $\delta$. We demonstrate how the smallness of $|U_{e3}|$ governs the scale of these results. We then demonstrate how $\delta$ can be misleading and argue that the Jarlskog is the cleanest means of presenting the amount of CP violation in the lepton sector. We also confirm that, among the different parameterizations considered, the standard PDG parameterization has a number of convenient features. ","The Impact of Different Parameterizations on the Interpretation of CP
Violation in Neutrino Oscillations",8,"[""New neutrino paper, this time with Rebekah Pestes a @VTPhysics grad student visiting BNL on the SCGSR program.\n\nWe discuss what deltaCP for neutrino oscillations really means and highlight the impact of how it's not actually a fundamental parameter. 1/7"", 'If we do the rotations in the PMNS matrix in a different order, what changes? This figure shows that even if deltaPDG in the usual parameterization is constrained (that is, we ignore the T2K app data), delta in other parameterizations is constrained to within 150-210 degrees! 2/7 https://t.co/l4fHwPPiKl', '(The shaded regions show the effect of varying the osc params within their 3 σ regions.)\n\nWhile the premise might have been expected by some neutrino theorists, how tightly delta in other parameterizations is constrained was surprising to me anyway. 3/7', ""The reason is because |Ue3| is small (that is, theta13 is small). In other parameterizations this looks like sqrt(A+B*cos(delta)) so cos(delta) has to be near -1 for the cancellation to work. It's a bit subtle but we have useful approximations in the paper. 4/7"", ""The above figure also implies that precision in delta doesn't mean much of anything fundamental. We show that here where we plot what a precision on deltaPDG of 15deg looks like in different parameterizations; it could be as small as 1deg! 5/7 https://t.co/AXB0g1CXZX"", ""So what's the right answer? Within the context of oscillations it's always the Jarlskog (pictured here). This quantity is the closest to what is measured and doesn't require as much input from solar parameters to extract unlike using delta. 6/7 https://t.co/YzrZYQ1MFo"", ""While some aspect of these concepts has been discussed in the past, we focus on the situation in light of current measurements. As we zero in on the remaining parameters, keeping straight what is/isn't fundamental is more important than ever! 7/7"", '*this should say ""in the usual parameterization is UNconstrained""']",20,06,1878
46,26,1354254175631273988,1316039333175066624,Ruslan Shaydulin,"New paper out w/ @SciWild! We show how by leveraging symmetries computed using fast graph automorphism solvers the cost of training QAOA purely classically can be reduced by multiple orders of magnitude even on graphs known to be hard for such solvers This is a direct corollary of our recent results on classical symmetries in QAOA () with @stuart_hadfield, Tad Hogg and Ilya Safro. More to come, stay tuned!",http://arxiv.org/abs/2101.10296,"A promising approach to the practical application of the Quantum Approximate Optimization Algorithm (QAOA) is finding QAOA parameters classically in simulation and sampling the solutions from QAOA with optimized parameters on a quantum computer. Doing so requires repeated evaluations of QAOA energy in simulation. We propose a novel approach for accelerating the evaluation of QAOA energy by leveraging the symmetry of the problem. We show a connection between classical symmetries of the objective function and the symmetries of the terms of the cost Hamiltonian with respect to the QAOA energy. We show how by considering only the terms that are not connected by symmetry, we can significantly reduce the cost of evaluating the QAOA energy. Our approach is general and applies to any known subgroup of symmetries and is not limited to graph problems. Our results are directly applicable to nonlocal QAOA generalization RQAOA. We outline how available fast graph automorphism solvers can be leveraged for computing the symmetries of the problem in practice. We implement the proposed approach on the MaxCut problem using a state-of-the-art tensor network simulator and a graph automorphism solver on a benchmark of 48 graphs with up to 10,000 nodes. Our approach provides an improvement for $p=1$ on $71.7\%$ of the graphs considered, with a median speedup of $4.06$, on a benchmark where $62.5\%$ of the graphs are known to be hard for automorphism solvers. ",Exploiting Symmetry Reduces the Cost of Training QAOA,2,"['New paper out w/ @SciWild! We show how by leveraging symmetries computed using fast graph automorphism solvers the cost of training QAOA purely classically can be reduced by multiple orders of magnitude even on graphs known to be hard for such solvers ', 'This is a direct corollary of our recent results on classical symmetries in QAOA (https://t.co/mYwKQDLPKI) with @stuart_hadfield, Tad Hogg and Ilya Safro. More to come, stay tuned!']",21,01,422
47,78,1328643574867628033,992535707557269504,Dr. Aaron Labdon,"Our new paper is out today! We present the first J band interferometric observations of a YSO using the MIRC-X instrument @CHARAArray. Observing the infamous outbursting star FU Orionis and the surrounding accretion disk. Using our tri-wavelength observations we derive the temperature gradient across the disk. We find evidence of direct boundary layer accretion onto the central star and viscous heating in the inner disk. (A thanks to Laws et al 2020 for the wonderful GPI image) A big thanks to collaborators from @UoE_Astro, @michiganastro and @CHARAArray for making this work possible.",https://arxiv.org/abs/2011.07865,"Context. FU Orionis is the archetypal FUor star, a subclass of young stellar object (YSO) that undergo rapid brightening events, often gaining 4-6 magnitudes on timescales of days. This brightening is often associated with a massive increase in accretion; one of the most ubiquitous processes in astrophysics from planets and stars to super-massive black holes. We present multi-band interferometric observations of the FU Ori circumstellar environment, including the first J-band interferometric observations of a YSO. Aims. We investigate the morphology and temperature gradient of the inner-most regions of the accretion disk around FU Orionis. We aim to characterise the heating mechanisms of the disk and comment on potential outburst triggering processes. Methods. Recent upgrades to the MIRC-X instrument at the CHARA array allowed the first dual-band J and H observations of YSOs.Using baselines up to 331 m, we present high angular resolution data of a YSO covering the near-infrared bands J, H, and K. The unprecedented spectral range of the data allows us to apply temperature gradient models to the innermost regions of FU Ori. Results. We spatially resolve the innermost astronomical unit of the disk and determine the exponent of the temperature gradient of the inner disk to $T=r^{-0.74\pm0.02}$. This agrees with theoretical work that predicts $T = r^{-0.75}$ for actively accreting, steady state disks, a value only obtainable through viscous heating within the disk. We find a disk which extends down to the stellar surface at $0.015\pm0.007$ au where the temperature is found to be $5800\pm700$ K indicating boundary layer accretion. We find a disk inclined at $32\pm4^\circ$ with a minor-axis position angle of $34\pm11^\circ$. ","Viscous Heating and Boundary Layer Accretion in the Disk of Outbursting
Star FU Orionis",3,"['Our new paper is out today! We present the first J band interferometric observations of a YSO using the MIRC-X instrument @CHARAArray. Observing the infamous outbursting star FU Orionis and the surrounding accretion disk. ', 'Using our tri-wavelength observations we derive the temperature gradient across the disk. We find evidence of direct boundary layer accretion onto the central star and viscous heating in the inner disk. (A thanks to Laws et al 2020 for the wonderful GPI image) https://t.co/0ah2piDJ6b', 'A big thanks to collaborators from @UoE_Astro, @michiganastro and @CHARAArray for making this work possible.']",20,11,605
48,9,979761588927324160,40737859,Aleksandra Korolova,"New paper on microtargeting, private info inference, and precise geo-targeting attacks feasible using Facebook's advertising system: . @faizulla_boy & I discuss Facebook's responses to disclosures (2 months!) & call for radical transparency & more research. I first identified and pointed out to Facebook the privacy risks of microtargeting using Facebook's ad platform in 2010: . 7+ years later, it's still an issue with no scientific innovation on protections from Facebook: ",https://arxiv.org/abs/1803.10099,"Ad targeting is getting more powerful with introduction of new tools, such as Custom Audiences, behavioral targeting, and Audience Insights. Although this is beneficial for businesses as it enables people to receive more relevant advertising, the power of the tools has downsides. In this paper, we focus on three downsides: privacy violations, microtargeting (i.e., the ability to reach a specific individual or individuals without their explicit knowledge that they are the only ones an ad reaches) and ease of reaching marginalized groups. Using Facebook's ad system as a case study, we demonstrate the feasibility of such downsides. We then discuss Facebook's response to our responsible disclosures of the findings and call for additional policy, science, and engineering work to protect consumers in the rapidly evolving ecosystem of ad targeting. ","Facebook's Advertising Platform: New Attack Vectors and the Need for
Interventions",2,"['New paper on microtargeting, private info inference, and precise geo-targeting attacks feasible using Facebook's advertising system: . @faizulla_boy & I discuss Facebook's responses to disclosures (2 months!) & call for radical transparency & more research.', ""I first identified and pointed out to Facebook the privacy risks of microtargeting using Facebook's ad platform in 2010: https://t.co/nPK28EPd48. 7+ years later, it's still an issue with no scientific innovation on protections from Facebook: https://t.co/yZLN9Xs966""]",18,03,495
49,111,1163794283515785217,1104703454,Francesco Silvestri,"Tensor architectures (like @GoogleAI TPU and @nvidia TC) have been developed for accelerating Deep Learning applications. But can we exploit their computational power for other problems? We study this question in . The heart of tensor architectures is a hardware circuit for multiplying small and dense matrices. We propose a computational model that captures this feature, and then design algorithms accelerating some linear algebra problems by exploiting tensor architectures. Joint work with @Flav1oV.",https://arxiv.org/abs/1908.06649,"To respond to the need of efficient training and inference of deep neural networks, a plethora of domain-specific hardware architectures have been introduced, such as Google Tensor Processing Units and NVIDIA Tensor Cores. A common feature of these architectures is a hardware circuit for efficiently computing a dense matrix multiplication of a given small size. In order to broaden the class of algorithms that exploit these systems, we propose a computational model, named the TCU model, that captures the ability to natively multiply small matrices. We then use the TCU model for designing fast algorithms for several problems, including matrix operations (dense and sparse multiplication, Gaussian Elimination), graph algorithms (transitive closure, all pairs shortest distances), Discrete Fourier Transform, stencil computations, integer multiplication, and polynomial evaluation. We finally highlight a relation between the TCU model and the external memory model. ",A Computational Model for Tensor Core Units,2,"['Tensor architectures (like @GoogleAI TPU and @nvidia TC) have been developed for accelerating Deep Learning applications. But can we exploit their computational power for other problems? We study this question in . The heart of tensor architectures is', 'a hardware circuit for multiplying small and dense matrices. We propose a computational model that captures this feature, and then design algorithms accelerating some linear algebra problems by exploiting tensor architectures. Joint work with @Flav1oV.']",19,08,510
50,51,1363720587500269575,64958481,Birhanu Eshete,New paper đą: We empirically explore the combination of additive secret sharing and differential privacy for privacy-preserving collaborative prediction with minimal loss in model accuracy. Paper: To appear at IWSPA'21 co-located with @acmcodaspy '21 ,https://arxiv.org/abs/2102.09751,"When multiple parties that deal with private data aim for a collaborative prediction task such as medical image classification, they are often constrained by data protection regulations and lack of trust among collaborating parties. If done in a privacy-preserving manner, predictive analytics can benefit from the collective prediction capability of multiple parties holding complementary datasets on the same machine learning task. This paper presents PRICURE, a system that combines complementary strengths of secure multi-party computation (SMPC) and differential privacy (DP) to enable privacy-preserving collaborative prediction among multiple model owners. SMPC enables secret-sharing of private models and client inputs with non-colluding secure servers to compute predictions without leaking model parameters and inputs. DP masks true prediction results via noisy aggregation so as to deter a semi-honest client who may mount membership inference attacks. We evaluate PRICURE on neural networks across four datasets including benchmark medical image classification datasets. Our results suggest PRICURE guarantees privacy for tens of model owners and clients with acceptable accuracy loss. We also show that DP reduces membership inference attack exposure without hurting accuracy. ","PRICURE: Privacy-Preserving Collaborative Inference in a Multi-Party
Setting",1,"[""New paper đą:\nWe empirically explore the combination of additive secret sharing and differential privacy for privacy-preserving collaborative prediction with minimal loss in model accuracy.\nPaper: \nTo appear at IWSPA'21 co-located with @acmcodaspy '21 ""]",21,02,263
51,117,1157296970429554688,885237745198747648,"Jennifer Stiso, PhD","Ever looked at the huge number of neural systems that are involved in BCI performance and wondered how they interact to support learning? We did ! We jointly decompose dynamic brain and behavioral data and find an important role for the attention system. Work done with @DaniSBassett, and thanks to collaborators @F_DeVicoFallani and @MConstanceCorsi",https://arxiv.org/abs/1908.00077,"Motor imagery-based brain-computer interfaces (BCIs) use an individuals ability to volitionally modulate localized brain activity as a therapy for motor dysfunction or to probe causal relations between brain activity and behavior. However, many individuals cannot learn to successfully modulate their brain activity, greatly limiting the efficacy of BCI for therapy and for basic scientific inquiry. Previous research suggests that coherent activity across diverse cognitive systems is a hallmark of individuals who can successfully learn to control the BCI. However, little is known about how these distributed networks interact through time to support learning. Here, we address this gap in knowledge by constructing and applying a multimodal network approach to decipher brain-behavior relations in motor imagery-based brain-computer interface learning using MEG. Specifically, we employ a minimally constrained matrix decomposition method (non-negative matrix factorization) to simultaneously identify regularized, covarying subgraphs of functional connectivity, to assess their similarity to task performance, and to detect their time-varying expression. Individuals also displayed marked variation in the spatial properties of subgraphs such as the connectivity between the frontal lobe and the rest of the brain, and in the temporal properties of subgraphs such as the stage of learning at which they reached maximum expression. From these observations, we posit a conceptual model in which certain subgraphs support learning by modulating brain activity in regions important for sustaining attention. To test this model, we use tools that stipulate regional dynamics on a networked system (network control theory), and find that good learners display a single subgraph whose temporal expression tracked performance and whose architecture supports easy modulation of brain regions important for attention. ","Learning in brain-computer interface control evidenced by joint
decomposition of brain and behavior",2,"['Ever looked at the huge number of neural systems that are involved in BCI performance and wondered how they interact to support learning? We did ! We jointly decompose dynamic brain and behavioral data and find an important role for the attention system. ', 'Work done with @DaniSBassett, and thanks to collaborators @F_DeVicoFallani and @MConstanceCorsi']",19,08,363
52,70,1261100792729071616,3260753346,Deepanway Ghosal,"Excited to share our new #acl2020nlp paper KinGDOM: Knowledge-Guided DOMain adaptation for sentiment analysis with @hdevamanyu @soujanyaporia @radamihalcea, Abhinaba Roy, and Navonil Majumder. Paper: Code: #NLProc In this work, we avail commonsense knowledge from ConceptNet to perform domain adaptation and show the ability of graph features helping in bridging domain gap between seen and unseen domains.",https://arxiv.org/abs/2005.00791,"Cross-domain sentiment analysis has received significant attention in recent years, prompted by the need to combat the domain gap between different applications that make use of sentiment analysis. In this paper, we take a novel perspective on this task by exploring the role of external commonsense knowledge. We introduce a new framework, KinGDOM, which utilizes the ConceptNet knowledge graph to enrich the semantics of a document by providing both domain-specific and domain-general background concepts. These concepts are learned by training a graph convolutional autoencoder that leverages inter-domain concepts in a domain-invariant manner. Conditioning a popular domain-adversarial baseline method with these learned concepts helps improve its performance over state-of-the-art approaches, demonstrating the efficacy of our proposed framework. ",KinGDOM: Knowledge-Guided DOMain adaptation for sentiment analysis,2,"['Excited to share our new #acl2020nlp paper KinGDOM: Knowledge-Guided DOMain adaptation for sentiment analysis with @hdevamanyu @soujanyaporia @radamihalcea, Abhinaba Roy, and Navonil Majumder.\nPaper: \nCode: \n#NLProc ', 'In this work, we avail commonsense knowledge from ConceptNet to perform domain adaptation and show the ability of graph features helping in bridging domain gap between seen and unseen domains.']",20,05,427
53,213,1501978303078535174,1024247894474481665,Furkan Gürsoy,"System Cards for AI... is out! We propose: âłïžSystem Accountability Benchmark: 50 criteria organized in a framework to audit algorithmic accountability. âłïžSystem Cards: scorecards presenting the outcomes of such algorithm audits. Feedback is welcome! 👉🏿The use of AI systems expanded dramatically, including decisions in criminal justice, allocation of public resources, public education, etc. 👉🏿 AI systems are not inherently free from issues such as bias, opaqueness, lack of explainability, maleficence, and the like. 👉🏿 Standards of accountability reflecting current legal obligations and societal concerns lag behind the extensive use of AI systems. ✅ Auditing AI systems based on the proposed framework can ensure fairness, privacy, explainability, transparency, robustness, and more.",https://arxiv.org/abs/2203.04754,"Decisions in public policy are increasingly being made or assisted by automated decision-making algorithms. Many of these algorithms process personal data for tasks such as predicting recidivism, assisting welfare decisions, identifying individuals using face recognition, and more. While potentially improving efficiency and effectiveness, such algorithms are not inherently free from issues such as bias, opaqueness, lack of explainability, maleficence, and the like. Given that the outcomes of these algorithms have significant impacts on individuals and society and are open to analysis and contestation after deployment, such issues must be accounted for before deployment. Formal audits are a way towards ensuring algorithms that are used in public policy meet the appropriate accountability standards. This work, based on an extensive analysis of the literature, proposes a unifying framework for system accountability benchmark for formal audits of artificial intelligence-based decision-aiding systems in public policy as well as system cards that serve as scorecards presenting the outcomes of such audits. The benchmark consists of 50 criteria organized within a four by four matrix consisting of the dimensions of (i) data, (ii) model, (iii) code, (iv) system and (a) development, (b) assessment, (c) mitigation, (d) assurance. Each criterion is described and discussed alongside a suggested measurement scale indicating whether the evaluations are to be performed by humans or computers and whether the evaluation outcomes are binary or on an ordinal scale. The proposed system accountability benchmark reflects the state-of-the-art developments for accountable systems, serves as a checklist for future algorithm audits, and paves the way for sequential work as future research. ",System Cards for AI-Based Decision-Making for Public Policy,3,"['System Cards for AI... is out!\n\nWe propose:\n\nâłïžSystem Accountability Benchmark: 50 criteria organized in a framework to audit algorithmic accountability.\n\nâłïžSystem Cards: scorecards presenting the outcomes of such algorithm audits.\n\nFeedback is welcome!\n\n ', '👉🏿The use of AI systems expanded dramatically, including decisions in criminal justice, allocation of public resources, public education, etc.\n\n👉🏿 AI systems are not inherently free from issues such as bias, opaqueness, lack of explainability, maleficence, and the like.', '👉🏿 Standards of accountability reflecting current legal obligations and societal concerns lag behind the extensive use of AI systems.\n\n✅ Auditing AI systems based on the proposed framework can ensure fairness, privacy, explainability, transparency, robustness, and more.']",22,03,802
54,146,1326430417700970497,802543221943439360,Andrea Caputo," New paper out guys! We assess the sensitivity of future Higgs Factories, such as FCC-ee, CLIC-380, ILC and CEPC, to the seesaw portal with GeV sterile neutrinos and generic new physics at higher scale. Thanks a lot to all my collaborators for the fun. I had this idea the first time 3 years ago, it was a long process and I am grateful is finally out.",https://arxiv.org/abs/2011.04725,"We consider an extension of the Standard Model with two right-handed singlet fermions with mass at the electroweak scale that induce neutrino masses, plus a generic new physics sector at a higher scale $\Lambda$. We focus on the effective operators of lowest dimension $d=5$, which induce new production and decay modes for the singlet fermions. We assess the sensitivity of future Higgs Factories, such as FCC-ee, CLIC-380, ILC and CEPC, to the coefficients of these operators for various center of mass energies. We show that future lepton colliders can test the cut-off of the theory up to $\Lambda \simeq 500 - 1000\;$TeV, surpassing the reach of future indirect measurements of the Higgs and $Z$ boson widths. We also comment on the possibility of determining the underlying model flavor structure should a New Physics signal be observed, and on the impact of higher dimensional $d=6$ operators on the experimental signatures. ",The see-saw portal at future Higgs Factories,2,"['\n\nNew paper out guys! We assess the sensitivity of future Higgs Factories, such as FCC-ee, CLIC-380, ILC and\nCEPC, to the seesaw portal with GeV sterile neutrinos and generic new physics at higher scale. ', 'https://t.co/JBa4y8DLPg\nThanks a lot to all my collaborators for the fun. I had this idea the first time 3 years ago, it was a long process and I am grateful is finally out.']",20,11,372
55,127,1248599744747798528,1068545181576773632,Kenneth Brown,"New paper on a weighted Union-Find decoder for the toric code with @Huang_Shilin and Mike Newman (). #DukeQuantum @DukeEngineering A weighted Union-Find decoder increases the threshold for the Union-Find decoder but at what cost? In previous work with @Huang_Shilin () on 2D compass codes, we predicted it should be quadratically slower. In this paper, we show that with truncated weights that the speed is comparable to unweighted Union-Find with negligible loss in performance. The truncated version can be readily implemented using a microarchitecture based on by @krystasvore @mointweets, @nico_delf and co-workers.",http://arxiv.org/abs/2004.04693,"Quantum error correction requires decoders that are both accurate and efficient. To this end, union-find decoding has emerged as a promising candidate for error correction on the surface code. In this work, we benchmark a weighted variant of the union-find decoder on the toric code under circuit-level depolarizing noise. This variant preserves the almost-linear time complexity of the original while significantly increasing the performance in the fault-tolerance setting. In this noise model, weighting the union-find decoder increases the threshold from 0.38% to 0.62%, compared to an increase from 0.65% to 0.72% when weighting a matching decoder. Further assuming quantum non-demolition measurements, weighted union-find decoding achieves a threshold of 0.76% compared to the 0.90% threshold when matching. We additionally provide comparisons of timing as well as low error rate behavior. ",Fault-Tolerant Weighted Union-Find Decoding on the Toric Code,4,"['New paper on a weighted Union-Find decoder for the toric code with @Huang_Shilin and Mike Newman (). #DukeQuantum @DukeEngineering', 'A weighted Union-Find decoder increases the threshold for the Union-Find decoder but at what cost? In previous work with @Huang_Shilin (https://t.co/rdqpuzwKfa) on 2D compass codes, we predicted it should be quadratically slower.', 'In this paper, we show that with truncated weights that the speed is comparable to unweighted Union-Find with negligible loss in performance.', 'The truncated version can be readily implemented using a microarchitecture based on https://t.co/P0WpSu6FZx by @krystasvore @mointweets, @nico_delf and co-workers.']",20,04,638
56,58,1308532869837524992,708096751161311232,Priyamvada Natarajan,"For my astro peeps who might have missed my paper on the arXiv today Here's the gist of this new channel to make IMBHs *throughout cosmic time* in dense, gas-rich nuclear star clusters (NSCs) via accretion (1/n) stellar mass seed BH ~ 5 -10 solar mass forms in the NSC that has ~ 10^4-5 stars. Seed BH behaves like a test particle scatters off stars & gas, inevitably wandering randomly, small excursions from the center, growing very rapidly via wind-fed supra-exponential accretion (2/n) In this early growth phase can model motion as a simple harmonic oscillator and compute how much it can grow. As it grows motion gets damped due to drag from gas and dynamical friction, motion transits to that of a damped harmonic oscillator, continues growing but...(3/n) Growth could get disrupted..can estimate final BH mass if prematurely terminated (25 - 100 solar masses). If it continues then it reaches the center, settles down to being a stationary BH at this point it would be between 100 - 1000 solar masses (4/n) Of course, depending on the availability of gas could keep growing till gas supply is exhausted which in typical NSCs can easily bring it 10^4-5 solar masses. So here is a schematic of the possible outcomes -- (5/n) What the model offers is a continual formation mechanism for IMBHs as NSCs seen in a large fraction of galaxies, they are huddled in the center of galaxies but some are off-center too. Therefore, NSCs can act as incubators for the formation of IMBHs - channel can explain....(6/n) (i) central & off-center IMBHs are now detected in low-mass dwarf galaxies (ii) premature termination can account for formation of BHs in the mass gap 50 - 150 solar masses, sources like GW190521. In this model IMBHs grow by accretion not mergers (previous work) in NSCs. (7/n) @harshalhb_india model not yet set up to do demographic predictions like the merger scenarios..just worked out the physics at the moment - but yes plan is to look at all observables...",https://arxiv.org/abs/2009.09156,"While the formation of the first black holes at high redshift is reasonably well understood though debated, massive black hole formation at later cosmic epochs has not been adequately explored. We present a gas accretion driven mechanism that can build up black hole masses rapidly in dense, gas-rich nuclear star clusters (NSCs). Wind-fed supra-exponential accretion of an initially wandering black hole in NSCs can lead to extremely fast growth, scaling stellar mass remnant seed black holes up to intermediate mass black holes (IMBHs). Operating throughout cosmic time, growth via this new channel is modulated by the gas supply, and premature termination results in the formation of lower mass black holes with masses in the range of 50 - few 100 solar masses, filling in the so-called mass gap. However, in most gas-rich NSCs, growth is unimpeded, inevitably leading to the formation of IMBHs with masses ranging from 100 - 100,000 solar masses. A spate of new detection spanning the full range of the IMBH mass function - from the LIGO-VIRGO source GW190521 to the emerging population of 10^5 solar mass black holes harbored in low-mass dwarf galaxies - are revealing this elusive population. 
Naturally accounting for the detected presence of off-center IMBHs in low-mass dwarfs, this new pathway also predicts the existence of an extensive population of wandering non-central black holes in more massive galaxies would be detectable via tidal disruption events and as GW sources. Gas-rich NSCs serve as incubators for the continual formation of black holes over a wide range in mass throughout cosmic time. ",A new channel to form IMBHs throughout cosmic time,8,"[""For my astro peeps who might have missed my paper on the arXiv today \nHere's the gist of this new channel to make IMBHs *throughout cosmic time* in dense, gas-rich nuclear star clusters (NSCs) via accretion (1/n) "", 'stellar mass seed BH ~ 5 -10 solar mass forms in the NSC that has ~ 10^4-5 stars. Seed BH behaves like a test particle scatters off stars & gas, inevitably wandering randomly, small excursions from the center, growing very rapidly via wind-fed supra-exponential accretion (2/n) https://t.co/9xYc53lpIb', 'In this early growth phase can model motion as a simple harmonic oscillator and compute how much it can grow. As it grows motion gets damped due to drag from gas and dynamical friction, motion transits to that of a damped harmonic oscillator, continues growing but...(3/n)', 'Growth could get disrupted..can estimate final BH mass if prematurely terminated (25 - 100 solar masses). If it continues then it reaches the center, settles down to being a stationary BH at this point it would be between 100 - 1000 solar masses (4/n)', 'Of course, depending on the availability of gas could keep growing till gas supply is exhausted which in typical NSCs can easily bring it 10^4-5 solar masses. So here is a schematic of the possible outcomes -- (5/n) https://t.co/i9QNCMhWvi', 'What the model offers is a continual formation mechanism for IMBHs as NSCs seen in a large fraction of galaxies, they are huddled in the center of galaxies but some are off-center too. Therefore, NSCs can act as incubators for the formation of IMBHs - channel can explain....(6/n)', '(i) central & off-center IMBHs are now detected in low-mass dwarf galaxies (ii) premature termination can account for formation of BHs in the mass gap 50 - 150 solar masses, sources like GW190521. In this model IMBHs grow by accretion not mergers (previous work) in NSCs. (7/n)', '@harshalhb_india model not yet set up to do demographic predictions like the merger scenarios..just worked out the physics at the moment - but yes plan is to look at all observables...']",20,09,2002
57,19,1289260137249554433,979379437069271043,Pedro Machado,"New paper today on intranuclear cascades with awesome collabs: Isaacson, Jay, Lovato and Rocco. The goal is to simulate the propagation of hadrons in nuclear media, which is relevant to current and future oscillation experiments. Results are promising! :-D ",https://arxiv.org/abs/2007.15570,"We propose a novel approach to intranuclear cascades which takes as input quantum MonteCarlo nuclear configurations and uses a semi-classical, impact-parameter based algorithm to modelthe propagation of protons and neutrons in the nuclear medium. We successfully compare oursimulations to available proton-carbon scattering data and nuclear-transparency measurements. Byanalyzing the dependence of the simulated observables upon the ingredients entering our intranuclearcascade algorithm, we provide a quantitative understanding of their impact. Particular emphasisis devoted to the role played by nuclear correlations, the Pauli exclusion principle, and interactionprobability distributions. ",A quantum Monte Carlo based approach to intranuclear cascades,1,"['New paper today on intranuclear cascades with awesome collabs: Isaacson, Jay, Lovato and Rocco.\nThe goal is to simulate the propagation of hadrons in nuclear media, which is relevant to current and future oscillation experiments. Results are promising! :-D\n']",20,07,263
58,87,1293295565111881728,1008944276431036416,Boris Ivanovic,"New paper up on arXiv!! In it, Karen Leung, Ed Schmerling, @MarcoPavoneSU, and I give an accessible intro to our CVAE-based approach for multimodal modeling of human trajectories. Perfect for those new to the field or looking for a refresher! Read it here: ",https://arxiv.org/abs/2008.03880,"Human behavior prediction models enable robots to anticipate how humans may react to their actions, and hence are instrumental to devising safe and proactive robot planning algorithms. However, modeling complex interaction dynamics and capturing the possibility of many possible outcomes in such interactive settings is very challenging, which has recently prompted the study of several different approaches. In this work, we provide a self-contained tutorial on a conditional variational autoencoder (CVAE) approach to human behavior prediction which, at its core, can produce a multimodal probability distribution over future human trajectories conditioned on past interactions and candidate robot future actions. Specifically, the goals of this tutorial paper are to review and build a taxonomy of state-of-the-art methods in human behavior prediction, from physics-based to purely data-driven methods, provide a rigorous yet easily accessible description of a data-driven, CVAE-based approach, highlight important design characteristics that make this an attractive model to use in the context of model-based planning for human-robot interactions, and provide important design considerations when using this class of models. ","Multimodal Deep Generative Models for Trajectory Prediction: A
Conditional Variational Autoencoder Approach",1,"['New paper up on arXiv!! In it, Karen Leung, Ed Schmerling, @MarcoPavoneSU, and I give an accessible intro to our CVAE-based approach for multimodal modeling of human trajectories. Perfect for those new to the field or looking for a refresher! Read it here: ']",20,08,263
59,248,1271027698157182978,1259188770,Olivier Bachem,"We spent the last months at @GoogleAI Zurich & Paris trying to understand on-policy RL for locomotion using a large scale study (>250'000 models). Check out for insights and practical recommendations on >50 high- and low-level implementation decisions! 1/2 This is joint work w/ Marcin Andrychowicz, Anton Raichuk, Piotr Stańczyk, Manu Orsini, Sertan Girgin, @RaphaelMarinier, @leonardhussenot, Matthieu Geist, Olivier Pietquin, @MMMichalski, @sylvain_gelly. 2/2",https://arxiv.org/abs/2006.05990,"In recent years, on-policy reinforcement learning (RL) has been successfully applied to many different continuous control tasks. While RL algorithms are often conceptually simple, their state-of-the-art implementations take numerous low- and high-level design decisions that strongly affect the performance of the resulting agents. Those choices are usually not extensively discussed in the literature, leading to discrepancy between published descriptions of algorithms and their implementations. This makes it hard to attribute progress in RL and slows down overall progress [Engstrom'20]. As a step towards filling that gap, we implement >50 such ``choices'' in a unified on-policy RL framework, allowing us to investigate their impact in a large-scale empirical study. We train over 250'000 agents in five continuous control environments of different complexity and provide insights and practical recommendations for on-policy training of RL agents. ","What Matters In On-Policy Reinforcement Learning? A Large-Scale
Empirical Study",2,"[""We spent the last months at @GoogleAI Zurich & Paris trying to understand on-policy RL for locomotion using a large scale study (>250'000 models). Check out for insights and practical recommendations on >50 high- and low-level implementation decisions! 1/2 "", 'This is joint work w/ Marcin Andrychowicz, Anton Raichuk, Piotr Stańczyk, Manu Orsini, Sertan Girgin, @RaphaelMarinier, @leonardhussenot, Matthieu Geist, Olivier Pietquin, @MMMichalski, @sylvain_gelly. 2/2']",20,06,482
60,155,1481151491243139076,883039700,Lenka Zdeborova,"I studied many inference problems on random graphs in the past, but the graph alignment is challenging: We still do not know what are the constants for optimal behavior nor algorithms. A lot of work to do still here. @egavves Yes, this size is the parameter d, the influence is discussed in the paper.",https://arxiv.org/abs/2112.13079,"The problem of aligning Erd\""os-R\'enyi random graphs is a noisy, average-case version of the graph isomorphism problem, in which a pair of correlated random graphs is observed through a random permutation of their vertices. We study a polynomial time message-passing algorithm devised to solve the inference problem of partially recovering the hidden permutation, in the sparse regime with constant average degrees. We perform extensive numerical simulations to determine the range of parameters in which this algorithm achieves partial recovery. We also introduce a generalized ensemble of correlated random graphs with prescribed degree distributions, and extend the algorithm to this case. ","Aligning random graphs with a sub-tree similarity message-passing
algorithm",2,"['I studied many inference problems on random graphs in the past, but the graph alignment is challenging: We still do not know what are the constants for optimal behavior nor algorithms. A lot of work to do still here.', '@egavves Yes, this size is the parameter d, the influence is discussed in the paper.']",21,12,308
61,11,989663234683584514,930224996785332224,"Jacob White, PhD",Check out my new paper that uses @ALMA data to characterize the HD 141569 circumstellar disk! The analysis shows that the system may be one of the youngest debris disks meaning that significant grain growth and planet formation can happen in ~5 Myr. :) ,https://arxiv.org/abs/1804.09724,"We present archival ALMA observations of the HD 141569 circumstellar disk at 345, 230, and 100 GHz. These data detect extended millimeter emission that is exterior to the inner disk. We find through simultaneous visibility modeling of all three data sets that the system's morphology is described well by a two-component disk model. The inner disk ranges from approximately 16 to 45 au with a spectral index of 1.81 (q = 2.95) and the outer disk ranges from 95 to 300 au with a spectral index of 2.28 (q = 3.21). Azimuthally averaged radial emission profiles derived from the continuum images at each frequency show potential emission that is consistent with the visibility modeling. The analysis presented here shows that at ~5 Myr HD 141569's grain size distribution is steeper, and therefore more evolved, in the outer disk than in the inner disk. ","Extended Millimeter Emission in the HD 141569 Circumstellar Disk
Detected with ALMA",1,['Check out my new paper that uses @ALMA data to characterize the HD 141569 circumstellar disk! The analysis shows that the system may be one of the youngest debris disks meaning that significant grain growth and planet formation can happen in ~5 Myr. :) \n'],18,04,259
62,27,1110796733213216769,141440459,Rod Van Meter đ»,"New paper dance! Compile better for your quantum computer. Possibly of interest to @meQuanics @margmartonosi @kenbrownquantum @dajmeyer Proud of @shin_tsujido @_pandaman64_ , the undergrads who did all the work. I was just a light hand on the tiller. Nice to have #QuantumNative students!",https://arxiv.org/abs/1903.10963,"NISQ (Noisy, Intermediate-Scale Quantum) computing requires error mitigation to achieve meaningful computation. Our compilation tool development focuses on the fact that the error rates of individual qubits are not equal, with a goal of maximizing the success probability of real-world subroutines such as an adder circuit. We begin by establishing a metric for choosing among possible paths and circuit alternatives for executing gates between variables placed far apart within the processor, and test our approach on two IBM 20-qubit systems named Tokyo and Poughkeepsie. We find that a single-number metric describing the fidelity of individual gates is a useful but imperfect guide. Our compiler uses this subsystem and maps complete circuits onto the machine using a beam search-based heuristic that will scale as processor and program sizes grow. To evaluate the whole compilation process, we compiled and executed adder circuits, then calculated the KL-divergence (a measure of the distance between two probability distributions). For a circuit within the capabilities of the hardware, our compilation increases estimated success probability and reduces KL-divergence relative to an error-oblivious placement. ","Extracting Success from IBM's 20-Qubit Machines Using Error-Aware
Compilation",3,"['New paper dance! Compile better for your quantum computer.\n\nPossibly of interest to @meQuanics @margmartonosi @kenbrownquantum @dajmeyer', 'Proud of @shin_tsujido @_pandaman64_ , the undergrads who did all the work. I was just a light hand on the tiller.', 'Nice to have #QuantumNative students!']",19,03,295
63,157,1316752822193606656,1149734612693745666,Emily Rauscher,"đŁ New paper alert! In this work, led by Michael Roman, we present a grid of 3D models of hot Jupiters with clouds (temperature dependent and with radiative feedback) across a wide range of irradiation temperature, for different assumed cloud properties. In order of decreasing irradiation temperate, clouds form successively on: the night, the cool western terminator, the eastern terminator, and even across the dayside! For planets cooler than irradiation temperatures of about 3000 K (Teq ~ 2100 K) clouds influence observable properties, like phase curves (both thermal and reflected)! Sometimes clouds even produce temperature inversions! Also see a lovely complementary paper by @V_Parmentier that is also on arXiv today! (Sorry this thread isnât snazzier... my kindergartener needs help with his class work and is crying in my lap right now)",http://arxiv.org/abs/2010.06936,"Using a general circulation model (GCM), we investigate trends in simulated hot Jupiter atmospheres for a range of irradiation temperatures (1,500 - 4,000 K), surface gravities (10 and 40 m s-2), and cloud conditions. Our models include simplified temperature-dependent clouds with radiative feedback and show how different cloud compositions, vertical thicknesses, and opacities shape hot Jupiters atmospheres by potentially increasing planetary albedos, decreasing photospheric pressures and nightside temperatures, and in some cases producing strong dayside thermal inversions. With decreasing irradiation, clouds progressively form on the nightside and cooler western limb followed by the eastern limb and central dayside. We find that clouds significantly modify the radiative transport and affect the observable properties of planets colder than T_irr ~ 3,000~K (T_eq~2,100 K) depending on the clouds' vertical extent. The precise strength of expected effects depends on the assumed parameters, but trends in predicted phase curves emerge from an ensemble of simulations. Clouds lead to larger phase curve amplitudes and smaller phase curve offsets at IR wavelengths, compared to cloud-free models. At optical wavelengths, we predict mostly westward phase curve offsets at intermediate temperatures (T_irr ~ 2,000 - 3,500 K) with clouds confined to the nightside and western limb. If clouds are vertically compact (i.e. on order of a pressure scale height in thickness), their distributions and effects become more complicated as different condensates form at different heights -- some too deep to significantly affect the observable atmosphere. Our results have implications for interpreting the diversity of phase curve observations of planets with T_irr <~3,000~K. ","Clouds in Three-Dimensional Models of Hot Jupiters Over a Wide Range of
Temperatures I: Thermal Structures and Broadband Phase Curve Predictions",6,"['đŁ New paper alert! In this work, led by Michael Roman, we present a grid of 3D models of hot Jupiters with clouds (temperature dependent and with radiative feedback) across a wide range of irradiation temperature, for different assumed cloud properties. ', 'In order of decreasing irradiation temperate, clouds form successively on: the night, the cool western terminator, the eastern terminator, and even across the dayside!', 'For planets cooler than irradiation temperatures of about 3000 K (Teq ~ 2100 K) clouds influence observable properties, like phase curves (both thermal and reflected)!', 'Sometimes clouds even produce temperature inversions!', 'Also see a lovely complementary paper by @V_Parmentier that is also on arXiv today!', '(Sorry this thread isnât snazzier... my kindergartener needs help with his class work and is crying in my lap right now)']",20,10,862
64,9,1422157614306234368,192826908,Jorge Lillo-Box,"Paper day! The @KeplerGO mission still hides many interesting discoveries. We have confirmed new planets detected with Kepler that are NOT transiting... how can that be? đ In Millholland & Laughlin (2017), my paper mates discovered a bunch of light curve modulations that could potentially be attributed to planets inducing three effects: 1) Reflexion of stellar light, 2) ellipsoidal modulations (due to tides), and 3) Doppler Beaming We used CAFE spectrograph from @CalarAltoObs and HERMES from Mercator Telescope @OOCC_IAC to confirm three of these planets with radial velocity and validate other three. We also used AstraLux high-spatial resolution camera to discard possible close contaminant sources. I would like to thank my two paper mates Sarah Millholland and Greg Laughlin for their help and support in developing this work. This new planet detection technique is very promising for future missions like @ESA_Plato and @ArielTelescope !!",https://arxiv.org/abs/2107.14621,"The direct detection of new extrasolar planets from high-precision photometry data is commonly based on the observation of the transit signal of the planet as it passes in front of its star. Close-in planets, however, leave additional imprints in the light curve even if they do not transit. These are the so-called phase curve variations that include ellipsoidal, reflection and beaming effects. In Millholland & Laughlin (2017), the authors scrutinized the Kepler database looking for these phase variations from non-transiting planets. They found 60 candidates whose signals were compatible with planetary companions. In this paper, we perform a ground-based follow-up of a sub-sample of these systems with the aim of confirming and characterizing these planets and thus validating the detection technique. We used the CAFE and HERMES instruments to monitor the radial velocity of ten non-transiting planet candidates along their orbits. We additionally used AstraLux to obtain high-resolution images of some of these candidates to discard blended binaries that contaminate the Kepler light curves by mimicking planetary signals. Among the ten systems, we confirm three new hot-Jupiters (KIC8121913 b, KIC10068024 b, and KIC5479689 b) with masses in the range 0.5-2 M$_{\rm Jup}$ and set mass constraints within the planetary regime for the other three candidates (KIC8026887b, KIC5878307 b, and KIC11362225 b), thus strongly suggestive of their planetary nature. For the first time, we validate the technique of detecting non-transiting planets via their phase curve variations. We present the new planetary systems and their properties. We find good agreement between the RV-derived masses and the photometric masses in all cases except KIC8121913 b, which shows a significantly lower mass derived from the ellipsoidal modulations than from beaming and radial velocity data. ","Follow-up of non-transiting planets detected by Kepler. Confirmation of
three hot-Jupiters and validation of three other planets",5,"['Paper day! The @KeplerGO mission still hides many interesting discoveries. We have confirmed new planets detected with Kepler that are NOT transiting... how can that be? đ\n', 'In Millholland & Laughlin (2017), my paper mates discovered a bunch of light curve modulations that could potentially be attributed to planets inducing three effects: 1) Reflexion of stellar light, 2) ellipsoidal modulations (due to tides), and 3) Doppler Beaming https://t.co/1IiDm3Vfg6', 'We used CAFE spectrograph from @CalarAltoObs and HERMES from Mercator Telescope @OOCC_IAC to confirm three of these planets with radial velocity and validate other three. https://t.co/ASrO5BFagi', 'We also used AstraLux high-spatial resolution camera to discard possible close contaminant sources. https://t.co/9UACFXLu7r', 'I would like to thank my two paper mates Sarah Millholland and Greg Laughlin for their help and support in developing this work. This new planet detection technique is very promising for future missions like @ESA_Plato and @ArielTelescope !!']",21,07,976
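The three photometric effects named in the thread above have simple low-order harmonic forms, so a toy phase-curve model is easy to write down. The sketch below uses one common sign convention (conventions, especially the beaming sign, differ between papers) and placeholder amplitudes; it is not the model fitted in the paper.

```python
# Toy "BEER"-style phase-curve model: reflection + ellipsoidal + Doppler
# beaming harmonics (one common convention; amplitudes in ppm are placeholders).
import numpy as np

def phase_curve(phase, a_refl=60.0, a_ellip=20.0, a_beam=5.0):
    """phase in [0, 1), with phase = 0 at inferior conjunction."""
    refl = -a_refl * np.cos(2 * np.pi * phase)    # peaks at superior conjunction
    ellip = -a_ellip * np.cos(4 * np.pi * phase)  # two maxima per orbit (tidal bulge)
    beam = a_beam * np.sin(2 * np.pi * phase)     # sign depends on the RV convention
    return refl + ellip + beam                    # relative flux, ppm

phases = np.linspace(0, 1, 500)
flux_ppm = phase_curve(phases)
```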
65,89,1327032710318190593,1559281832,Vishvas Pandey,"Check out new paper today: - Flavor universality do break in CEvNS at loop level and it has interesting implications! I learnt a lot of interesting physics and fun stuff working with some awesome people Sasha, @RyanPlestid and @PedroANMachado! @psbarbeau @RyanPlestid @PedroANMachado @KateScholberg Thanks Phil, that would be great! mentions that a detailed CEvNS calculations including radiative corrections were carried out in Ref. [29], would it be possible to share that tech note. @psbarbeau @RyanPlestid @PedroANMachado @KateScholberg I guess the codes of that tech note are in the duke github that you shared.",https://arxiv.org/abs/2011.05960,"We calculate coherent elastic neutrino-nucleus scattering cross sections on spin-0 nuclei (e.g. $^{40}$Ar and $^{28}$Si) at energies below 100 MeV within the Standard Model and account for all effects of permille size. We provide a complete error budget including uncertainties at nuclear, nucleon, hadronic, and quark levels separately as well as perturbative error. Our calculation starts from the four-fermion effective field theory to explicitly separate heavy-particle mediated corrections (which are absorbed by Wilson coefficients) from light-particle contributions. Electrons and muons running in loops introduce a nontrivial dependence on the momentum transfer due to their relatively light masses. These same loops, and those mediated by tau leptons, break the flavor universality because of mass-dependent electromagnetic radiative corrections. Nuclear physics uncertainties significantly cancel in flavor asymmetries resulting in subpercent relative errors. We find that for low neutrino energies, the cross section can be predicted with a relative precision that is competitive with neutrino-electron scattering. We highlight potentially useful applications of such a precise cross section prediction ranging from precision tests of the Standard Model, to searches for new physics and to the monitoring of nuclear reactors. ","Flavor-dependent radiative corrections in coherent elastic
neutrino-nucleus scattering",3,"['Check out new paper today: - Flavor universality do break in CEvNS at loop level and it has interesting implications! I learnt a lot of interesting physics and fun stuff working with some awesome people Sasha, @RyanPlestid and @PedroANMachado!', '@psbarbeau @RyanPlestid @PedroANMachado @KateScholberg Thanks Phil, that would be great! https://t.co/MYGpSXFuXk mentions that a detailed CEvNS calculations including radiative corrections were carried out in Ref. [29], would it be possible to share that tech note.', '@psbarbeau @RyanPlestid @PedroANMachado @KateScholberg I guess the codes of that tech note are in the duke github that you shared.']",20,11,630
66,102,1146050901565743105,10547982,"Orad Reshef, PhD","New paper out on arXiv: We demonstrate a high-Q multi-resonant plasmonic metasurface. Also provide a quick analytic code to help you design it! Featuring @MdSaadBinAlam1, @RWBoyd12 and our industrial collaborators at @IridianSpectral Technologies. @MdSaadBinAlam1 @RWBoyd12 @IridianSpectral Special thanks to everyone at twitter with the great advice on how to include our code with the article, notably @NanoStAndrews @j_bertolotti @johnmdudley @mickeykats @LeeCBassett @gonuke",https://arxiv.org/abs/1907.00458,"Resonant metasurfaces are devices composed of nanostructured sub-wavelength scatterers that generate narrow optical resonances, enabling applications in filtering, nonlinear optics, and molecular fingerprinting. It is highly desirable for these applications to incorporate such devices with multiple, high-quality-factor resonances; however, it can be challenging to obtain more than a pair of narrow resonances in a single plasmonic surface. Here, we demonstrate a multi-resonant metasurface that operates by extending the functionality of surface lattice resonances, which are the collective responses of arrays of metallic nanoparticles. This device features a series of resonances with high quality factors (Q ~ 40), an order of magnitude larger than what is typically achievable with plasmonic nanoparticles, as well as a narrow free spectral range. This design methodology can be used to better tailor the transmission spectrum of resonant metasurfaces and represents an important step towards the miniaturization of optical devices. ",Multi-resonant high-Q plasmonic metasurfaces,2,"['New paper out on arXiv:\n\n\nWe demonstrate a high-Q multi-resonant plasmonic metasurface. Also provide a quick analytic code to help you design it!\n\nFeaturing @MdSaadBinAlam1, @RWBoyd12 and our industrial collaborators at @IridianSpectral Technologies. ', '@MdSaadBinAlam1 @RWBoyd12 @IridianSpectral Special thanks to everyone at twitter with the great advice on how to include our code with the article, notably @NanoStAndrews @j_bertolotti @johnmdudley @mickeykats @LeeCBassett @gonuke']",19,07,492
67,104,1325706065820995585,856802303533305856,Dr Aaron Jones,"New paper on the arxiv about the #EinsteinTelescope (ET) design, led by Sam Rowlinson (from @UoBIGWaves). 1/8 The Einstein Telescope (, ET) is a proposed gravitational wave observatory, with substantial improvements, allowing us to detect new types and more gravitational waves 2/8 For optimal sensitivity the observatory will be a triangle shape (instead of an L) compromising of 6 interferometers! Each side will be 10 km (instead of 3km used by @ego_virgo and @KAGRA_PR and 4km used by @LIGO). 3/8 Because the arm is very long, the laser beam will be very wide when it exits the ""arm cavity"" (where the gravitational wave interacts with the light). This ""wide"" laser beam must be reduced in size so it can be matched to the laser 4/8 We design a telescope to reduce the size of this laser beam to an intermediate size as soon as it exits the arm. This allows the use of a smaller beamsplitter (saving money) and has a number of other technical advantages 5/8 These include: decoupling the effects of beam jitter; steering the ""cold"" and ""hot"" beams around each other, allowing good placement of beams in the tunnels. The size is chosen by a tradeoff analysis considering the effects of thermal lensing and thermal noise. 6/8 The paper closes with a discussion about which parameters can be ""tuned"" without changing the overall design, based on further studies. 7/8 Lastly, all of this work was completed with the open-source software Finesse3 ()! 8/8",https://arxiv.org/abs/2011.02983,"The optical design of the Einstein Telescope (ET) is based on a dual-recycled Michelson interferometer with Fabry-Perot cavities in the arms. ET will be constructed in a new infrastructure, allowing us to consider different technical implementations beyond the constraints of the current facilities. In this paper we investigate the feasibility of using beam-expander telescopes in the interferometer arms. We provide an example implementation that matches the optical layout as presented in the ET design update 2020. We further show that the beam-expander telescopes can be tuned to compensate for mode mismatches between the arm cavities and the rest of the interferometer. ","Feasibility study of beam-expanding telescopes in the interferometer
arms for the Einstein Telescope",8,"['New paper on the arxiv about the #EinsteinTelescope (ET) design, led by Sam Rowlinson (from @UoBIGWaves). 1/8 ', 'The Einstein Telescope (https://t.co/eY4VS6y0PY, ET) is a proposed gravitational wave observatory, with substantial improvements, allowing us to detect new types and more gravitational waves 2/8 https://t.co/NZ9BJ7a1fv', 'For optimal sensitivity the observatory will be a triangle shape (instead of an L) compromising of 6 interferometers! Each side will be 10 km (instead of 3km used by @ego_virgo and @KAGRA_PR and 4km used by @LIGO). 3/8 https://t.co/RhFMQB5SJF', 'Because the arm is very long, the laser beam will be very wide when it exits the ""arm cavity"" (where the gravitational wave interacts with the light). This ""wide"" laser beam must be reduced in size so it can be matched to the laser 4/8 https://t.co/b3CrfMxwpD', 'We design a telescope to reduce the size of this laser beam to an intermediate size as soon as it exits the arm. This allows the use of a smaller beamsplitter (saving money) and has a number of other technical advantages 5/8 https://t.co/a32NlZkZBD', 'These include: decoupling the effects of beam jitter; steering the ""cold"" and ""hot"" beams around each other, allowing good placement of beams in the tunnels. The size is chosen by a tradeoff analysis considering the effects of thermal lensing and thermal noise. 6/8', 'The paper closes with a discussion about which parameters can be ""tuned"" without changing the overall design, based on further studies. 7/8 https://t.co/9LzHGFV2G2', 'Lastly, all of this work was completed with the open-source software Finesse3 (https://t.co/aeqTWD1Kbi)! 8/8']",20,11,1514
68,109,1456601775444725761,780011537738174464,Dennis Soemers,"(1/5) Our new paper on ""Optimised Playout Implementations for the Ludii General Game System"" is now available on arXiv! (2/5) Based on the game descriptions in the game description language of @LudiiGames, we automatically detect whether games fit in any of a few broad classes of games that permit significant optimisations for (semi-)random playouts / rollouts (as used in e.g. MCTS) (3/5) One example is in games like Chess, where legal + illegal (due to some postcondition) moves (blue + red arrows) are cheap to compute, but filtering this down to only the legal (blue) moves is expensive. (4/5) In rollouts, we sample moves (uniformly or otherwise) from the complete set, and only evaluate the expensive postconditions for moves that were sampled (rejecting them and resampling if postconditions are not satisfied). The same idea applies to liberty tests in Go. (5/5) This work will be presented at ACG 2021 in about two and a half weeks. Register for free at ! @LudiiGames @archaeoludology @UM_DKE @ERC_Research",https://arxiv.org/abs/2111.02839,"This paper describes three different optimised implementations of playouts, as commonly used by game-playing algorithms such as Monte-Carlo Tree Search. Each of the optimised implementations is applicable only to specific sets of games, based on their rules. The Ludii general game system can automatically infer, based on a game's description in its general game description language, whether any optimised implementations are applicable. An empirical evaluation demonstrates major speedups over a standard implementation, with a median result of running playouts 5.08 times as fast, over 145 different games in Ludii for which one of the optimised implementations is applicable. ",Optimised Playout Implementations for the Ludii General Game System,5,"['(1/5) Our new paper on ""Optimised Playout Implementations for the Ludii General Game System"" is now available on arXiv! ', '(2/5) Based on the game descriptions in the game description language of @LudiiGames, we automatically detect whether games fit in any of a few broad classes of games that permit significant optimisations for (semi-)random playouts / rollouts (as used in e.g. MCTS)', '(3/5) One example is in games like Chess, where legal + illegal (due to some postcondition) moves (blue + red arrows) are cheap to compute, but filtering this down to only the legal (blue) moves is expensive. https://t.co/su59U36SFW', '(4/5) In rollouts, we sample moves (uniformly or otherwise) from the complete set, and only evaluate the expensive postconditions for moves that were sampled (rejecting them and resampling if postconditions are not satisfied). The same idea applies to liberty tests in Go.', '(5/5) This work will be presented at ACG 2021 in about two and a half weeks. Register for free at https://t.co/hPWDBa893J!\n\n@LudiiGames @archaeoludology @UM_DKE @ERC_Research']",21,11,1045
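The optimisation described in tweets (3/5) and (4/5) of the thread above is essentially lazy legality checking during playouts. Ludii itself is written in Java; the sketch below is only a language-neutral toy of the idea, with hypothetical function names.

```python
# Lazy legality checking for random playouts: sample from the cheap set of
# pseudo-legal moves and only run the expensive postcondition test (e.g.
# "does this leave my king in check?") on moves that actually get sampled.
import random

def sample_legal_move(pseudo_legal_moves, is_truly_legal, rng=random):
    """pseudo_legal_moves: cheap candidate list;
    is_truly_legal(move) -> bool runs the expensive postcondition."""
    candidates = list(pseudo_legal_moves)
    while candidates:
        move = rng.choice(candidates)
        if is_truly_legal(move):          # expensive check, evaluated lazily
            return move
        candidates.remove(move)           # reject and resample from the rest
    return None                           # no legal move: terminal position
```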
69,59,963588291642654720,22148802,Leo C. Stein đŠ,"New preprint, with Davide Gerosa and François HĂ©bert! Using numerical relativity data to directly model the kicks that black holes get when they merge. Read the paper at @astrocrash When it's as easy as 'import surrkick', how could you pass it over? BTW, I'm guessing most astronomers just want the final remnant properties, and don't care about the kick profile. In that case, a custom interpolant could be built that would be way faster! On the TODO list.",https://arxiv.org/abs/1802.04276,"Binary black holes radiate linear momentum in gravitational waves as they merge. Recoils imparted to the black-hole remnant can reach thousands of km/s, thus ejecting black holes from their host galaxies. We exploit recent advances in gravitational waveform modeling to quickly and reliably extract recoils imparted to generic, precessing, black hole binaries. Our procedure uses a numerical-relativity surrogate model to obtain the gravitational waveform given a set of binary parameters, then from this waveform we directly integrate the gravitational-wave linear momentum flux. This entirely bypasses the need of fitting formulae which are typically used to model black-hole recoils in astrophysical contexts. We provide a thorough exploration of the black-hole kick phenomenology in the parameter space, summarizing and extending previous numerical results on the topic. Our extraction procedure is made publicly available as a module for the Python programming language named SURRKICK. Kick evaluations take ~0.1s on a standard off-the-shelf machine, thus making our code ideal to be ported to large-scale astrophysical studies. ",Black-hole kicks from numerical-relativity surrogate models,2,"['New preprint, with Davide Gerosa and François HĂ©bert! Using numerical relativity data to directly model the kicks that black holes get when they merge. Read the paper at ', ""@astrocrash When it's as easy as 'import surrkick', how could you pass it over?\nBTW, I'm guessing most astronomers just want the final remnant properties, and don't care about the kick profile. In that case, a custom interpolant could be built that would be way faster! On the TODO list.""]",18,02,471
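The thread above mentions `import surrkick`. Below is the usage as I recall it from the project's README; the constructor keywords (`q`, `chi1`, `chi2`) and the attribute names (`kick`, `Erad`) should be treated as assumptions and checked against the package documentation before use.

```python
# Assumed surrkick interface (from memory of the project README -- verify
# against the surrkick documentation before relying on these names).
import surrkick

sk = surrkick.surrkick(q=0.8,                 # mass ratio (assumed keyword)
                       chi1=[0.7, 0.0, 0.0],  # dimensionless spin of BH 1 (assumed)
                       chi2=[-0.7, 0.0, 0.0]) # dimensionless spin of BH 2 (assumed)
print("recoil / c :", sk.kick)                # assumed attribute: kick magnitude
print("E radiated :", sk.Erad)                # assumed attribute: radiated energy
```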
70,0,1271501734045761537,864540407383937024,Dr. Estelle Smith đ (she/her),"The new @CaringBridge Planner is a great way to coordinate support during health crises & our new @acmtochi paper explores design implications for features like this. Pre-print here: Results spoilers below! Thread 1/6 Results Spoiler #1: Friends and family of patients don't always align with patients/caregivers on what types of instrumental support they are interested in providing. However, tools like the Planner can possibly help to re-align these user groups. 2/6 Results Spoiler #2: Emotional and instrumental support are important to CaringBridge users. However, we found that *prayer support* is the most important support category to these users. This suggests a significantly understudied design opportunity in online communities. 3/6 Results Spoiler #3: People generally have more trust in their closest social connections to provide instrumental support than more distant acquaintances or businesses. (No surprise there!) 4/6 Results Spoiler #4: People generally trust traditional businesses more than sharing economy apps to help out during health crises. But in some cases, their trust in app-based businesses like @lyft @uber or @gofundme is greater than their trust in taxi services or banks. 5/6 I'm looking forward to tweeting out a full blog post once the paper has been officially published, but here's a first peek at the pre-print! :) đ Thanks for reading! 6/6 @andrewmiller @CaringBridge @acmtochi That's great to hear @andrewmiller. Will keep an eye out for your upcoming work! Support coordination is challenging, but there are important opportunities for tech to make it a little easier. It's certainly a big focus at CaringBridge right now, as the new Planner shows.",https://arxiv.org/abs/2005.11884,"Online Health Communities (OHCs) are known to provide substantial emotional and informational support to patients and family caregivers facing life-threatening diagnoses like cancer and other illnesses, injuries, or chronic conditions. Yet little work explores how OHCs facilitate other vital forms of social support, especially instrumental support. We partner with CaringBridge.org---a prominent OHC for journaling about health crises---to complete a two-phase study focused on instrumental support. Phase one involves a content analysis of 641 CaringBridge updates. Phase two is a survey of 991 CaringBridge users. Results show that patients and family caregivers diverge from their support networks in their preferences for specific instrumental support types. Furthermore, ``prayer support'' emerged as the most prominent support category across both phases. We discuss design implications to accommodate divergent preferences and to expand the instrumental support network. We also discuss the need for future work to empower family caregivers and to support spirituality. ","""I Cannot Do All of This Alone"": Exploring Instrumental and Prayer
Support in Online Health Communities",7,"['The new @CaringBridge Planner is a great way to coordinate support during health crises & our new @acmtochi paper explores design implications for features like this. Pre-print here: Results spoilers below! Thread 1/6 ', ""Results Spoiler #1: Friends and family of patients don't always align with patients/caregivers on what types of instrumental support they are interested in providing. However, tools like the Planner can possibly help to re-align these user groups. 2/6"", 'Results Spoiler #2: Emotional and instrumental support are important to CaringBridge users. However, we found that *prayer support* is the most important support category to these users. This suggests a significantly understudied design opportunity in online communities. 3/6', 'Results Spoiler #3: People generally have more trust in their closest social connections to provide instrumental support than more distant acquaintances or businesses. (No surprise there!) 4/6', 'Results Spoiler #4: People generally trust traditional businesses more than sharing economy apps to help out during health crises. But in some cases, their trust in app-based businesses like @lyft @uber or @gofundme is greater than their trust in taxi services or banks. 5/6', ""I'm looking forward to tweeting out a full blog post once the paper has been officially published, but here's a first peek at the pre-print! :) https://t.co/9Um0Z838ee đ Thanks for reading! 6/6"", ""@andrewmiller @CaringBridge @acmtochi That's great to hear @andrewmiller. Will keep an eye out for your upcoming work! Support coordination is challenging, but there are important opportunities for tech to make it a little easier. It's certainly a big focus at CaringBridge right now, as the new Planner shows.""]",20,05,1715
71,95,1103592738765893632,797888987675365377,Tom Rainforth,"Excited to announce our new AISTATS paper. LF-PPL: A Low-Level First Order Probabilistic Programming Language for Non-Differentiable Models. Yuan Zhou*, @BayesianBrad*, Tobias Kohn, myself, @hyang144, and @frankdonaldwood. Preprint available here ",http://arxiv.org/abs/1903.02482,"We develop a new Low-level, First-order Probabilistic Programming Language (LF-PPL) suited for models containing a mix of continuous, discrete, and/or piecewise-continuous variables. The key success of this language and its compilation scheme is in its ability to automatically distinguish parameters the density function is discontinuous with respect to, while further providing runtime checks for boundary crossings. This enables the introduction of new inference engines that are able to exploit gradient information, while remaining efficient for models which are not everywhere differentiable. We demonstrate this ability by incorporating a discontinuous Hamiltonian Monte Carlo (DHMC) inference engine that is able to deliver automated and efficient inference for non-differentiable models. Our system is backed up by a mathematical formalism that ensures that any model expressed in this language has a density with measure zero discontinuities to maintain the validity of the inference engine. ","LF-PPL: A Low-Level First Order Probabilistic Programming Language for
Non-Differentiable Models",1,"['Excited to announce our new AISTATS paper. LF-PPL: A Low-Level First Order Probabilistic Programming Language for Non-Differentiable Models. Yuan Zhou*, @BayesianBrad*, Tobias Kohn, myself, @hyang144, and @frankdonaldwood. Preprint available here ']",19,03,253
72,73,1042215786968887296,1008831733406490624,Emily Martin,"Excited to announce my last thesis paper was accepted for publication!! We present parallaxes for 22 late-T and Y dwarfs using @NASAspitzer data. There are updated parallaxes for 18 targets and new parallaxes for 4 targets! Also 1 new Y dwarf! and 6 new T dwarfs, all with Keck/NIRSPEC spectra! We found that the Y dwarf class spans a larger range in absolute magnitudes than expected-- implying a large range in effective temperatures. Bottom line: give us all the #JWST time please, because these dwarfs are COLD! đ",https://arxiv.org/abs/1809.06479,"Y dwarfs provide a unique opportunity to study free-floating objects with masses $<$30 M$_{Jup}$ and atmospheric temperatures approaching those of known Jupiter-like exoplanets. Obtaining distances to these objects is an essential step towards characterizing their absolute physical properties. Using Spitzer/IRAC [4.5] images taken over baselines of $\sim$2-7 years, we measure astrometric distances for 22 late-T and early Y dwarfs, including updated parallaxes for 18 objects and new parallax measurements for 4 objects. These parallaxes will make it possible to explore the physical parameter space occupied by the coldest brown dwarfs. We also present the discovery of 6 new late-T dwarfs, updated spectra of two T dwarfs, and the reclassification of a new Y dwarf, WISE J033605.04$-$014351.0, based on Keck/NIRSPEC $J$-band spectroscopy. Assuming that effective temperatures are inversely proportional to absolute magnitude, we examine trends in the evolution of the spectral energy distributions of brown dwarfs with decreasing effective temperature. Surprisingly, the Y dwarf class encompasses a large range in absolute magnitude in the near- to mid-infrared photometric bandpasses, demonstrating a larger range of effective temperatures than previously assumed. This sample will be ideal for obtaining mid-infrared spectra with the James Webb Space Telescope because their known distances will make it easier to measure absolute physical properties. ",Y dwarf Trigonometric Parallaxes from the Spitzer Space Telescope,2,"['Excited to announce my last thesis paper was accepted for publication!! We present parallaxes for 22 late-T and Y dwarfs using @NASAspitzer data. There are updated parallaxes for 18 targets and new parallaxes for 4 targets! Also 1 new Y dwarf!', 'and 6 new T dwarfs, all with Keck/NIRSPEC spectra! We found that the Y dwarf class spans a larger range in absolute magnitudes than expected-- implying a large range in effective temperatures. Bottom line: give us all the #JWST time please, because these dwarfs are COLD! đ']",18,09,524
73,74,1191975791850070018,1702174146,Janis Keuper,"Is this an image of real person? #deepfakes are hard to identify... in our new paper ""Unmasking DeepFakes with simple Features"" we are able to solve #deefake detection in face images with up to 100% accuracy on several benchmarks: paper: #deepfakes #deepfake Source code for our #deepfake detection is available: #deepfakes ",https://arxiv.org/abs/1911.00686,"Deep generative models have recently achieved impressive results for many real-world applications, successfully generating high-resolution and diverse samples from complex datasets. Due to this improvement, fake digital contents have proliferated growing concern and spreading distrust in image content, leading to an urgent need for automated ways to detect these AI-generated fake images. Despite the fact that many face editing algorithms seem to produce realistic human faces, upon closer examination, they do exhibit artifacts in certain domains which are often hidden to the naked eye. In this work, we present a simple way to detect such fake face images - so-called DeepFakes. Our method is based on a classical frequency domain analysis followed by basic classifier. Compared to previous systems, which need to be fed with large amounts of labeled data, our approach showed very good results using only a few annotated training samples and even achieved good accuracies in fully unsupervised scenarios. For the evaluation on high resolution face images, we combined several public datasets of real and fake faces into a new benchmark: Faces-HQ. Given such high-resolution images, our approach reaches a perfect classification accuracy of 100% when it is trained on as little as 20 annotated samples. In a second experiment, in the evaluation of the medium-resolution images of the CelebA dataset, our method achieves 100% accuracy supervised and 96% in an unsupervised setting. Finally, evaluating a low-resolution video sequences of the FaceForensics++ dataset, our method achieves 91% accuracy detecting manipulated videos. Source Code: this https URL ",Unmasking DeepFakes with simple Features,3,"['Is this an image of real person? #deepfakes are hard to identify... in our new paper ""Unmasking DeepFakes with simple Features"" we are able to solve #deefake detection in face images with up to 100% accuracy on several benchmarks: ', 'paper: https://t.co/MFML2IyIMt #deepfakes #deepfake https://t.co/jXimdYiuV4', 'Source code for our #deepfake detection is available: https://t.co/CsEa5Prjic #deepfakes https://t.co/RnfVhWIuCO']",19,11,365
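The pipeline in the abstract above (classical frequency-domain analysis followed by a basic classifier) can be illustrated generically: an azimuthally averaged 2D power spectrum as the feature vector, fed to logistic regression. This is not the authors' released code; the bin count and classifier choice are placeholders.

```python
# Generic frequency-domain fake-image features: azimuthally averaged 2D power
# spectrum + a simple classifier.  Not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def radial_power_spectrum(gray_img, n_bins=64):
    """gray_img: 2D float array. Returns a radially averaged log-power profile."""
    f = np.fft.fftshift(np.fft.fft2(gray_img))
    power = np.log(np.abs(f) ** 2 + 1e-12)
    h, w = gray_img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    profile = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return profile / np.maximum(counts, 1)

def fit_detector(images, labels):
    """images: list of 2D grayscale arrays; labels: 1 = fake, 0 = real."""
    X = np.stack([radial_power_spectrum(im) for im in images])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```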
74,59,1518618467598946304,1870088930,Nikhil Garg,"New paper! ""Equity in Resident Crowdsourcing: Measuring Under-reporting without Ground Truth Data"" w/ @ZhiLiu724 We develop method to quantify under-reporting in @nyc311-like systems. Applying it to NYC Parks data reveals big spatial disparities Quantifying under-reporting is tricky. Why? The whole reason these systems exist is bc we don't know when/that incidents happen. So, if something isn't reported: is it because it didn't happen, or because no one reported it? Standard approach: Try to estimate ground truth. This is hard (and harder to validate), for same missing data reasons! Our approach: Leverage that there are sometimes *multiple* reports about the same incident, to disentangle reporting rates from incident occurrence rates. Why does this matter? If there are reporting disparities, that propagates to downstream disparities in what work government agencies do, and when they do it. Weâre now working directly with Parks to understand and improve the end-to-end reporting-to-work pipeline. We think our method can be almost directly applied to other reporting systems, like software bugs and other agency/311 systems -- please reach out! Paper link: (Virtual) talk next week 5/5 at C3ai seminar: And (virtual) talk this Wednesday at TOC4Fairness! ",https://arxiv.org/abs/2204.08620,"Modern city governance relies heavily on crowdsourcing (or ""co-production"") to identify problems such as downed trees and power-lines. A major concern in these systems is that residents do not report problems at the same rates, leading to an inequitable allocation of government resources. However, measuring such under-reporting is a difficult statistical task, as, almost by definition, we do not observe incidents that are not reported. Thus, distinguishing between low reporting rates and low ground-truth incident rates is challenging. We develop a method to identify (heterogeneous) reporting rates, without using external (proxy) ground truth data. Our insight is that rates on $\textit{duplicate}$ reports about the same incident can be leveraged, to turn the question into a standard Poisson rate estimation task -- even though the full incident reporting interval is also unobserved. We apply our method to over 100,000 resident reports made to the New York City Department of Parks and Recreation, finding that there are substantial spatial and socio-economic disparities in reporting rates, even after controlling for incident characteristics. ","Equity in Resident Crowdsourcing: Measuring Under-reporting without
Ground Truth Data",6,"['New paper! ""Equity in Resident Crowdsourcing: Measuring Under-reporting without Ground Truth Data"" w/ @ZhiLiu724 \n\nWe develop method to quantify under-reporting in @nyc311-like systems. Applying it to NYC Parks data reveals big spatial disparities \n\n ', ""Quantifying under-reporting is tricky. Why? The whole reason these systems exist is bc we don't know when/that incidents happen. So, if something isn't reported: is it because it didn't happen, or because no one reported it?"", 'Standard approach: Try to estimate ground truth. This is hard (and harder to validate), for same missing data reasons! \n\nOur approach: Leverage that there are sometimes *multiple* reports about the same incident, to disentangle reporting rates from incident occurrence rates.', 'Why does this matter? If there are reporting disparities, that propagates to downstream disparities in what work government agencies do, and when they do it.\n\nWeâre now working directly with Parks to understand and improve the end-to-end reporting-to-work pipeline. https://t.co/0Bg4YXaTCJ', 'We think our method can be almost directly applied to other reporting systems, like software bugs and other agency/311 systems -- please reach out!\n\nPaper link: https://t.co/vjvoOm6Nep\n\n(Virtual) talk next week 5/5 at C3ai seminar: https://t.co/X20ePSUQRj', 'And (virtual) talk this Wednesday at TOC4Fairness!\n\nhttps://t.co/yV0Xf4Yez1']",22,04,1311
75,81,1492096021157056522,1138012858988617728,Hannes StĂ€rk,Our new paper is out!𧏠EquiBind: Geometric Deep Learning for Drug Binding Structure Prediction Fast 3D structure predictions in which molecules bind to proteins! With the lovely team @octavianEganea @lucky_pattanaik @BarzilayRegina Tommi Jaakkola đ€ 1/2 @octavianEganea has a nice thread describing the paper: What I wish to add is that the code is available here: You can easily use it to predict the binding structures of your own ligand-protein pairs! 2/2,https://arxiv.org/abs/2202.05146,"Predicting how a drug-like molecule binds to a specific protein target is a core problem in drug discovery. An extremely fast computational binding method would enable key applications such as fast virtual screening or drug engineering. Existing methods are computationally expensive as they rely on heavy candidate sampling coupled with scoring, ranking, and fine-tuning steps. We challenge this paradigm with EquiBind, an SE(3)-equivariant geometric deep learning model performing direct-shot prediction of both i) the receptor binding location (blind docking) and ii) the ligand's bound pose and orientation. EquiBind achieves significant speed-ups and better quality compared to traditional and recent baselines. Further, we show extra improvements when coupling it with existing fine-tuning techniques at the cost of increased running time. Finally, we propose a novel and fast fine-tuning model that adjusts torsion angles of a ligand's rotatable bonds based on closed-form global minima of the von Mises angular distance to a given input atomic point cloud, avoiding previous expensive differential evolution strategies for energy minimization. ",EquiBind: Geometric Deep Learning for Drug Binding Structure Prediction,2,"['Our new paper is out!đ§Ź\nEquiBind: Geometric Deep Learning for Drug Binding Structure Prediction \n\nFast 3D structure predictions in which molecules bind to proteins!\n\nWith the lovely team @octavianEganea @lucky_pattanaik @BarzilayRegina Tommi Jaakkola đ€\n1/2 ', '@octavianEganea has a nice thread describing the paper: https://t.co/TYjKHSj9RD\n\nWhat I wish to add is that the code is available here: https://t.co/5pEcYPJY6E\n\nYou can easily use it to predict the binding structures of your own ligand-protein pairs!\n2/2']",22,02,485
76,64,1295665741493133312,2992109189,Luca Demetrio,"Welcome RAMEn, a new light-weight formalization for crafting adversarial EXEmples, shipped with two novel powerful attacks: Extend and Shift! Happy to share! Paper: Code: @biggiobattista @DrScottCoull @zxgio @AlessanArmando Roli @EdwardRaffML @filar @biggiobattista @DrScottCoull @zxgio @AlessanArmando It would be very interesting, for sure! Everything is already coded and it is easy to add Malconv+ to the library.",https://arxiv.org/abs/2008.07125,"Recent work has shown that adversarial Windows malware samples - referred to as adversarial EXEmples in this paper - can bypass machine learning-based detection relying on static code analysis by perturbing relatively few input bytes. To preserve malicious functionality, previous attacks either add bytes to existing non-functional areas of the file, potentially limiting their effectiveness, or require running computationally-demanding validation steps to discard malware variants that do not correctly execute in sandbox environments. In this work, we overcome these limitations by developing a unifying framework that does not only encompass and generalize previous attacks against machine-learning models, but also includes three novel attacks based on practical, functionality-preserving manipulations to the Windows Portable Executable (PE) file format. These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section. Our experimental results show that these attacks outperform existing ones in both white-box and black-box scenarios, achieving a better trade-off in terms of evasion rate and size of the injected payload, while also enabling evasion of models that have been shown to be robust to previous attacks. To facilitate reproducibility of our findings, we open source our framework and all the corresponding attack implementations as part of the secml-malware Python library. We conclude this work by discussing the limitations of current machine learning-based malware detectors, along with potential mitigation strategies based on embedding domain knowledge coming from subject-matter experts directly into the learning process. ","Adversarial EXEmples: A Survey and Experimental Evaluation of Practical
Attacks on Machine Learning for Windows Malware Detection",2,"['Welcome RAMEn, a new light-weight formalization for crafting adversarial EXEmples, shipped with two novel powerful attacks: Extend and Shift!\nHappy to share!\nPaper: \nCode: \n@biggiobattista @DrScottCoull @zxgio @AlessanArmando Roli ', '@EdwardRaffML @filar @biggiobattista @DrScottCoull @zxgio @AlessanArmando It would be very interesting, for sure! Everything is already coded and it is easy to add Malconv+ to the library.']",20,08,438
77,77,1349354924639997955,2880029134,Marco Pangallo,"In we showed that reaching equilibrium in 2-player games could be difficult. In our new paper, , we do lots of maths and show that it is even more difficult in games with many players, unless players best-respond in a random order! This has been my first paper more on the coordination side, and can't thank enough @mungoluca, Sam Wiese and Yoojin Jang for all their work during summer internships! And of course also @TorstenHeinriX, Bassel and Alex for their key contributions!",https://arxiv.org/abs/2101.04222,"We analyze the performance of the best-response dynamic across all normal-form games using a random games approach. The playing sequence -- the order in which players update their actions -- is essentially irrelevant in determining whether the dynamic converges to a Nash equilibrium in certain classes of games (e.g. in potential games) but, when evaluated across all possible games, convergence to equilibrium depends on the playing sequence in an extreme way. Our main asymptotic result shows that the best-response dynamic converges to a pure Nash equilibrium in a vanishingly small fraction of all (large) games when players take turns according to a fixed cyclic order. By contrast, when the playing sequence is random, the dynamic converges to a pure Nash equilibrium if one exists in almost all (large) games. ","Best-response dynamics, playing sequences, and convergence to
equilibrium in random games",2,"['In we showed that reaching equilibrium in 2-player games could be difficult. In our new paper, , we do lots of maths and show that it is even more difficult in games with many players, unless players best-respond in a random order! ', ""This has been my first paper more on the coordination side, and can't thank enough @mungoluca, Sam Wiese and Yoojin Jang for all their work during summer internships! And of course also @TorstenHeinriX, Bassel and Alex for their key contributions!""]",21,01,499
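A small simulation in the spirit of the random-games approach above, simplified to two actions per player with i.i.d. uniform payoffs (my choice of ensemble, not the paper's full setting): run the best-response dynamic under a cyclic versus a uniformly random playing sequence and report whether it reaches a pure Nash equilibrium.

```python
# Best-response dynamics on random N-player, 2-action games, comparing a
# cyclic playing sequence with a uniformly random one.  Illustrative sketch.
import itertools
import random

def random_game(n_players, rng):
    profiles = list(itertools.product([0, 1], repeat=n_players))
    return {p: [rng.random() for _ in range(n_players)] for p in profiles}

def best_response(payoffs, profile, i):
    options = {a: profile[:i] + (a,) + profile[i + 1:] for a in (0, 1)}
    return max((0, 1), key=lambda a: payoffs[options[a]][i])

def converges(payoffs, n_players, order, rng, max_updates=2000):
    profile = tuple(rng.randint(0, 1) for _ in range(n_players))
    for _ in range(max_updates):
        i = next(order)
        profile = profile[:i] + (best_response(payoffs, profile, i),) + profile[i + 1:]
        if all(profile[j] == best_response(payoffs, profile, j)
               for j in range(n_players)):
            return True                    # pure Nash equilibrium reached
    return False

rng = random.Random(0)
n = 6
game = random_game(n, rng)
cyclic = itertools.cycle(range(n))
random_order = iter(lambda: rng.randrange(n), None)
print("cyclic order :", converges(game, n, cyclic, rng))
print("random order :", converges(game, n, random_order, rng))
```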
78,67,961518287367688192,3091314052,Pawel Biernacki,"New paper is out: We test particle methods (traditional SPH, pressure SPH), hybrid methods (meshless finite mass) and grid (AMR) in an idealised yet realistic cluster setup. We inject hot bubble and study how it buoyantly rises in the environment without and with turbulence. If Kelvin-Helmholtz instability is well captured, then it is disrupted as expected. Energy is then deposited as kinetic energy. SPH variants do it v. poorly As soon as the paper is accepted we intend to make the initial conditions fully public and encourage others to try their favourite code in this setup, which we hope to become a standard, 3D, astrophysical test for hydro codes. @franco_vazza It is of order of what is feasible for these masses of clusters. Here RAMSES max res is about 1.5 kpc, Rhapsody-G had about 2 kpc. @franco_vazza For full cosmo boxes⊠I think only Illustris TNG comes close @CosmoCa3sar @franco_vazza Yup. I was thinking more of a spatial resolution (we compared mostly to Illustris TNG)",https://arxiv.org/abs/1802.02177,"While feedback from Active Galactic Nuclei (AGN) is an important heating source in the centre of galaxy clusters, it is still unclear how the feedback energy is injected into the intracluster medium (ICM) and what role different numerical approaches play. Here, we compare four hydrodynamical schemes in idealized simulations of a rising bubble inflated by AGN feedback in a hot stratified ICM: (traditional) smoothed particle hydrodynamics (TSPH), a pressure flavour of SPH (PSPH), a meshless finite mass (MFM) scheme, as well as an Eulerian code with adaptive mesh refinement. In the absence of magnetic fields, the bubble is Kelvin-Helmholtz unstable on short enough time scales to dissolve it fully in the ICM, which is captured by MFM and RAMSES simulations, while in the TSPH simulation the bubble survives. When the ICM is turbulent, mixing of the bubble with the ICM is accelerated. This occurs if the numerical scheme can capture the instabilities well. The differences in the evolution of the bubble has a surprisingly small influence on the thermal structure of the ICM. However, in the simulations with MFM and RAMSES the bubble disruption leads to turbulent stirring of the ICM which is suppressed in SPH. In the latter the thermal energy remains trapped in the bubble and is transported to large radii. We discuss if the choice of hydrodynamical schemes can lead to systematic differences in the outcomes of cosmological simulations. ","Physical and numerical stability and instability of AGN bubbles in a hot
intracluster medium",6,"['New paper is out: \n\nWe test particle methods (traditional SPH, pressure SPH), hybrid methods (meshless finite mass) and grid (AMR) in an idealised yet realistic cluster setup. ', 'We inject hot bubble and study how it buoyantly rises in the environment without and with turbulence. If Kelvin-Helmholtz instability is well captured, then it is disrupted as expected. Energy is then deposited as kinetic energy. SPH variants do it v. poorly https://t.co/qSNcsAWcGy', 'As soon as the paper is accepted we intend to make the initial conditions fully public and encourage others to try their favourite code in this setup, which we hope to become a standard, 3D, astrophysical test for hydro codes.', '@franco_vazza It is of order of what is feasible for these masses of clusters. Here RAMSES max res is about 1.5 kpc, Rhapsody-G had about 2 kpc.', '@franco_vazza For full cosmo boxes⊠I think only Illustris TNG comes close', '@CosmoCa3sar @franco_vazza Yup. I was thinking more of a spatial resolution (we compared mostly to Illustris TNG)']",18,02,1014
79,49,1508407909155278848,13800042,Lukas Heinrich,"Short new paper on lhood-free inference:We use the profile lhood ratio b/c of its asymptotically optimal properties but often need to approximate p(x|Ξ) to compute it. But if we take its properties seriously, we can find it in a fully likelihood-free way: As in the likelihood ratio trick, the idea is simple: if we use a test stat because it's optimal, that just means we can find it through optimization. The LRT is (asymptotically) optimal is various ways e.g. has best average power in a certain sense The idea is to optimizing a neural network-based statistic for best average power - this will generally give you a good test statistic. If the underlying model behaves asymptotically, this recovers the profile likelihood ratio without ever having to evaluate or fit p(x|Ξ)",http://arxiv.org/abs/2203.13079,"The design of optimal test statistics is a key task in frequentist statistics and for a number of scenarios optimal test statistics such as the profile-likelihood ratio are known. By turning this argument around we can find the profile likelihood ratio even in likelihood-free cases, where only samples from a simulator are available, by optimizing a test statistic within those scenarios. We propose a likelihood-free training algorithm that produces test statistics that are equivalent to the profile likelihood ratios in cases where the latter is known to be optimal. ",Learning Optimal Test Statistics in the Presence of Nuisance Parameters,3,"['Short new paper on lhood-free inference:We use the profile lhood ratio b/c of its asymptotically optimal properties but often need to approximate p(x|Ξ) to compute it. But if we take its properties seriously, we can find it in a fully likelihood-free way: ', ""As in the likelihood ratio trick, the idea is simple: if we use a test stat because it's optimal, that just means we can find it through optimization. The LRT is (asymptotically) optimal is various ways e.g. has best average power in a certain sense"", 'The idea is to optimizing a neural network-based statistic for best average power - this will generally give you a good test statistic. If the underlying model behaves asymptotically, this recovers the profile likelihood ratio without ever having to evaluate or fit p(x|Ξ)']",22,03,792
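For reference, the test statistic being targeted in the thread above is the standard profile likelihood ratio; the definition below uses generic notation (μ for the parameter of interest, ν for nuisance parameters), which is standard but not necessarily the paper's.

```latex
% Profile likelihood ratio (standard definition; notation mine):
\lambda(\mu; x) \;=\;
  \frac{\sup_{\nu} L(\mu, \nu \mid x)}{\sup_{\mu', \nu} L(\mu', \nu \mid x)},
\qquad
t_\mu(x) \;=\; -2 \log \lambda(\mu; x).
% Wilks' theorem: under the hypothesis \mu, t_\mu is asymptotically
% \chi^2-distributed, which underlies its asymptotic optimality.
```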
80,74,1361305266805964800,1263810665476653057,Denis Vida,"In our new paper, we develop a method to measure energies and trajectories of big fireballs using only nearby seismic and infrasound stations! The approach is simple, we just assume fireballs are seismic sources moving at Mach ~50. @westernuScience We also provide open-source software for energy and trajectory inversion: Congrats @LukeFMcFadden on your first publication! More are to come, and I'm sure they will be as amazing as this one!",https://arxiv.org/abs/2102.06574,"Near field acoustical signals from fireballs (ranges<200 km), when detected by dense ground networks, may be used to estimate the orientation of the trajectory of a fireball (Pujol et al., 2005) as well as fragmentation locations (Kalenda et al., 2014; Edwards and Hildebrand, 2004). Distinguishing ballistic arrivals (from the cylindrical shock of the fireball)from fragmentation generated signals (quasi-spherical sources) remains a challenge, but are obtainable through analysis of the acoustic path and the timing observed at ground instruments. Here we describe an integrated computer code, termed the Bolide Acoustic Modelling program or BAM, to estimate fireball trajectories and energetics. We develop a new methodology for measuring energy release from bolide fragmentation episodes solely from acoustic measurements and incorporate this into BAM. We also explore the sensitivity of seismo-acoustic fireball solutions and energy estimates to uncertainty in the underlying atmospheric model. Applying BAM to the Stubenberg meteorite producing fireball, we find the total fireball energy from ballistic arrivals to be approximately $5 \times 10^{10}$J which compares favorably to the optical estimate of $4.36 \times 10^{10}$J. The combined fragmentation energy of the Stubenberg event from acoustic data was found to be $1.47^{+0.28}_{-0.12} \times 10^{10}$J, roughly one third of the ballistic or optical total energy. We also show that measuring fireball velocities from acoustic data alone is very challenging but may be possible for slow, deeply penetrating fireballs with shallow entry angles occurring over dense seismic/infrasound networks. ",Fireball characteristics derivable from acoustic data,3,"['In our new paper, we develop a method to measure energies and trajectories of big fireballs using only nearby seismic and infrasound stations!\n\nThe approach is simple, we just assume fireballs are seismic sources moving at Mach ~50.\n@westernuScience ', 'We also provide open-source software for energy and trajectory inversion: https://t.co/9JDrU9XFVu https://t.co/iUfz7V7zE9', ""Congrats @LukeFMcFadden on your first publication! More are to come, and I'm sure they will be as amazing as this one!""]",21,02,469
81,41,1419848836109856769,96779364,Arnab Bhattacharyya,"New paper: (Efficient inference of interventional distributions) with Sutanu Gayen, @Saravanan_CU, Vedant Raval, and N.V. Vinodchandran. TL;DR: Get observational samples, learn interventional distribution, with guarantees for poly # samples and runtime. Continuing a line of work we initiated earlier (), we give finite sample guarantees for estimating causal effects, given knowledge of a causal graph and access to observational samples. All variables are discrete, from a finite-sized alphabet (e.g, binary). Seminal work by @yudapearl and Shpitser 15+ years ago had pinpointed conditions under which interventions are identifiable in this setting. They gave an algorithm which can require an unbounded number of samples. Our work makes their result algorithmic and quantitative. We also prove a hardness result showing that marginals of Bayes nets (no interventions) are hard to learn. That is, we can't expect a poly time algorithm that outputs a circuit which implements an approximate probability mass function for the marginal on n/2 nodes. #causality #causalinference #Statistics #epitwitter",https://arxiv.org/abs/2107.11712,"We consider the problem of efficiently inferring interventional distributions in a causal Bayesian network from a finite number of observations. Let $\mathcal{P}$ be a causal model on a set $\mathbf{V}$ of observable variables on a given causal graph $G$. For sets $\mathbf{X},\mathbf{Y}\subseteq \mathbf{V}$, and setting ${\bf x}$ to $\mathbf{X}$, let $P_{\bf x}(\mathbf{Y})$ denote the interventional distribution on $\mathbf{Y}$ with respect to an intervention ${\bf x}$ to variables ${\bf x}$. Shpitser and Pearl (AAAI 2006), building on the work of Tian and Pearl (AAAI 2001), gave an exact characterization of the class of causal graphs for which the interventional distribution $P_{\bf x}({\mathbf{Y}})$ can be uniquely determined. We give the first efficient version of the Shpitser-Pearl algorithm. In particular, under natural assumptions, we give a polynomial-time algorithm that on input a causal graph $G$ on observable variables $\mathbf{V}$, a setting ${\bf x}$ of a set $\mathbf{X} \subseteq \mathbf{V}$ of bounded size, outputs succinct descriptions of both an evaluator and a generator for a distribution $\hat{P}$ that is $\varepsilon$-close (in total variation distance) to $P_{\bf x}({\mathbf{Y}})$ where $Y=\mathbf{V}\setminus \mathbf{X}$, if $P_{\bf x}(\mathbf{Y})$ is identifiable. We also show that when $\mathbf{Y}$ is an arbitrary set, there is no efficient algorithm that outputs an evaluator of a distribution that is $\varepsilon$-close to $P_{\bf x}({\mathbf{Y}})$ unless all problems that have statistical zero-knowledge proofs, including the Graph Isomorphism problem, have efficient randomized algorithms. ",Efficient inference of interventional distributions,5,"['New paper: (Efficient inference of interventional distributions) with Sutanu Gayen, @Saravanan_CU, Vedant Raval, and N.V. Vinodchandran. \n\nTL;DR: Get observational samples, learn interventional distribution, with guarantees for poly # samples and runtime.', 'Continuing a line of work we initiated earlier (https://t.co/ESygJG9Wkl), we give finite sample guarantees for estimating causal effects, given knowledge of a causal graph and access to observational samples. 
All variables are discrete, from a finite-sized alphabet (e.g, binary).', 'Seminal work by @yudapearl and Shpitser 15+ years ago had pinpointed conditions under which interventions are identifiable in this setting. They gave an algorithm which can require an unbounded number of samples.\n\nOur work makes their result algorithmic and quantitative.', ""We also prove a hardness result showing that marginals of Bayes nets (no interventions) are hard to learn. That is, we can't expect a poly time algorithm that outputs a circuit which implements an approximate probability mass function for the marginal on n/2 nodes."", '#causality #causalinference #Statistics #epitwitter']",21,07,1114
82,119,1489295648826564611,883039700,Lenka Zdeborova,"Dear 2nd referee of our ICML 2019 submission. We finally managed to answer your question about the relation between the Saad&Solla analysis of two-layer neural networks and the one referred to as mean-field/hydrodynamic limit. Please see our new paper: With @rodsveiga @_brloureiro Ludovic Stephan, and @KrzakalaF In the figure the axed are scaling exponents of the learning and of the network width with dimension.",https://arxiv.org/abs/2202.00293,"Despite the non-convex optimization landscape, over-parametrized shallow networks are able to achieve global convergence under gradient descent. The picture can be radically different for narrow networks, which tend to get stuck in badly-generalizing local minima. Here we investigate the cross-over between these two regimes in the high-dimensional setting, and in particular investigate the connection between the so-called mean-field/hydrodynamic regime and the seminal approach of Saad & Solla. Focusing on the case of Gaussian data, we study the interplay between the learning rate, the time scale, and the number of hidden units in the high-dimensional dynamics of stochastic gradient descent (SGD). Our work builds on a deterministic description of SGD in high-dimensions from statistical physics, which we extend and for which we provide rigorous convergence rates. ","Phase diagram of Stochastic Gradient Descent in high-dimensional
two-layer neural networks",2,"['Dear 2nd referee of our ICML 2019 submission. We finally managed to answer your question about the relation between the Saad&Solla analysis of two-layer neural networks and the one referred to as mean-field/hydrodynamic limit. Please see our new paper: ', 'With @rodsveiga @_brloureiro Ludovic Stephan, and @KrzakalaF In the figure the axed are scaling exponents of the learning and of the network width with dimension.']",22,02,429
83,97,1405025466763915268,1173723962,Haque Ishfaq,"Check out our new #ICML2021 paper on how to unify the general principle of optimism with Thompson sampling style exploration method for RL. Joint work with Qiwen Cui, @defo_not_gpt3, Alex Ayoub, @zhuoran_yang, @zhaoran_wang, Doina Precup and @lyang36. ",https://arxiv.org/abs/2106.07841,"We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm as well as the optimism principle. Unlike existing upper-confidence-bound (UCB) based approaches, which are often computationally intractable, our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises. To attain optimistic value function estimation without resorting to a UCB-style bonus, we introduce an optimistic reward sampling procedure. When the value functions can be represented by a function class $\mathcal{F}$, our algorithm achieves a worst-case regret bound of $\widetilde{O}(\mathrm{poly}(d_EH)\sqrt{T})$ where $T$ is the time elapsed, $H$ is the planning horizon and $d_E$ is the $\textit{eluder dimension}$ of $\mathcal{F}$. In the linear setting, our algorithm reduces to LSVI-PHE, a variant of RLSVI, that enjoys an $\widetilde{\mathcal{O}}(\sqrt{d^3H^3T})$ regret. We complement the theory with an empirical evaluation across known difficult exploration tasks. ","Randomized Exploration for Reinforcement Learning with General Value
Function Approximation",1,"['Check out our new #ICML2021 paper on how to unify the general principle of optimism with Thompson sampling style exploration method for RL. \n\nJoint work with Qiwen Cui, @defo_not_gpt3, Alex Ayoub, @zhuoran_yang, @zhaoran_wang, Doina Precup and @lyang36. ']",21,06,265
84,101,1438197765280919554,1182194132309012481,Michihiro Yasunaga,"Excited to share our new #EMNLP2021 paper ""LM-Critic: Language Models for Unsupervised Grammatical Error Correction"" with @percyliang @jure @StanfordAILab @StanfordNLP! Paper: Github: Thread below [1/7] A big challenge in learning grammatical error correction (GEC) is to get labeled bad-good sentence pairs. Manual labeling is expensive. Synthetic data (e.g. perturbing good sentences) does not reflect real errors. How can we get cheap yet realistic training data for GEC? [2/7] We develop a new solution to this problem leveraging language models (LMs). Two core insights: 1) LM-Critic: We can get an unsupervised critic that assesses the grammaticality of a sentence by checking if LMs assign it a higher probability than its local perturbations. [3/7] How BIFI works is to jointly train the desired fixer that corrects a bad sentence into a good one and a breaker that corrupts a good sentence into a bad one, while using them in conjunction to generate more realistic parallel data from unlabeled raw text. [5/7] In summary, using pretrained LMs, our method (LM-Critic + BIFI) can obtain usable parallel data from unlabeled data alone! We use the generated parallel data to augment the training of GEC models. This offers substantial performance boost across various GEC benchmarks. [6/7] For more details, please check out our paper at Huge thanks to the collaborators and all who gave us feedback! [7/7]",https://arxiv.org/abs/2109.06822,"Training a model for grammatical error correction (GEC) requires a set of labeled ungrammatical / grammatical sentence pairs, but manually annotating such pairs can be expensive. Recently, the Break-It-Fix-It (BIFI) framework has demonstrated strong results on learning to repair a broken program without any labeled examples, but this relies on a perfect critic (e.g., a compiler) that returns whether an example is valid or not, which does not exist for the GEC task. In this work, we show how to leverage a pretrained language model (LM) in defining an LM-Critic, which judges a sentence to be grammatical if the LM assigns it a higher probability than its local perturbations. We apply this LM-Critic and BIFI along with a large set of unlabeled sentences to bootstrap realistic ungrammatical / grammatical pairs for training a corrector. We evaluate our approach on GEC datasets across multiple domains (CoNLL-2014, BEA-2019, GMEG-wiki and GMEG-yahoo) and show that it outperforms existing methods in both the unsupervised setting (+7.7 F0.5) and the supervised setting (+0.5 F0.5). ",LM-Critic: Language Models for Unsupervised Grammatical Error Correction,6,"['Excited to share our new #EMNLP2021 paper ""LM-Critic: Language Models for Unsupervised Grammatical Error Correction"" with @percyliang @jure @StanfordAILab @StanfordNLP!\n\nPaper: \nGithub: \n\nThread below [1/7] ', 'A big challenge in learning grammatical error correction (GEC) is to get labeled bad-good sentence pairs. Manual labeling is expensive. Synthetic data (e.g. perturbing good sentences) does not reflect real errors. \nHow can we get cheap yet realistic training data for GEC?\n\n[2/7] https://t.co/2l97ok3rXA', 'We develop a new solution to this problem leveraging language models (LMs). 
Two core insights:\n\n1) LM-Critic: We can get an unsupervised critic that assesses the grammaticality of a sentence by checking if LMs assign it a higher probability than its local perturbations.\n\n[3/7] https://t.co/2O0qJ4NEVN', 'How BIFI works is to jointly train the desired fixer that corrects a bad sentence into a good one and a breaker that corrupts a good sentence into a bad one, while using them in conjunction to generate more realistic parallel data from unlabeled raw text.\n[5/7] https://t.co/alhh7dngYV', 'In summary, using pretrained LMs, our method (LM-Critic + BIFI) can obtain usable parallel data from unlabeled data alone!\n\nWe use the generated parallel data to augment the training of GEC models. This offers substantial performance boost across various GEC benchmarks.\n\n[6/7] https://t.co/Y0twpCydPf', 'For more details, please check out our paper at https://t.co/Fvi34s4LuQ\n\nHuge thanks to the collaborators and all who gave us feedback!\n[7/7]']",21,09,1470
85,163,1515776377714356224,1049729676,Shathushan Sivashangaran,"New paper on X-CAR: An Experimental Vehicle Platform for Connected Autonomy Research developed by Virginia Tech ASIM Lab, powered by U.S. Department of Transportation Federal Highway Administration's CARMA platform. It'll appear in IEEE ITS Magazine. ",https://arxiv.org/abs/2204.02559,"Autonomous vehicles promise a future with a safer, cleaner, more efficient, and more reliable transportation system. However, the current approach to autonomy has focused on building small, disparate intelligences that are closed off to the rest of the world. Vehicle connectivity has been proposed as a solution, relying on a vision of the future where a mix of connected autonomous and human-driven vehicles populate the road. Developed by the U.S. Department of Transportation Federal Highway Administration as a reusable, extensible platform for controlling connected autonomous vehicles, the CARMA Platform is one of the technologies enabling this connected future. Nevertheless, the adoption of the CARMA Platform has been slow, with a contributing factor being the limited, expensive, and somewhat old vehicle configurations that are officially supported. To alleviate this problem, we propose X-CAR (eXperimental vehicle platform for Connected Autonomy Research). By implementing the CARMA Platform on more affordable, high quality hardware, X-CAR aims to increase the versatility of the CARMA Platform and facilitate its adoption for research and development of connected driving automation. ","X-CAR: An Experimental Vehicle Platform for Connected Autonomy Research
Powered by CARMA",1,"['New paper on X-CAR: An Experimental Vehicle Platform for Connected Autonomy Research developed by Virginia Tech ASIM Lab, powered by U.S. Department of Transportation Federal Highway Administration's CARMA platform. It'll appear in IEEE ITS Magazine. \n\n ']",22,04,265
86,98,1215192815900037122,713389493076758528,Xavier Bresson,"New paper on graph/sparse Transformer applied to sketch recognition. Interestingly, a standard Transformer applied to sketch points does not work well. But a Transformer on graphs of sketch points perform quite well. paper code A next step would be to generate sketches using graph Transformers instead of RNNs. See the nice blog post of @hardmaru about the sketch generative task. ",https://arxiv.org/abs/1912.11258,"Learning meaningful representations of free-hand sketches remains a challenging task given the signal sparsity and the high-level abstraction of sketches. Existing techniques have focused on exploiting either the static nature of sketches with Convolutional Neural Networks (CNNs) or the temporal sequential property with Recurrent Neural Networks (RNNs). In this work, we propose a new representation of sketches as multiple sparsely connected graphs. We design a novel Graph Neural Network (GNN), the Multi-Graph Transformer (MGT), for learning representations of sketches from multiple graphs which simultaneously capture global and local geometric stroke structures, as well as temporal information. We report extensive numerical experiments on a sketch recognition task to demonstrate the performance of the proposed approach. Particularly, MGT applied on 414k sketches from Google QuickDraw: (i) achieves small recognition gap to the CNN-based performance upper bound (72.80% vs. 74.22%), and (ii) outperforms all RNN-based models by a significant margin. To the best of our knowledge, this is the first work proposing to represent sketches as graphs and apply GNNs for sketch recognition. Code and trained models are available at this https URL ",Multi-Graph Transformer for Free-Hand Sketch Recognition,2,"['New paper on graph/sparse Transformer applied to sketch recognition. \n\nInterestingly, a standard Transformer applied to sketch points does not work well. But a Transformer on graphs of sketch points perform quite well. \n\npaper \ncode ', 'A next step would be to generate sketches using graph Transformers instead of RNNs. See the nice blog post of @hardmaru about the sketch generative task.\nhttps://t.co/9ORZaMgsWi']",19,12,418
87,104,1503641142855938049,3236251346,Mikel Sanz,"New paper today in which we study the limits of cv microwave entanglement distribution in open air. Great #qmics collaboration with WMI, @YasserOmar_ @mpmotton @SalariVahid congrats Tasio for this work! @NquireC @QuantumFlagship @QUANTEK2122 @ehuscientia ",https://arxiv.org/abs/2203.07295,"Microwave technology plays a central role in current wireless communications, standing among them mobile communication and local area networks (LANs). The microwave range shows relevant advantages with respect to other frequencies in open-air transmission, such as low absorption losses and low energy consumption, and it is additionally the natural working frequency in superconducting quantum technologies. Entanglement distribution between separate parties is at the core of secure quantum communications. Therefore, understanding its limitations in realistic open-air settings, specially in the rather unexplored microwave regime, is crucial for transforming microwave quantum communications into a mainstream technology. Here, we investigate the feasibility of an open-air entanglement distribution scheme with microwave two-mode squeezed states. First, we study the reach of direct entanglement transmission in open-air, obtaining a maximum distance of approximately 500 meters in a realistic setting with state-of-the-art experimental parameters. Afterwards, we adapt entanglement distillation and entanglement swapping protocols to microwave technology in order to reduce environmental entanglement degradation. While entanglement distillation helps to increase quantum correlations in the short-distance low-squeezing regime by up to $46\%$, entanglement swapping increases the reach by $14\%$. Then, we compute the fidelity of a continuous-variable quantum teleportation protocol using open-air-distributed entanglement as a resource. Finally, we adapt the machinery to explore the limitations of quantum communication between satellites, where the thermal noise impact is substantially reduced and diffraction losses are dominant. ",Open-Air Microwave Entanglement Distribution for Quantum Teleportation,1,"['New paper today in which we study the limits of cv microwave entanglement distribution in open air. Great #qmics collaboration with WMI, @YasserOmar_ @mpmotton @SalariVahid congrats Tasio for this work! @NquireC @QuantumFlagship @QUANTEK2122 @ehuscientia ']",22,03,268
88,4,1400827497646751747,1400815831018213376,Meng-Hao Guo,Our new paper uses two linear layers to replace self-attention and builds an all-MLP architecture name EAMLP. EAMLP achieved 79.4% accuracy on ImageNet and surpassed Performer in some settings! Results and visualizations are as follows. ArXiv: . ,https://arxiv.org/abs/2105.02358,"Attention mechanisms, especially self-attention, have played an increasingly important role in deep feature representation for visual tasks. Self-attention updates the feature at each position by computing a weighted sum of features using pair-wise affinities across all positions to capture the long-range dependency within a single sample. However, self-attention has quadratic complexity and ignores potential correlation between different samples. This paper proposes a novel attention mechanism which we call external attention, based on two external, small, learnable, shared memories, which can be implemented easily by simply using two cascaded linear layers and two normalization layers; it conveniently replaces self-attention in existing popular architectures. External attention has linear complexity and implicitly considers the correlations between all data samples. We further incorporate the multi-head mechanism into external attention to provide an all-MLP architecture, external attention MLP (EAMLP), for image classification. Extensive experiments on image classification, object detection, semantic segmentation, instance segmentation, image generation, and point cloud analysis reveal that our method provides results comparable or superior to the self-attention mechanism and some of its variants, with much lower computational and memory costs. ","Beyond Self-attention: External Attention using Two Linear Layers for
Visual Tasks",1,['Our new paper uses two linear layers to replace self-attention and builds an all-MLP architecture name EAMLP. EAMLP achieved 79.4% accuracy on ImageNet and surpassed Performer in some settings! Results and visualizations are as follows. \n\nArXiv: . '],21,05,259
89,155,1270893041532846080,2337598033,Geraint F. Lewis,Cool new paper on the @arxiv from @galahsurvey (including @astro_sven @JossBlandHawtho @_sarahmartell_ @FadAstra and more) - where's the lithium??? @TodLauer @arxiv @galahsurvey @astro_sven @JossBlandHawtho @_sarahmartell_ @FadAstra I will pass this to @JossBlandHawtho !,https://arxiv.org/abs/2006.05173,"Lithium depletion and enrichment in the cosmos is not yet well understood. To help tighten constraints on stellar and Galactic evolution models, we present the largest high-resolution analysis of Li abundances A(Li) to date, with results for over 100 000 GALAH field stars spanning effective temperatures $5900\,\mathrm{K} \lesssim \rm{T_{eff}} \lesssim7000\,\mathrm{K}$ and metallicities $-3 \lesssim \rm[Fe/H] \lesssim +0.5$. We separated these stars into two groups, on the warm and cool side of the so-called Li-dip, a localised region of the Kiel diagram wherein lithium is severely depleted. We discovered that stars in these two groups show similar trends in the A(Li)-[Fe/H] plane, but with a roughly constant offset in A(Li) of 0.4 dex, the warm group having higher Li abundances. At $\rm[Fe/H]\gtrsim-0.5$, a significant increasing in Li abundance with increasing metallicity is evident in both groups, signalling the onset of significant Galactic production. At lower metallicity, stars in the cool group sit on the Spite plateau, showing a reduced lithium of around 0.4 dex relative to the primordial value predicted from Big Bang nucleosynthesis (BBN). However, stars in the warm group between [Fe/H] = -1.0 and -0.5, form an elevated plateau that is largely consistent with the BBN prediction. This may indicate that these stars in fact preserve the primordial Li produced in the early Universe. ","The GALAH Survey: A new constraint on cosmological lithium and Galactic
lithium evolution from warm dwarf stars",2,"['Cool new paper on the @arxiv from @galahsurvey (including @astro_sven @JossBlandHawtho @_sarahmartell_ @FadAstra and more) - where's the lithium???\n\n ', '@TodLauer @arxiv @galahsurvey @astro_sven @JossBlandHawtho @_sarahmartell_ @FadAstra I will pass this to @JossBlandHawtho !']",20,06,285
90,33,940252195051659264,21611239,Sean Carroll,"How quantum entanglement can define the geometry of spacetime, and gravity can emerge from the wave function. New paper with Charles Cao. @active_50 @realDonaldTrump I'm sure he has thoughts. @RobJLow Ooh that's good. Totally stealing that. (Will try to remember to attribute!)",https://arxiv.org/abs/1712.02803,"We consider the emergence from quantum entanglement of spacetime geometry in a bulk region. For certain classes of quantum states in an appropriately factorized Hilbert space, a spatial geometry can be defined by associating areas along codimension-one surfaces with the entanglement entropy between either side. We show how Radon transforms can be used to convert this data into a spatial metric. Under a particular set of assumptions, the time evolution of such a state traces out a four-dimensional spacetime geometry, and we argue using a modified version of Jacobson's ""entanglement equilibrium"" that the geometry should obey Einstein's equation in the weak-field limit. We also discuss how entanglement equilibrium is related to a generalization of the Ryu-Takayanagi formula in more general settings, and how quantum error correction can help specify the emergence map between the full quantum-gravity Hilbert space and the semiclassical limit of quantum fields propagating on a classical spacetime. ","Bulk Entanglement Gravity without a Boundary: Towards Finding Einstein's
Equation in Hilbert Space",3,"['How quantum entanglement can define the geometry of spacetime, and gravity can emerge from the wave function. New paper with Charles Cao.\n', '@active_50 @realDonaldTrump I'm sure he has thoughts.', '@RobJLow Ooh that's good. Totally stealing that. (Will try to remember to attribute!)']",17,12,284
91,143,1494361834932953103,2701532126,Berivan Isik,"Check out our new paper titled 'Learning under Storage and Privacy Constraints'. We propose a novel data pre-processing framework, LCoN, which simultaneously boosts data efficiency, privacy, accuracy, and robustness. 1/4 #compression #privacy #learning Our framework comprises noise injection followed by lossy compression. The noise injection step prevents user information from being leaked during learning, while lossy compression reduces the cost of storing/transmitting the data. 2/4 We show that, when appropriately matching the lossy compression to the distribution of the added noise, the compressed examples converge, in distribution, to that of the noise-free training data. 3/4 With this, we guarantee that the utility of the data for learning is essentially maintained while reducing storage and privacy leakage by quantifiable amounts. The improved robustness against adversarial data is a welcome additional feature we observed empirically. 4/4",https://arxiv.org/abs/2202.02892,"Storage-efficient privacy-guaranteed learning is crucial due to enormous amounts of sensitive user data required for increasingly many learning tasks. We propose a framework for reducing the storage cost while at the same time providing privacy guarantees, without essential loss in the utility of the data for learning. Our method comprises noise injection followed by lossy compression. We show that, when appropriately matching the lossy compression to the distribution of the added noise, the compressed examples converge, in distribution, to that of the noise-free training data. In this sense, the utility of the data for learning is essentially maintained, while reducing storage and privacy leakage by quantifiable amounts. We present experimental results on the CelebA dataset for gender classification and find that our suggested pipeline delivers in practice on the promise of the theory: the individuals in the images are unrecognizable (or less recognizable, depending on the noise level), overall storage of the data is substantially reduced, with no essential loss of the classification accuracy. As an added bonus, our experiments suggest that our method yields a substantial boost to robustness in the face of adversarial test data. ",Learning under Storage and Privacy Constraints,4,"['Check out our new paper titled 'Learning under Storage and Privacy Constraints'. We propose a novel data pre-processing framework, LCoN, which simultaneously boosts data efficiency, privacy, accuracy, and robustness. 1/4\n\n\n\n#compression #privacy #learning ', 'Our framework comprises noise injection followed by lossy compression. The noise injection step prevents user information from being leaked during learning, while lossy compression reduces the cost of storing/transmitting the data. 2/4', 'We show that, when appropriately matching the lossy compression to the distribution of the added noise, the compressed examples converge, in distribution, to that of the noise-free training data. 3/4', 'With this, we guarantee that the utility of the data for learning is essentially maintained while reducing storage and privacy leakage by quantifiable amounts. The improved robustness against adversarial data is a welcome additional feature we observed empirically. 4/4']",22,02,972
92,188,1323906108134629377,15989147,Tadashi Okoshi,"""NationalMood: Large-scale Estimation of People's Mood from Web Search Query and Mobile Sensor Data"" on arxiv: - We propose a novel way of #mood estimation based on a combinational use of user's web search queries and mobile sensor data. #ubicomp #www ",https://arxiv.org/abs/2011.00665,"The ability to estimate current affective statuses of web users has considerable potential towards the realization of user-centric opportune services. However, determining the type of data to be used for such estimation as well as collecting the ground truth of such affective statuses are difficult in the real world situation. We propose a novel way of such estimation based on a combinational use of user's web search queries and mobile sensor data. Our large-scale data analysis with about 11,000,000 users and 100 recent advertisement log revealed (1) the existence of certain class of advertisement to which mood-status-based delivery would be significantly effective, (2) that our ""National Mood Score"" shows the ups and downs of people's moods in COVID-19 pandemic that inversely correlated to the number of patients, as well as the weekly mood rhythm of people. ","NationalMood: Large-scale Estimation of People's Mood from Web Search
Query and Mobile Sensor Data",1,"['""NationalMood: Large-scale Estimation of People\'s Mood from Web Search Query and Mobile Sensor Data"" on arxiv: \n- We propose a novel way of #mood estimation based on a combinational use of user\'s web search queries and mobile sensor data. \n#ubicomp #www ']",20,11,265
93,1,1293129396618956800,202211003,Alan Winfield,"New paper with @KatieJWinkle now on @arxiv RoboTed: a case study in Ethical Risk Assessment, accepted for #ICRES2020 @djokeller_LP @KatieJWinkle @arxiv Thank you:) @jgarforth @KatieJWinkle @arxiv Thanks James:) @qeios @KatieJWinkle @arxiv Thank you! @djokeller_LP @KatieJWinkle @arxiv The starting point was British Standard BS8611, but that doesn't have a complete taxonomy of ethical risks for social robots so there was additional brainstorming.",https://arxiv.org/abs/2007.15864,"Risk Assessment is a well known and powerful method for discovering and mitigating risks, and hence improving safety. Ethical Risk Assessment uses the same approach but extends the envelope of risk to cover ethical risks in addition to safety risks. In this paper we outline Ethical Risk Assessment (ERA) and set ERA within the broader framework of Responsible Robotics. We then illustrate ERA with a case study of a hypothetical smart robot toy teddy bear: RoboTed. The case study shows the value of ERA and how consideration of ethical risks can prompt design changes, resulting in a more ethical and sustainable robot. ",RoboTed: a case study in Ethical Risk Assessment,5,"['New paper with @KatieJWinkle now on @arxiv RoboTed: a case study in Ethical Risk Assessment, accepted for #ICRES2020 ', '@djokeller_LP @KatieJWinkle @arxiv Thank you:)', '@jgarforth @KatieJWinkle @arxiv Thanks James:)', '@qeios @KatieJWinkle @arxiv Thank you!', ""@djokeller_LP @KatieJWinkle @arxiv The starting point was British Standard BS8611, but that doesn't have a complete taxonomy of ethical risks for social robots so there was additional brainstorming.""]",20,07,455
94,106,1424922599675494401,2297684784,Yang You,Our new paper: ONES automatically manages the elasticity of each AI job based on the workload to maximize GPU utilization and improve scheduling efficiency. Experiments on 64 GPUs show great results. This paper will appear on @Supercomputing #SC21 () ,https://arxiv.org/abs/2108.03645,"Efficient GPU resource scheduling is essential to maximize resource utilization and save training costs for the increasing amount of deep learning workloads in shared GPU clusters. Existing GPU schedulers largely rely on static policies to leverage the performance characteristics of deep learning jobs. However, they can hardly reach optimal efficiency due to the lack of elasticity. To address the problem, we propose ONES, an ONline Evolutionary Scheduler for elastic batch size orchestration. ONES automatically manages the elasticity of each job based on the training batch size, so as to maximize GPU utilization and improve scheduling efficiency. It determines the batch size for each job through an online evolutionary search that can continuously optimize the scheduling decisions. We evaluate the effectiveness of ONES with 64 GPUs on TACC's Longhorn supercomputers. The results show that ONES can outperform the prior deep learning schedulers with a significantly shorter average job completion time. ","Online Evolutionary Batch Size Orchestration for Scheduling Deep
Learning Workloads in GPU Clusters",1,['Our new paper: ONES automatically manages the elasticity of each AI job based on the workload to maximize GPU utilization and improve scheduling efficiency. Experiments on 64 GPUs show great results. This paper will appear on @Supercomputing #SC21 () '],21,08,263
95,258,1402776533387776000,2724167859,Stephen McAleer,"We often choose to delegate our decisions to algorithms. What should a central mediator do when multiple people choose to delegate their actions to the same mediator? In our recent paper we propose a mediator which Pareto-improves delegating agents. In the prisoner's dilemma, our Pareto Mediator creates a new game where both players delegating is a strong Nash equilibrium. When both players delegate, the mediator has them cooperate. To see what this would look like on a restaurant-reservation platform, if delegating agents choose restaurants that are full, the mediator will simply reserve spots for them at open restaurants. In sequential social dilemmas, the mediator will simply play a cooperative policy for both players if they choose to delegate. Here's some more results on some random normal form games. Joint work with @JB_Lanier, @MichaelD1729, Pierre Baldi, and @roydfox",https://arxiv.org/abs/2106.03927,"Machine learning algorithms often make decisions on behalf of agents with varied and sometimes conflicting interests. In domains where agents can choose to take their own action or delegate their action to a central mediator, an open question is how mediators should take actions on behalf of delegating agents. The main existing approach uses delegating agents to punish non-delegating agents in an attempt to get all agents to delegate, which tends to be costly for all. We introduce a Pareto Mediator which aims to improve outcomes for delegating agents without making any of them worse off. Our experiments in random normal form games, a restaurant recommendation game, and a reinforcement learning sequential social dilemma show that the Pareto Mediator greatly increases social welfare. Also, even when the Pareto Mediator is based on an incorrect model of agent utility, performance gracefully degrades to the pre-intervention level, due to the individual autonomy preserved by the voluntary mediator. ",Improving Social Welfare While Preserving Autonomy via a Pareto Mediator,6,"['We often choose to delegate our decisions to algorithms. What should a central mediator do when multiple people choose to delegate their actions to the same mediator? In our recent paper we propose a mediator which Pareto-improves delegating agents.\n', ""In the prisoner's dilemma, our Pareto Mediator creates a new game where both players delegating is a strong Nash equilibrium. When both players delegate, the mediator has them cooperate. https://t.co/nDnW3qmk7Y"", 'To see what this would look like on a restaurant-reservation platform, if delegating agents choose restaurants that are full, the mediator will simply reserve spots for them at open restaurants. https://t.co/RFV7JoO2iI', 'In sequential social dilemmas, the mediator will simply play a cooperative policy for both players if they choose to delegate. https://t.co/hsZjKyOTTp', ""Here's some more results on some random normal form games. https://t.co/2bhzFYnfGT"", 'Joint work with @JB_Lanier, @MichaelD1729, Pierre Baldi, and @roydfox']",21,06,922
96,66,1470764141300355079,601568012,Dr. Michelle Ntampaka,"We have a new paper out today! tl;dr: if we're careful about how we engineer our deep learning architectures, we can build models that are inherently more interpretable and trustworthy. We explore an ML approach to infer cosmology from cluster mock observations. Figure 5 shows our architecture. It mimics a human approach to the same problem, which makes our model inherently more interpretable! We spend most of the paper interpreting. We show that ML can be used to make a new discovery, rather than just being a tool to black-box your way from point A to point B. Our model pointed us to a new self-calibration mode for x-ray surveys of galaxy clusters, in S 5.2.1 This is my favorite research thread I've ever pursued, and as a result, the paper long, both in time invested and ink spilt. Good collaborators are hard to find, and it was a tremendous pleasure to work with @AlexeyVikhlinin on this research!",https://arxiv.org/abs/2112.05768,"We present a deep machine learning (ML) approach to constraining cosmological parameters with multi-wavelength observations of galaxy clusters. The ML approach has two components: an encoder that builds a compressed representation of each galaxy cluster and a flexible CNN to estimate the cosmological model from a cluster sample. It is trained and tested on simulated cluster catalogs built from the Magneticum simulations. From the simulated catalogs, the ML method estimates the amplitude of matter fluctuations, sigma_8, at approximately the expected theoretical limit. More importantly, the deep ML approach can be interpreted. We lay out three schemes for interpreting the ML technique: a leave-one-out method for assessing cluster importance, an average saliency for evaluating feature importance, and correlations in the terse layer for understanding whether an ML technique can be safely applied to observational data. These interpretation schemes led to the discovery of a previously unknown self-calibration mode for flux- and volume-limited cluster surveys. We describe this new mode, which uses the amplitude and peak of the cluster mass PDF as anchors for mass calibration. We introduce the term ""overspecialized"" to describe a common pitfall in astronomical applications of machine learning in which the ML method learns simulation-specific details, and we show how a carefully constructed architecture can be used to check for this source of systematic error. ","The Importance of Being Interpretable: Toward An Understandable Machine
Learning Encoder for Galaxy Cluster Cosmology",4,"['We have a new paper out today! tl;dr: if we're careful about how we engineer our deep learning architectures, we can build models that are inherently more interpretable and trustworthy. \n\n', 'We explore an ML approach to infer cosmology from cluster mock observations. Figure 5 shows our architecture. It mimics a human approach to the same problem, which makes our model inherently more interpretable! https://t.co/lB0x0OfkA8', 'We spend most of the paper interpreting. We show that ML can be used to make a new discovery, rather than just being a tool to black-box your way from point A to point B. Our model pointed us to a new self-calibration mode for x-ray surveys of galaxy clusters, in S 5.2.1', ""This is my favorite research thread I've ever pursued, and as a result, the paper long, both in time invested and ink spilt. Good collaborators are hard to find, and it was a tremendous pleasure to work with @AlexeyVikhlinin on this research!""]",21,12,926
97,42,1339979635359182850,2423179856,Edward Raff,"Used MalConv and annoyed with the huge training cost? Looked at it with disdain for being unable to learn feature interactions? With @willcfleshman @rjzak @drhyrum & @filar we have what you need: A new @RealAAAI #AAAI2021 paper MalConv up to 25x faster and over 100x+ more memory efficient with a new chunked approach to temporal max pooling. Max pooling's gradient is sparse, but no one ever exploits it because their problems are too small. With this we can now train on files with T=100,000,000+ steps! We want feature interactions, but 100M+ steps is still too much for transformers. So we develop a new attention based gating mechanism ""Global Channel Gating"" (GCG) that allows learning interactions over 100M+ steps! Using GCG we have MalConv2, which has a feature extraction and context extraction sub-networks, which interact through GCG to modulate parts of the input based on other content. All of this work isn't getting us to domain knowledge levels yet, but a nice improvement work learning from raw bytes! We are also circumventing a trivial attack, we can process the _entire_ file even if its hundreds of MB. No trivial ""just append to the end"" evasion. As mentioned, also way fafster! We are processing more data in less time and less RAM! The huge reductions is what allows us to re-invest those savings into MalConv2.0 Impossible without @willcfleshman in particular, we can also look at how the GCG attention impacts what is/is-not used to make decisions. The results are pretty good, and seems to learn kind of intuitive logic an analyst might use. Hard to scale these kinds of studies up though. Happy to get this work out there, and would not have been possible without so many people. Especially all the wonderful people who have used and attacked MalConv! Hearing about the successes and troubles in using it drove a lot of the thought behind this work. If MalConv2 is useful in anyway to you, please drop a line and let us know! Especially with this pandemic thing, my normal feedback network of conferences doesn't work as well as normal",https://arxiv.org/abs/2012.09390,"Recent works within machine learning have been tackling inputs of ever-increasing size, with cybersecurity presenting sequence classification problems of particularly extreme lengths. In the case of Windows executable malware detection, inputs may exceed $100$ MB, which corresponds to a time series with $T=100,000,000$ steps. To date, the closest approach to handling such a task is MalConv, a convolutional neural network capable of processing up to $T=2,000,000$ steps. The $\mathcal{O}(T)$ memory of CNNs has prevented further application of CNNs to malware. In this work, we develop a new approach to temporal max pooling that makes the required memory invariant to the sequence length $T$. This makes MalConv $116\times$ more memory efficient, and up to $25.8\times$ faster to train on its original dataset, while removing the input length restrictions to MalConv. We re-invest these gains into improving the MalConv architecture by developing a new Global Channel Gating design, giving us an attention mechanism capable of learning feature interactions across 100 million time steps in an efficient manner, a capability lacked by the original MalConv CNN. Our implementation can be found at this https URL ","Classifying Sequences of Extreme Length with Constant Memory Applied to
Malware Detection",9,"['Used MalConv and annoyed with the huge training cost? Looked at it with disdain for being unable to learn feature interactions? With @willcfleshman @rjzak @drhyrum & @filar we have what you need: A new @RealAAAI #AAAI2021 paper\n ', ""MalConv up to 25x faster and over 100x+ more memory efficient with a new chunked approach to temporal max pooling. Max pooling's gradient is sparse, but no one ever exploits it because their problems are too small. With this we can now train on files with T=100,000,000+ steps! https://t.co/FFtxMs0i8W"", 'We want feature interactions, but 100M+ steps is still too much for transformers. So we develop a new attention based gating mechanism ""Global Channel Gating"" (GCG) that allows learning interactions over 100M+ steps! https://t.co/NXkJxQFOy8', 'Using GCG we have MalConv2, which has a feature extraction and context extraction sub-networks, which interact through GCG to modulate parts of the input based on other content. https://t.co/OCMzlyeqyh', 'All of this work isn\'t getting us to domain knowledge levels yet, but a nice improvement work learning from raw bytes! We are also circumventing a trivial attack, we can process the _entire_ file even if its hundreds of MB. No trivial ""just append to the end"" evasion. https://t.co/8Zcsw4kBKC', 'As mentioned, also way fafster! We are processing more data in less time and less RAM! The huge reductions is what allows us to re-invest those savings into MalConv2.0 https://t.co/RxL6pio2qc', 'Impossible without @willcfleshman in particular, we can also look at how the GCG attention impacts what is/is-not used to make decisions. The results are pretty good, and seems to learn kind of intuitive logic an analyst might use. Hard to scale these kinds of studies up though. https://t.co/5AnA6MBbuY', 'Happy to get this work out there, and would not have been possible without so many people. Especially all the wonderful people who have used and attacked MalConv! Hearing about the successes and troubles in using it drove a lot of the thought behind this work.', ""If MalConv2 is useful in anyway to you, please drop a line and let us know! Especially with this pandemic thing, my normal feedback network of conferences doesn't work as well as normal""]",20,12,2133
98,2,1161260162100936704,3393842567,"Shane Ross, PhD",Cylinder transit boundaries close to become ellipsoids when friction is present. New paper summarizes geometry of transition & escape across saddles in physical systems of many degrees of freedom w/ dissipation #AppliedMath #DynamicalSystems #TubeDynamics ,https://arxiv.org/abs/1907.10728,"Escape from a potential well can occur in different physical systems, such as capsize of ships, resonance transitions in celestial mechanics, and dynamic snap-through of arches and shells, as well as molecular reconfigurations in chemical reactions. The criteria and routes of escape in one-degree of freedom systems has been well studied theoretically with reasonable agreement with experiment. The trajectory can only transit from the hilltop of the one-dimensional potential energy surface. The situation becomes more complicated when the system has higher degrees of freedom since it has multiple routes to escape through an equilibrium of saddle-type, specifically, an index-1 saddle. This paper summarizes the geometry of escape across a saddle in some widely known physical systems with two degrees of freedom and establishes the criteria of escape providing both a methodology and results under the conceptual framework known as tube dynamics. These problems are classified into two categories based on whether the saddle projection and focus projection in the symplectic eigenspace are coupled or not when damping and/or gyroscopic effects are considered. To simplify the process, only the linearized system around the saddle points are analyzed. We define a transition region, $\mathcal{T}_h$, as the region of initial conditions of a given initial energy $h$ which transit from one side of a saddle to the other. We find that in conservative systems, the boundary of the transition region, $\partial \mathcal{T}_h$, is a cylinder, while in dissipative systems, $\partial \mathcal{T}_h$ is an ellipsoid. ","Geometry of escape and transition dynamics in the presence of
dissipative and gyroscopic forces in two degree of freedom systems",1,['Cylinder transit boundaries close to become ellipsoids when friction is present. New paper summarizes geometry of transition & escape across saddles in physical systems of many degrees of freedom w/ dissipation \n\n#AppliedMath #DynamicalSystems #TubeDynamics '],19,07,269
99,35,1264753233072816130,408932801,Florian Richoux,"New #GameAI paper about decision-making under uncertainty in the RTS game #microRTS, with probability distributions changing on-the-fly according to the opponent strategy. As always, source code is available on GitHub. In the paper's conclusion, I propose a new track for the #microRTS AI competition: the chaotic track, where rules change at each game, and eventually even during a game! That could interest @santiontanon :) My bot microPhantom will participate in the #microRTS AI competition partial observability track this year. Our previous bot POAdaptive won the 2018 and 2019 PO track and microPhantom is clearly stronger than POAdaptive. Looking forward to seeing the competition results! @santiontanon I mostly aim hardcoded bots. :p (I wrote about that in the introduction)",https://arxiv.org/abs/2005.11019,"This competition paper presents microPhantom, a bot playing microRTS and participating in the 2020 microRTS AI competition. microPhantom is based on our previous bot POAdaptive which won the partially observable track of the 2018 and 2019 microRTS AI competitions. In this paper, we focus on decision-making under uncertainty, by tackling the Unit Production Problem with a method based on a combination of Constraint Programming and decision theory. We show that using our method to decide which units to train improves significantly the win rate against the second-best microRTS bot from the partially observable track. We also show that our method is resilient in chaotic environments, with a very small loss of efficiency only. To allow replicability and to facilitate further research, the source code of microPhantom is available, as well as the Constraint Programming toolkit it uses. ",microPhantom: Playing microRTS under uncertainty and chaos,5,"['New #GameAI paper about decision-making under uncertainty in the RTS game #microRTS, with probability distributions changing on-the-fly according to the opponent strategy.\n', 'As always, source code is available on GitHub.\nhttps://t.co/fGckqbin9Y', ""In the paper's conclusion, I propose a new track for the #microRTS AI competition: the chaotic track, where rules change at each game, and eventually even during a game! That could interest @santiontanon :)"", 'My bot microPhantom will participate in the #microRTS AI competition partial observability track this year.\n\nOur previous bot POAdaptive won the 2018 and 2019 PO track and microPhantom is clearly stronger than POAdaptive. Looking forward to seeing the competition results!', '@santiontanon I mostly aim hardcoded bots. :p (I wrote about that in the introduction)']",20,05,798
100,99,1358976863754801153,2377407248,Daniel Whiteson,"New paper! ""Feasibility of Correlated Extensive Air Shower Detection with a Distributed Cosmic Ray Network"" With Eric Albin. This paper asks: is it possible that there are cosmic ray signal SO BIG that they cover the Earth? We found a mechanism, the GZ effect, which would shatter cosmic rays in half before they reach the Earth, creating two impact sites that are correlated in time. Current cosmic ray observatories are awesome, but not big enough to see Earth-sized effects like this. So we wondered if one could see them using a cosmic-ray telescope made out of smartphones, based on this idea: Turns out you can! It's a bit tricky, and you need something more common than vanilla GZ effect, but there's a laundry list of exotic theories that would generate such effects. Right now, we're just not looking for it.",https://arxiv.org/abs/2102.03466,"We explore the sensitivity offered by a global network of cosmic ray detectors to a novel, unobserved phenomena: widely separated simultaneous extended air showers. Existing localized observatories work independently to observe individual showers, offering insight into the source and nature of ultra-high energy cosmic rays. However no current observatory is large enough to provide sensitivity to anticipated processes such as the GZ effect or potential new physics that generate simultaneous air showers separated by hundreds to thousands of kilometers. A global network of consumer electronics (the CRAYFIS experiment), may provide a novel opportunity for observation of such phenomena. Two user scenarios are explored. In the first, with maximal user adoption, we find that statistically significant discoveries of spatially-separated but coincident showers are possible within a couple years. In the second, more practical adoption model with $10^6$ active devices, we find a worldwide CRAYFIS to be sensitive to novel ""burst"" phenomena where many simultaneous EASs occur at once. ","Feasibility of Correlated Extensive Air Shower Detection with a
Distributed Cosmic Ray Network",6,"['New paper!\n\n""Feasibility of Correlated Extensive Air Shower Detection with a Distributed Cosmic Ray Network""\n\n\nWith Eric Albin.\n\nThis paper asks: is it possible that there are cosmic ray signal SO BIG that they cover the Earth?', 'We found a mechanism, the GZ effect, which would shatter cosmic rays in half before they reach the Earth, creating two impact sites that are correlated in time.', 'Current cosmic ray observatories are awesome, but not big enough to see Earth-sized effects like this.', 'So we wondered if one could see them using a cosmic-ray telescope made out of smartphones, based on this idea: https://t.co/7LDvu0qNNL', ""Turns out you can! It's a bit tricky, and you need something more common than vanilla GZ effect, but there's a laundry list of exotic theories that would generate such effects."", ""Right now, we're just not looking for it.""]",21,02,831
101,93,1319311199188602880,171674815,Mark Marley,"New @PlanetImager paper led by Kim Ward-Duong on the dusty substellar companion orbiting in the debris disk of an F5V star. Are we seeing pollution of the atmosphere by infalling dust? Look how red it is, almost falling off the plot on the right. It is very hard for any of the self consistent forward models---from any group--to get so red. This reminded me a bit of the post-impact Jupiter after the S/L-9 impacts. This brown dwarf is sitting inside of a dusty debris disk. Are we seen a dirty atmosphere? Kim used the approach we had followed in last decade in a paper with Kay Hiranaka and @kellecruz and checked to see if silicate dust could redden the spectrum. Bottom line is that it seems plausible but a lot more work needs to be done. This object is ripe for retrieval studies to see if we can nail down the atmospheric structure. See the paper for more details. @AstroThayne @sciboinkhobbes @PlanetImager You don't know how the pollution might be delivered. Maybe, as in SL/9 through larger objects. All the mass budget issues and dust lifetimes need to be looked at.",https://arxiv.org/abs/2010.10546,"We present new near-infrared Gemini Planet Imager (GPI) spectroscopy of HD 206893 B, a substellar companion orbiting within the debris disk of its F5V star. The $J$, $H$, $K1$, and $K2$ spectra from GPI demonstrate the extraordinarily red colors of the object, confirming it as the reddest substellar object observed to date. The significant flux increase throughout the infrared presents a challenging atmosphere to model with existing grids. Best-fit values vary from 1200 K to 1800 K for effective temperature and from 3.0 to 5.0 for log($g$), depending on which individual wavelength band is fit and which model suite is applied. The extreme redness of the companion can be partially reconciled by invoking a high-altitude layer of sub-micron dust particles, similar to dereddening approaches applied to the peculiar red field L-dwarf population. However, reconciling the HD 206893 B spectra with even those of the reddest low-gravity L-dwarf spectra still requires the contribution of additional atmospheric dust, potentially due to the debris disk environment in which the companion resides. Orbit fitting from four years of astrometric monitoring is consistent with a $\sim$30-year period, orbital inclination of 147$^{\circ}$, and semimajor axis of 10 au, well within the estimated disk inner radius of $\sim$50 au. As one of very few substellar companions imaged interior to a circumstellar disk, the properties of this system offer important dynamical constraints on companion-disk interaction and provide a benchmark for substellar and planetary atmospheric study. ","Gemini Planet Imager Spectroscopy of the Dusty Substellar Companion HD
206893 B",6,"['New @PlanetImager paper led by Kim Ward-Duong on the dusty substellar companion orbiting in the debris disk of an F5V star. Are we seeing pollution of the atmosphere by infalling dust? \n', 'Look how red it is, almost falling off the plot on the right. https://t.co/LR5BYLBXdP', 'It is very hard for any of the self consistent forward models---from any group--to get so red. This reminded me a bit of the post-impact Jupiter after the S/L-9 impacts. https://t.co/WlMzxhqHUj', 'This brown dwarf is sitting inside of a dusty debris disk. Are we seen a dirty atmosphere? Kim used the approach we had followed in last decade in a paper with Kay Hiranaka and @kellecruz and checked to see if silicate dust could redden the spectrum.', 'Bottom line is that it seems plausible but a lot more work needs to be done. This object is ripe for retrieval studies to see if we can nail down the atmospheric structure. See the paper for more details.', ""@AstroThayne @sciboinkhobbes @PlanetImager You don't know how the pollution might be delivered. Maybe, as in SL/9 through larger objects. All the mass budget issues and dust lifetimes need to be looked at.""]",20,10,1099
102,106,1448296299560853504,584142796,Carl Rodriguez,"Extremely cool new paper from @ClaireShiYe on modeling 47 Tuc and all the compact objects (particularly neutron stars and millisecond pulsars in it)! Of particular note: Claire shows that you *can't* reproduce 47 Tuc by starting from an equilibrium King model; you need to start with a shallower profile. In this case, she started with an ""Elson"" profile inspired by observations of young massive clusters in the local universe.",https://arxiv.org/abs/2110.05495,"The globular cluster 47 Tucanae (47 Tuc) is one of the most massive star clusters in the Milky Way and is exceptionally rich in exotic stellar populations. For several decades it has been a favorite target of observers, and yet it is computationally very challenging to model because of its large number of stars ($N\gtrsim 10^6$) and high density. Here we present detailed and self-consistent 47 Tuc models computed with the \texttt{Cluster Monte Carlo} code (\texttt{CMC}). The models include all relevant dynamical interactions coupled to stellar and binary evolution, and reproduce various observations, including the surface brightness and velocity dispersion profiles, pulsar accelerations, and numbers of compact objects. We show that the present properties of 47 Tuc are best reproduced by adopting an initial stellar mass function that is both bottom-heavy and top-light relative to standard assumptions \citep[as in, e.g.,][]{Kroupa2001}, and an initial Elson profile \citep{Elson1987} that is overfilling the cluster's tidal radius. We include new prescriptions in \texttt{CMC} for the formation of binaries through giant star collisions and tidal captures, and we show that these mechanisms play a crucial role in the formation of neutron star binaries and millisecond pulsars in 47 Tuc; our best-fit model contains $\sim 50$ millisecond pulsars, $80\%$ of which are formed through giant collisions and tidal captures. Our models also suggest that 47 Tuc presently contains up to $\sim 200$ stellar-mass black holes, $\sim 5$ binary black holes, $\sim 15$ low-mass X-ray binaries, and $\sim 300$ cataclysmic variables. ",Compact Object Modeling in the Globular Cluster 47 Tucanae,2,"['Extremely cool new paper from @ClaireShiYe on modeling 47 Tuc and all the compact objects (particularly neutron stars and millisecond pulsars in it)! ', 'Of particular note: Claire shows that you *can\'t* reproduce 47 Tuc by starting from an equilibrium King model; you need to start with a shallower profile. In this case, she started with an ""Elson"" profile inspired by observations of young massive clusters in the local universe.']",21,10,435
103,45,974255416875147264,185910194,Graham Neubig,"Posted ""Neural Lattice Language Models"", our new paper (TACL) on LMs that calculate probability of a sentence by marginalizing over a lattice! It's a nice and flexible framework for LMs that lets you consider ambiguity such as word sense, segmentation, etc ",https://arxiv.org/abs/1803.05071,"In this work, we propose a new language modeling paradigm that has the ability to perform both prediction and moderation of information flow at multiple granularities: neural lattice language models. These models construct a lattice of possible paths through a sentence and marginalize across this lattice to calculate sequence probabilities or optimize parameters. This approach allows us to seamlessly incorporate linguistic intuitions - including polysemy and existence of multi-word lexical items - into our language model. Experiments on multiple language modeling tasks show that English neural lattice language models that utilize polysemous embeddings are able to improve perplexity by 9.95% relative to a word-level baseline, and that a Chinese model that handles multi-character tokens is able to improve perplexity by 20.94% relative to a character-level baseline. ",Neural Lattice Language Models,1,"['Posted ""Neural Lattice Language Models"", our new paper (TACL) on LMs that calculate probability of a sentence by marginalizing over a lattice! \nIt\'s a nice and flexible framework for LMs that lets you consider ambiguity such as word sense, segmentation, etc ']",18,03,270
104,17,1034454643139641345,2869101210,Jenn Wortman Vaughan,Uncanny case of great-minds-think-alike: New paper on The Disparate Effects of Strategic Manipulation w/ @uhlily @immorlica We study the social impact of classification in systems marked by inequality+potential for manipulation. Complementary analyses. ,https://arxiv.org/abs/1808.08646,"When consequential decisions are informed by algorithmic input, individuals may feel compelled to alter their behavior in order to gain a system's approval. Models of agent responsiveness, termed ""strategic manipulation,"" analyze the interaction between a learner and agents in a world where all agents are equally able to manipulate their features in an attempt to ""trick"" a published classifier. In cases of real world classification, however, an agent's ability to adapt to an algorithm is not simply a function of her personal interest in receiving a positive classification, but is bound up in a complex web of social factors that affect her ability to pursue certain action responses. In this paper, we adapt models of strategic manipulation to capture dynamics that may arise in a setting of social inequality wherein candidate groups face different costs to manipulation. We find that whenever one group's costs are higher than the other's, the learner's equilibrium strategy exhibits an inequality-reinforcing phenomenon wherein the learner erroneously admits some members of the advantaged group, while erroneously excluding some members of the disadvantaged group. We also consider the effects of interventions in which a learner subsidizes members of the disadvantaged group, lowering their costs in order to improve her own classification performance. Here we encounter a paradoxical result: there exist cases in which providing a subsidy improves only the learner's utility while actually making both candidate groups worse-off--even the group receiving the subsidy. Our results reveal the potentially adverse social ramifications of deploying tools that attempt to evaluate an individual's ""quality"" when agents' capacities to adaptively respond differ. ",The Disparate Effects of Strategic Manipulation,1,['Uncanny case of great-minds-think-alike:\n\nNew paper on The Disparate Effects of Strategic Manipulation w/ @uhlily @immorlica\n \n\nWe study the social impact of classification in systems marked by inequality+potential for manipulation. Complementary analyses. '],18,08,267
105,234,1435350231437045762,1387514922,Irwan Bello,"Wondering how simple 3D-ResNets perform on video recognition given all the recent architecture craze? In Revisiting 3D ResNets for Video Recognition, we study the impact of improved training and scaling methods on 3D ResNets. We present revised 3D-ResNet baselines, termed 3D-ResNet-RS, and show that they are on par with more recent works. This follows a series of works revisiting ‘older’ architectures and showing that they can be competitive with recent alternatives, when scaled and trained properly. Object Detection: Image classification: Work led by @Phyyysalis in collaboration with @yeqing133, @YinCui1, @RuiQian3 and Jing Li.",https://arxiv.org/abs/2109.01696,"A recent work from Bello shows that training and scaling strategies may be more significant than model architectures for visual recognition. This short note studies effective training and scaling strategies for video recognition models. We propose a simple scaling strategy for 3D ResNets, in combination with improved training strategies and minor architectural changes. The resulting models, termed 3D ResNet-RS, attain competitive performance of 81.0 on Kinetics-400 and 83.8 on Kinetics-600 without pre-training. When pre-trained on a large Web Video Text dataset, our best model achieves 83.5 and 84.3 on Kinetics-400 and Kinetics-600. The proposed scaling rule is further evaluated in a self-supervised setup using contrastive learning, demonstrating improved performance. Code is available at: this https URL ",Revisiting 3D ResNets for Video Recognition,4,"['Wondering how simple 3D-ResNets perform on video recognition given all the recent architecture craze?\n\nIn Revisiting 3D ResNets for Video Recognition, we study the impact of improved training and scaling methods on 3D ResNets.\n\n ', 'We present revised 3D-ResNet baselines, termed 3D-ResNet-RS, and show that they are on par with more recent works. https://t.co/Fa1nSl9loe', 'This follows a series of works revisiting ‘older’ architectures and showing that they can be competitive with recent alternatives, when scaled and trained properly.\n\nObject Detection:\nhttps://t.co/VrSbG9RuvM\n\nImage classification:\nhttps://t.co/K7hMTtS0eR', 'Work led by @Phyyysalis in collaboration with @yeqing133, @YinCui1, @RuiQian3 and Jing Li.']",21,09,671
106,34,1408286165858410503,1059281382025977856,Polina Kirichenko,"Excited to share that (1) It is my birthday đ (2) We have a new paper ""Task-agnostic Continual Learning with Hybrid Probabilistic Models"" on arxiv today! We design a hybrid generative-discriminative model based on normalizing flows for continual learning In task-agnostic continual learning, we need to train the model continually on a sequence of tasks without knowing the task boundaries. Turns out, we can use a single hybrid model to (1) detect task changes, (2) avoid forgetting and (3) make predictions! We propose HCL, which uses a single normalizing flow to map the data into a latent space where we model the distribution of each class in each task as a Gaussian. When a new task appears we (1) identify it using the flow and (2) initialize the latent Gaussians for the new task. To make predictions, we use the Bayes rule: we predict the class (and task) for which the input has the highest density. To avoid forgetting, we propose two techniques: - HCL-GR uses generative replay, where we sample data from a snapshot of the model and re-train the model on this data - HCL-FR uses functional regularization, where we force the model to map replay data to the same latent position HCL-FR provides stronger regularization and prevents forgetting better! In the figure: (b) data distribution with squares showing our replay data; grey is the first and orange is the second task; (c) HCL-GR model and (d) HCL-FR model after training on the second task. We compare HCL variants with other generative continual learning models, with promising results. HCL performs well both in task-aware and task-agnostic settings! Finally, HCL enables us to automatically detect new tasks as well as recurring tasks! To do so we can use the normalizing flow likelihood, or an advanced anomaly detection technique such as DoSE (). With a great team of coauthors at @DeepMind @GoogleAI and @NYUDataScience: @MFarajtabar, @drao64, @balajiln, Nir Levine, @__angli, Huiyi Hu, @andrewgwils and @rpascanu! @KevinKaichuang Happy birthday! đ @balajiln Thank you! :) @MFarajtabar Haha thank you Mehrdad! đ",https://arxiv.org/abs/2106.12772,"Learning new tasks continuously without forgetting on a constantly changing data distribution is essential for real-world problems but extremely challenging for modern deep learning. In this work we propose HCL, a Hybrid generative-discriminative approach to Continual Learning for classification. We model the distribution of each task and each class with a normalizing flow. The flow is used to learn the data distribution, perform classification, identify task changes, and avoid forgetting, all leveraging the invertibility and exact likelihood which are uniquely enabled by the normalizing flow model. We use the generative capabilities of the flow to avoid catastrophic forgetting through generative replay and a novel functional regularization technique. For task identification, we use state-of-the-art anomaly detection techniques based on measuring the typicality of the model's statistics. We demonstrate the strong performance of HCL on a range of continual learning benchmarks such as split-MNIST, split-CIFAR, and SVHN-MNIST. ",Task-agnostic Continual Learning with Hybrid Probabilistic Models,12,"['Excited to share that\n(1) It is my birthday đ\n(2) We have a new paper ""Task-agnostic Continual Learning with Hybrid Probabilistic Models"" on arxiv today! 
We design a hybrid generative-discriminative model based on normalizing flows for continual learning ', 'In task-agnostic continual learning, we need to train the model continually on a sequence of tasks without knowing the task boundaries. Turns out, we can use a single hybrid model to (1) detect task changes, (2) avoid forgetting and (3) make predictions!', 'We propose HCL, which uses a single normalizing flow to map the data into a latent space where we model the distribution of each class in each task as a Gaussian. When a new task appears we (1) identify it using the flow and (2) initialize the latent Gaussians for the new task.', 'To make predictions, we use the Bayes rule: we predict the class (and task) for which the input has the highest density.', 'To avoid forgetting, we propose two techniques: \n- HCL-GR uses generative replay, where we sample data from a snapshot of the model and re-train the model on this data\n- HCL-FR uses functional regularization, where we force the model to map replay data to the same latent position https://t.co/1kIUJYq1NK', 'HCL-FR provides stronger regularization and prevents forgetting better! In the figure: (b) data distribution with squares showing our replay data; grey is the first and orange is the second task; (c) HCL-GR model and (d) HCL-FR model after training on the second task. https://t.co/0veMNcEbpB', 'We compare HCL variants with other generative continual learning models, with promising results. HCL performs well both in task-aware and task-agnostic settings! https://t.co/r77wd7gZHe', 'Finally, HCL enables us to automatically detect new tasks as well as recurring tasks! To do so we can use the normalizing flow likelihood, or an advanced anomaly detection technique such as DoSE (https://t.co/woliqw12Ma).', 'With a great team of coauthors at @DeepMind @GoogleAI and @NYUDataScience:\n\n@MFarajtabar, @drao64, @balajiln, Nir Levine, @__angli, Huiyi Hu, @andrewgwils and @rpascanu!', '@KevinKaichuang Happy birthday! đ', '@balajiln Thank you! :)', '@MFarajtabar Haha thank you Mehrdad! đ']",21,06,2126
107,184,1453527034483941377,764045672,Mayank Agarwal,"Delighted to share our upcoming #NeurIPS2021 paper ""On sensitivity of meta-learning to support data""() We study the sensitivity of meta-learning methods to adaptation data, and show the existence of... (1/2) unaltered, in-distribution, and natural images that, when used for adaptation, yield accuracy as low as 4% or as high as 95% on standard few-shot image classification benchmarks. (2/2) @icepieces Thank you Yas!",https://arxiv.org/abs/2110.13953,"Meta-learning algorithms are widely used for few-shot learning. For example, image recognition systems that readily adapt to unseen classes after seeing only a few labeled examples. Despite their success, we show that modern meta-learning algorithms are extremely sensitive to the data used for adaptation, i.e. support data. In particular, we demonstrate the existence of (unaltered, in-distribution, natural) images that, when used for adaptation, yield accuracy as low as 4\% or as high as 95\% on standard few-shot image classification benchmarks. We explain our empirical findings in terms of class margins, which in turn suggests that robust and safe meta-learning requires larger margins than supervised learning. ",On sensitivity of meta-learning to support data,3,"['Delighted to share our upcoming #NeurIPS2021 paper ""On sensitivity of meta-learning to support data""()\n\nWe study the sensitivity of meta-learning methods to adaptation data, and show the existence of... (1/2) ', 'unaltered, in-distribution, and natural images that, when used for adaptation, yield accuracy as low as 4% or as high as 95% on standard few-shot image classification benchmarks. (2/2)', '@icepieces Thank you Yas!']",21,10,431
108,28,932557279454412800,794346991627010048,Lauren Oakden-Rayner (Dr.Dr. đ„ł),"Our new paper: ""Detecting hip fractures with radiologist-level performance using deep neural networks"". 55,000 hip x-rays, big improvement over SOTA, AUC 0.994 (!). Huge team effort, especially from my exceptional co-author William Gale (an undergrad!) @AndrewLBeam We discussed this a lot. In the end we decided to pre-print. Many good journals allow pre-prints, just no media. Also, what we do submit will have completely different experiments. Hopefully it doesn't bite us :) @AndrewLBeam Yeah, it seems to be a medical journal thing. There are other 'big' journals :) @DrDeclanORegan @DrHughHarvey I'm uncomfortable with cherry picked images in papers which is why there aren't any. The stats speak loudly IMO. But I'm happy to post some up tomorrow, the system does identify really subtle fractures. One that it got right took two CTs and an MRI before dx. @alexattia Not yet, we are still working on the project. The dataset is not public right now either, so any attempt to ""re-implement"" is unfortunately doomed. The network itself is just a DenseNet, there are many public implementations. @alexattia We haven't, building a dataset for each new task is a huge effort. I see no reason why it wouldn't be achievable with good data, but that can be hard to achieve with other fractures (patients can be much more easily missed). I do know @enlitic did some work on wrist fractures. @alexattia @enlitic In our next experiments we will do some ablation. It depends what we mean by ""reasonable performance"". I think 95 AUC is totally achievable with less data, because the majority of fractures are obvious. Learning to deal with the subtle ones is much harder @DrDeclanORegan @DrHughHarvey Here is an example of the minimally displaced ones that it gets right. In the next paper we will include lots of images (it will be more justified with our experiments). ",https://arxiv.org/abs/1711.06504v1,"We developed an automated deep learning system to detect hip fractures from frontal pelvic x-rays, an important and common radiological task. Our system was trained on a decade of clinical x-rays (~53,000 studies) and can be applied to clinical data, automatically excluding inappropriate and technically unsatisfactory studies. We demonstrate diagnostic performance equivalent to a human radiologist and an area under the ROC curve of 0.994. Translated to clinical practice, such a system has the potential to increase the efficiency of diagnosis, reduce the need for expensive additional testing, expand access to expert level medical image interpretation, and improve overall patient outcomes. ","] Detecting hip fractures with radiologist-level performance using deep
neural networks",8,"['Our new paper: ""Detecting hip fractures with radiologist-level performance using deep neural networks"". \n\n55,000 hip x-rays, big improvement over SOTA, AUC 0.994 (!). Huge team effort, especially from my exceptional co-author William Gale (an undergrad!)\n\n ', ""@AndrewLBeam We discussed this a lot. In the end we decided to pre-print. Many good journals allow pre-prints, just no media. Also, what we do submit will have completely different experiments. Hopefully it doesn't bite us :)"", ""@AndrewLBeam Yeah, it seems to be a medical journal thing. There are other 'big' journals :)"", ""@DrDeclanORegan @DrHughHarvey I'm uncomfortable with cherry picked images in papers which is why there aren't any. The stats speak loudly IMO. But I'm happy to post some up tomorrow, the system does identify really subtle fractures. One that it got right took two CTs and an MRI before dx."", '@alexattia Not yet, we are still working on the project. The dataset is not public right now either, so any attempt to ""re-implement"" is unfortunately doomed. The network itself is just a DenseNet, there are many public implementations.', ""@alexattia We haven't, building a dataset for each new task is a huge effort. I see no reason why it wouldn't be achievable with good data, but that can be hard to achieve with other fractures (patients can be much more easily missed). I do know @enlitic did some work on wrist fractures."", '@alexattia @enlitic In our next experiments we will do some ablation. It depends what we mean by ""reasonable performance"". I think 95 AUC is totally achievable with less data, because the majority of fractures are obvious. Learning to deal with the subtle ones is much harder', '@DrDeclanORegan @DrHughHarvey Here is an example of the minimally displaced ones that it gets right. In the next paper we will include lots of images (it will be more justified with our experiments). https://t.co/MQLo4Cvp4X']",17,11,1885
109,81,1048200598330511360,436029997,Noah Haber,"New on arXiv: This paper identifies a policy-relevant causal effect, BUT 1) Controls for almost nothing in the main spec 2) Uses only basic regression methods 3) Has no meaningful effect size estimate 4) Is not a natural experiment Curiosity stoked yet? No? How about these: 5) Uses a threshold for effect estimation, BUT 6) Does not measure the effect across the threshold (i.e. is not regression discontinuity) 7) Uses p-values because in this setting they are more sensible than CIs",https://arxiv.org/abs/1810.01971,"South Africa's disability grants program is tied to its HIV/AIDS recovery program, such that individuals who are ill enough may qualify. Qualification is historically tied to a CD4 count of 200 cells/mm3, which improve when a person adheres to antiretroviral therapy. This creates a potential unintended consequence where poor individuals, faced with potential loss of their income, may choose to limit their recovery through non-adherence. To test for manipulation caused by grant rules, we identify differences in disability grant recipients and non-recipients' rate of CD4 recovery around the qualification threshold, implemented as a fixed-effects difference-in-difference around the threshold. We use data from the Africa Health Research Institute Demographic and Health Surveillance System (AHRI DSS) in rural KwaZulu-Natal, South Africa, utilizing DG status and laboratory CD4 count records for 8,497 individuals to test whether there are any systematic differences in CD4 recover rates among eligible patients. We find that disability grant threshold rules caused recipients to have a relatively slower CD4 recovery rate of about 20-30 cells/mm3/year, or a 20% reduction in the speed of recovery around the threshold. ","Disability for HIV and Disincentives for Health: The Impact of South
Africa's Disability Grant on HIV/AIDS Recovery",2,"['New on arXiv: This paper identifies a policy-relevant causal effect, BUT\n1) Controls for almost nothing in the main spec\n2) Uses only basic regression methods\n3) Has no meaningful effect size estimate\n4) Is not a natural experiment\nCuriosity stoked yet?\n', 'No? How about these:\n5) Uses a threshold for effect estimation, BUT\n6) Does not measure the effect across the threshold (i.e. is not regression discontinuity)\n7) Uses p-values because in this setting they are more sensible than CIs']",18,10,492
110,160,1380322125903360002,1115946897431244800,Darsh J Shah,"What do we do when sources of information aren't in full agreement? Check out our new paper -> Nutribullets Hybrid . We generate summaries of multiple documents which have varying degrees of consensus. @BarzilayRegina @taolei15949106 @liliyu_lili NAACL 2021, Camera Ready!",https://arxiv.org/abs/2104.03465,"We present a method for generating comparative summaries that highlights similarities and contradictions in input documents. The key challenge in creating such summaries is the lack of large parallel training data required for training typical summarization systems. To this end, we introduce a hybrid generation approach inspired by traditional concept-to-text systems. To enable accurate comparison between different sources, the model first learns to extract pertinent relations from input documents. The content planning component uses deterministic operators to aggregate these relations after identifying a subset for inclusion into a summary. The surface realization component lexicalizes this information using a text-infilling language model. By separately modeling content selection and realization, we can effectively train them with limited annotations. We implemented and tested the model in the domain of nutrition and health -- rife with inconsistencies. Compared to conventional methods, our framework leads to more faithful, relevant and aggregation-sensitive summarization -- while being equally fluent. ",Nutribullets Hybrid: Multi-document Health Summarization,2,"[""What do we do when sources of information aren't in full agreement? Check out our new paper -> Nutribullets Hybrid .\nWe generate summaries of multiple documents which have varying degrees of consensus. @BarzilayRegina @taolei15949106 @liliyu_lili "", 'NAACL 2021, Camera Ready!']",21,04,288
111,37,1255768143596830721,135150782,Michele Ginolfi,"My new paper is out! ""CGM pollution and gas mixing by tidal stripping in a merging system at z~4.57"" Here we show ALMA/HST+ observations of an interesting major merging system at z~4.5 (observed in ALPINE), close to a density peak of a protocluster. 1/ ALMA reveals [CII] arising from an extended structure (~30 kpc) surrounding the system, and about 50% of the total flux resides *between* the individual galaxy components, in a sort of metal-enriched gaseous envelope with a disturbed morphology and complex kinematics. 2/ Similarly to shock-excited [CII] observed in tidal tails in local groups, we interpret our results as a possible signature of ISM stripped by strong gravitational interactions, with some contribution from material ejected by outflows and SF in small faint satellites. 3/ Our findings suggest that strong dynamical interactions in major merging systems at high-z can be an efficient mechanism for extracting gas out of galaxies and mixing the CGM with metals. This might also represent a natural channel to feed and enrich the nascent proto-ICM. 4/4 PS: the paper is submitted to A&A. Any comments are welcome! PPS: stay tuned for future ALPINE papers on the morpho-kinematical characterisation of high-z galaxies.",http://arxiv.org/abs/2004.13737,"We present ALMA observations of a merging system at z ~ 4.57, observed as a part of the ALMA Large Program to INvestigate [CII] at Early times (ALPINE) survey. Combining ALMA [CII] 158 micron and far-infrared continuum data with multi-wavelength ancillary data we find that the system is composed of two massive (Mstar >~ 10^10 Msun) star-forming galaxies experiencing a major merger (stellar mass ratio r_mass ~ 0.9) at close spatial (~13 kpc; projected) and velocity (delta_v < 300 km/s) separations, and two additional faint narrow [CII]-emitting satellites. The overall system belongs to a larger-scale protocluster environment and is coincident to one of its overdensity peaks. ALMA reveals also the presence of [CII] emission arising from a circumgalactic gas structure, extending up to a diameter-scale of ~30 kpc. Our morpho-spectral decomposition analysis shows that about 50% of the total flux resides between the individual galaxy components, in a metal-enriched gaseous envelope characterized by a disturbed morphology and complex kinematics. Similarly to observations of shock-excited [CII] emitted from tidal tails in local groups, our results can be interpreted as a possible signature of interstellar gas stripped by strong gravitational interactions, with a possible contribution from material ejected by galactic outflows and emission triggered by star formation in small faint satellites. Our findings suggest that mergers could be an efficient mechanism of gas mixing in the circumgalactic medium around high-z galaxies, and thus play a key role in the galaxy baryon cycle at early epochs. ","The ALPINE-ALMA [CII] Survey: CGM pollution and gas mixing by tidal
stripping in a merging system at z~4.57",5,"['My new paper is out!\n""CGM pollution and gas mixing by tidal stripping in a merging system at z~4.57"" \nHere we show ALMA/HST+ observations of an interesting major merging system at z~4.5 (observed in ALPINE), close to a density peak of a protocluster.\n1/ ', 'ALMA reveals [CII] arising from an extended structure (~30 kpc) surrounding the system, and about 50% of the total flux resides *between* the individual galaxy components, in a sort of metal-enriched gaseous envelope with a disturbed morphology and complex kinematics. \n2/', 'Similarly to shock-excited [CII] observed in tidal tails in local groups, we interpret our results as a possible signature of ISM stripped by strong gravitational interactions, with some contribution from material ejected by outflows and SF in small faint satellites.\n3/', 'Our findings suggest that strong dynamical interactions in major merging systems at high-z can be an efficient mechanism for extracting gas out of galaxies and mixing the CGM with metals. This might also represent a natural channel to feed and enrich the nascent proto-ICM.\n4/4', 'PS: the paper is submitted to A&A. Any comments are welcome!\n\nPPS: stay tuned for future ALPINE papers on the morpho-kinematical characterisation of high-z galaxies.']",20,04,1252
112,150,1316911104279314432,16677638,Alex Beutel,"New research from a collaboration between the Language Research team and my team on gendered correlations in language models (led by Kellie Webster, Xuezhi Wang, and @iftenney)! Paper: A few exciting results in addition to those in the blog post (1/): The blog post discusses how applying different language models to coreference resolution can give nearly the same accuracy but make assumptions based on gender at very different rates. We see this not just for coref, but also 3 other tasks (including 2 new ones). (2/) From the blog post but worth highlighting again: these significant differences in the usage of gendered correlations can come from *seemingly innocuous* configuration changes, like changing dropout. Among many take-aways, test thoroughly! (3/) We find counterfactual data augmentation (CDA) also can help decrease usage of gendered correlations. Particularly (academically) interesting, we find that CDA generalizes---applying CDA for some names decreases making gendered assumptions for other names! (4/) Further, both of these *pre-training* changes can improve gendered correlations in downstream tasks---we can see the benefits maintained over the course of fine-tuning. (5/) Based on this research we discuss some best practices for language modeling. And more generally I'm excited that this research suggests there is a path for broad benefits in this space from improving underlying pre-trained language models. (/end)",https://arxiv.org/abs/2010.06032,"Pre-trained models have revolutionized natural language understanding. However, researchers have found they can encode artifacts undesired in many applications, such as professions correlating with one gender more than another. We explore such gendered correlations as a case study for how to address unintended correlations in pre-trained models. We define metrics and reveal that it is possible for models with similar accuracy to encode correlations at very different rates. We show how measured correlations can be reduced with general-purpose techniques, and highlight the trade offs different strategies have. With these results, we make recommendations for training robust models: (1) carefully evaluate unintended correlations, (2) be mindful of seemingly innocuous configuration differences, and (3) focus on general mitigations. ",Measuring and Reducing Gendered Correlations in Pre-trained Models,6,"['New research from a collaboration between the Language Research team and my team on gendered correlations in language models (led by Kellie Webster, Xuezhi Wang, and @iftenney)!\n\nPaper: \n\nA few exciting results in addition to those in the blog post (1/): ', 'The blog post discusses how applying different language models to coreference resolution can give nearly the same accuracy but make assumptions based on gender at very different rates. We see this not just for coref, but also 3 other tasks (including 2 new ones). (2/)', 'From the blog post but worth highlighting again: these significant differences in the usage of gendered correlations can come from *seemingly innocuous* configuration changes, like changing dropout. Among many take-aways, test thoroughly! (3/) https://t.co/PG93ascGHG', 'We find counterfactual data augmentation (CDA) also can help decrease usage of gendered correlations. Particularly (academically) interesting, we find that CDA generalizes---applying CDA for some names decreases making gendered assumptions for other names! 
(4/)', 'Further, both of these *pre-training* changes can improve gendered correlations in downstream tasks---we can see the benefits maintained over the course of fine-tuning. (5/) https://t.co/HDt4JS5eyC', ""Based on this research we discuss some best practices for language modeling. And more generally I'm excited that this research suggests there is a path for broad benefits in this space from improving underlying pre-trained language models. (/end)""]",20,10,1476
113,268,1313507579310551041,1152338625654226944,Megan Mansfield,"New paper on the arXiv today! Let's talk about eclipse mapping. #scicomm Before I even start, this was a group effort and it would never have happened without all my lovely colleagues: Everett Schlawin, Jake Lustig-Yaeger, Arthur Adams, @astronemly, Jacob Arcangeli, @cloudfreekat, Prashansa Gupta, Dylan Keating, @kevinbstevenson, Thomas Beatty. Crash course on eclipse mapping: by observing secondary eclipse ingress/egress, when the planet is slowly disappearing/appearing behind the star, we can construct a 2D map of the dayside of a planet. With JWST, we can make ~spectroscopic~ eclipse mapping observations. And, since observing at different wavelengths = observing at different pressures/altitudes in a planet, this means 3D DAYSIDE MAPS But there's degeneracies in the info we get from these maps, since we are actually just observing flux over time but we want to determine flux over latitude and longitude. How to do it? Enter the Eigenspectra Mapping Method So there's a LOT of math in this paper, but the basic idea is we use a method developed by @astronemly (Rauscher+18) to make 2D maps, then stack them together to create a 3D spatial+spectral map. We use a clustering algorithm to look for patterns on this 3D map. This lets us identify a few spectra that show all the variability across the map. So you can see how temperature/composition changes over the dayside, but only have to analyze a couple high-precision spectra. So how does it work? Pretty well! We can identify large-scale features and flux gradients on dayside maps. This method has 2 big applications: 1. To get a first look at eclipse mapping data and identify large-scale features without assuming your observations will match your GCM predictions. 2. To plan what precision of observations you need to observe certain features in your models. Finally, we want the whole JWST observer community to be able to use this tool! GitHub repo currently in development, with a planned release of a user-friendly version in the next couple of months. PS I skipped a lot of the meat of this paper (bc who wants to read a tweet about PCA?) but if you want to know more just email/message me! @_astronomay Thanks, me too!",https://arxiv.org/abs/2010.02197,"Planetary atmospheres are inherently 3D objects that can have strong gradients in latitude, longitude, and altitude. Secondary eclipse mapping is a powerful way to map the 3D distribution of the atmosphere, but the data can have large correlations and errors in the presence of photon and instrument noise. We develop a technique to mitigate the large uncertainties of eclipse maps by identifying a small number of dominant spectra to make them more tractable for individual analysis via atmospheric retrieval. We use the eigencurves method to infer a multi-wavelength map of a planet from spectroscopic secondary eclipse light curves. We then apply a clustering algorithm to the planet map to identify several regions with similar emergent spectra. We combine the similar spectra together to construct an ""eigenspectrum"" for each distinct region on the planetary map. We demonstrate how this approach could be used to isolate hot from cold regions and/or regions with different chemical compositions in observations of hot Jupiters with the James Webb Space Telescope (JWST). 
We find that our method struggles to identify sharp edges in maps with sudden discontinuities, but generally can be used as a first step before a more physically motivated modeling approach to determine the primary features observed on the planet. ","Eigenspectra: A Framework for Identifying Spectra from 3D Eclipse
Mapping",12,"[""New paper on the arXiv today! Let's talk about eclipse mapping. \n\n#scicomm "", 'Before I even start, this was a group effort and it would never have happened without all my lovely colleagues: Everett Schlawin, Jake Lustig-Yaeger, Arthur Adams, @astronemly, Jacob Arcangeli, @cloudfreekat, Prashansa Gupta, Dylan Keating, @kevinbstevenson, Thomas Beatty.', 'Crash course on eclipse mapping: by observing secondary eclipse ingress/egress, when the planet is slowly disappearing/appearing behind the star, we can construct a 2D map of the dayside of a planet. https://t.co/H77sVdJfj0', 'With JWST, we can make ~spectroscopic~ eclipse mapping observations. And, since observing at different wavelengths = observing at different pressures/altitudes in a planet, this means 3D DAYSIDE MAPS https://t.co/FxcHesjsSY', ""But there's degeneracies in the info we get from these maps, since we are actually just observing flux over time but we want to determine flux over latitude and longitude. How to do it? Enter the Eigenspectra Mapping Method https://t.co/MCt7UFqjrM"", ""So there's a LOT of math in this paper, but the basic idea is we use a method developed by @astronemly (Rauscher+18) to make 2D maps, then stack them together to create a 3D spatial+spectral map. https://t.co/y91BxSs0ax"", 'We use a clustering algorithm to look for patterns on this 3D map. This lets us identify a few spectra that show all the variability across the map. So you can see how temperature/composition changes over the dayside, but only have to analyze a couple high-precision spectra. https://t.co/1NafH8H4fR', 'So how does it work? Pretty well! We can identify large-scale features and flux gradients on dayside maps. https://t.co/YI6uYWrSlc', 'This method has 2 big applications:\n1. To get a first look at eclipse mapping data and identify large-scale features without assuming your observations will match your GCM predictions.\n2. To plan what precision of observations you need to observe certain features in your models.', 'Finally, we want the whole JWST observer community to be able to use this tool! GitHub repo currently in development, with a planned release of a user-friendly version in the next couple of months. https://t.co/AFKW3xAhUq', 'PS I skipped a lot of the meat of this paper (bc who wants to read a tweet about PCA?) but if you want to know more just email/message me!', '@_astronomay Thanks, me too!']",20,10,2258
114,59,1407247333499285510,62044012,Michael Bronstein,"#GNNs are related to PDEs governing information diffusion on graphs. In a new paper with @b_p_chamberlain James Rowbottom @migorinova @stefan_webb @emaros96 we study a new class of Neural Graph Diffusion PDEs Blog post: Paper: Thinking of GNNs as partial differential equations leads to a new broad class of GNNs that are able to address in a principled way some of the prominent issues of current Graph ML models such as depth, oversmoothing, bottlenecks, and graph rewiring. Popular GNNs can be formalized as discretized diffusion PDEs with explicit single-step Euler scheme, where an iteration corresponds to a GNN layer, and running the diffusion for multiple iterations amounts to applying a GNN layer multiple times. In Neural PDEs formalism, the diffusion time parameter acts as a continuous analogy of the layersâan interpretation allowing us to exploit more efficient and stable numerical schemes that use adaptive steps in time. It might be advantageous to decouple the graph used for diffusion from the input graph (or ""rewire"" the graph). The diffusion framework offers a principled view on graph rewiring by considering the graph as a spatial discretization of some continuous object (manifold). In particular, the popular GAT of @PetarV_93 is a nonlinear diffusion PDE with a learnable diffusivity function similar to the Perona-Malik diffusion model used in the 90s in image processing",https://arxiv.org/abs/2106.10934,"We present Graph Neural Diffusion (GRAND) that approaches deep learning on graphs as a continuous diffusion process and treats Graph Neural Networks (GNNs) as discretisations of an underlying PDE. In our model, the layer structure and topology correspond to the discretisation choices of temporal and spatial operators. Our approach allows a principled development of a broad new class of GNNs that are able to address the common plights of graph learning models such as depth, oversmoothing, and bottlenecks. Key to the success of our models are stability with respect to perturbations in the data and this is addressed for both implicit and explicit discretisation schemes. We develop linear and nonlinear versions of GRAND, which achieve competitive results on many standard graph benchmarks. ",GRAND: Graph Neural Diffusion,6,"['#GNNs are related to PDEs governing information diffusion on graphs. In a new paper with @b_p_chamberlain James Rowbottom @migorinova @stefan_webb @emaros96 we study a new class of Neural Graph Diffusion PDEs\n\nBlog post: \n\nPaper: ', 'Thinking of GNNs as partial differential equations leads to a new broad class of GNNs that are able to address in a principled way some of the prominent issues of current Graph ML models such as depth, oversmoothing, bottlenecks, and graph rewiring.', 'Popular GNNs can be formalized as discretized diffusion PDEs with explicit single-step Euler scheme, where an iteration corresponds to a GNN layer, and running the diffusion for multiple iterations amounts to applying a GNN layer multiple times.', 'In Neural PDEs formalism, the diffusion time parameter acts as a continuous analogy of the layersâan interpretation allowing us to exploit more efficient and stable numerical schemes that use adaptive steps in time.', 'It might be advantageous to decouple the graph used for diffusion from the input graph (or ""rewire"" the graph). 
The diffusion framework offers a principled view on graph rewiring by considering the graph as a spatial discretization of some continuous object (manifold).', 'In particular, the popular GAT of @PetarV_93 is a nonlinear diffusion PDE with a learnable diffusivity function similar to the Perona-Malik diffusion model used in the 90s in image processing']",21,06,1421
115,138,1428336031078756353,1164202716,Magnus Jonsson,"Thank you @KAWstiftelsen for the nice description of our project, and for enabling our research. Our latest study just got available online, where we demonstrate electrical tuning of the nanoantennas. ...and here is the previous publication that is mentioned in the text. ",https://arxiv.org/abs/2108.04045,"Nanostructures of conventional metals offer manipulation of light at the nanoscale but are limited to static behavior due to their fixed material properties. To develop the next frontier of dynamic nanooptics and metasurfaces, we utilize the redox-tunable optical properties of conducting polymers, which were recently shown to be capable of sustaining plasmons in their most conducting oxidized state. Using nanodisks of poly(3,4-ethylenedioxythiophene:sulfate) (PEDOT:Sulf) as a model system, we present the first electrically tunable conducting polymer nanooptical antennas. In addition to repeated on/off switching of the polymeric nanoantennas, we demonstrate the possibility for gradual electrical tuning of their nanooptical response, which was found to be related to the modulation of both density and mobility of the mobile polaronic charge carriers in the polymer. The presented concept takes important steps towards electrically tunable metasurfaces with truly dynamic optical nanoantenna pixels, with not only varying farfield but also tunable nearfield. The work paves the way for applications ranging from tunable flat metaoptics to adaptable smart windows. ",Electrical Tuning of Plasmonic Conducting Polymer Nanoantennas,2,"['Thank you @KAWstiftelsen for the nice description of our project, and for enabling our research. Our latest study just got available online, where we demonstrate electrical tuning of the nanoantennas. ', '...and here is the previous publication that is mentioned in the text. https://t.co/0MVp22eSE6']",21,08,292
116,125,1479431716049670148,593911501,James Cadman,"New paper out today! We examine the possibility of triggered fragmentation from binary companions in self-gravitating discs. Observations (e.g Fontanive et al. 2019) find an excess of binary companions to close in (< 1AU), massive planets/BDs (7-60 Mjup), suggesting that stellar multiplicity may play a role in their formation. There are also indications that these objects may have formed through GI. Using SPH to explore a large range of binary parameter space (in separation, eccentricity and inclination) we find a sweet spot where intermediate separation binaries (a~100-400AU) can trigger fragmentation in a marginally unstable disc. The sweet spot found in binary parameter space is consistent with the projected separations observed to show an excess of close in, massive planets/BDs. Could triggered fragmentation contribute to this excess? This work was done with lots of kind help from @astroclem @cassidentprone and @theresphysics @theresphysics @TomHaworthAstro @astroclem @cassidentprone That would be interesting. We could definitely set up runs with circumprimary and circumsecondary discs. I’m not sure about the current status of phantom’s dust growth algorithms, or using multiple dust types - would have to look into that.",https://arxiv.org/abs/2201.02032,"Observations of systems hosting close in ($<1$ AU) giant planets and brown dwarfs ($M\gtrsim7$ M$_{\rm Jup}$) find an excess of binary star companions, indicating that stellar multiplicity may play an important role in their formation. There is now increasing evidence that some of these objects may have formed via fragmentation in gravitationally unstable discs. We present a suite of 3D smoothed particle hydrodynamics (SPH) simulations of binary star systems with circumprimary self-gravitating discs, which include a realistic approximation to radiation transport, and extensively explore the companion's orbital parameter space for configurations which may trigger fragmentation. We identify a ""sweet spot"" where intermediate separation binary companions ($100$ AU $\lesssim a\lesssim400$ AU) can cause a marginally stable disc to fragment. The exact range of ideal binary separations is a function of the companion's eccentricity, inclination and mass. Heating is balanced by efficient cooling, and fragmentation occurs inside a spiral mode driven by the companion. Short separation, disc penetrating binary encounters ($a\lesssim100$ AU) are prohibitive to fragmentation, as mass stripping and disc heating quench any instability. This is also true of binary companions with high orbital eccentricities ($e\gtrsim0.75$). Wide separation companions ($a\gtrsim500$ AU) have little effect on the disc properties for the setup parameters considered here. The sweet spot found is consistent with the range of binary separations which display an excess of close in giant planets and brown dwarfs. Hence we suggest that fragmentation triggered by a binary companion may contribute to the formation of these substellar objects. ",Binary companions triggering fragmentation in self-gravitating discs,6,"['New paper out today! We examine the possibility of triggered fragmentation from binary companions in self-gravitating discs. ', 'Observations (e.g Fontanive et al. 2019) find an excess of binary companions to close in (< 1AU), massive planets/BDs (7-60 Mjup), suggesting that stellar multiplicity may play a role in their formation. 
There are also indications that these objects may have formed through GI.', 'Using SPH to explore a large range of binary parameter space (in separation, eccentricity and inclination) we find a sweet spot where intermediate separation binaries (a~100-400AU) can trigger fragmentation in a marginally unstable disc.', 'The sweet spot found in binary parameter space is consistent with the projected separations observed to show an excess of close in, massive planets/BDs. Could triggered fragmentation contribute to this excess?', 'This work was done with lots of kind help from @astroclem @cassidentprone and @theresphysics', '@theresphysics @TomHaworthAstro @astroclem @cassidentprone That would be interesting. We could definitely set up runs with circumprimary and circumsecondary discs. I’m not sure about the current status of phantom’s dust growth algorithms, or using multiple dust types - would have to look into that.']",22,01,1260
117,111,1391829045889863683,32713500,Rob Nowak,"What kinds of functions do deep neural networks learn? Now we know! Our paper presents a new representer theorem for deep ReLU networks and provides theoretical insights into weight decay, sparsity, skip connections, and low-rank weight matrices. @roydanroy The main result says that a solution to a variational problem in a certain Banach space (compositional Radon BV) is a deep ReLU network having widths proportional to sample size, and that this optimization is equivalent to one over NNs that is min loss+ridge penalty on weights. @roydanroy It really falls together seamlessly. I think these are the ‘right’ spaces to be working with, at least for multilayer relu nets.",https://arxiv.org/abs/2105.03361,"We develop a variational framework to understand the properties of functions learned by fitting deep neural networks with rectified linear unit activations to data. We propose a new function space, which is reminiscent of classical bounded variation-type spaces, that captures the compositional structure associated with deep neural networks. We derive a representer theorem showing that deep ReLU networks are solutions to regularized data fitting problems over functions from this space. The function space consists of compositions of functions from the Banach spaces of second-order bounded variation in the Radon domain. These are Banach spaces with sparsity-promoting norms, giving insight into the role of sparsity in deep neural networks. The neural network solutions have skip connections and rank bounded weight matrices, providing new theoretical support for these common architectural choices. The variational problem we study can be recast as a finite-dimensional neural network training problem with regularization schemes related to the notions of weight decay and path-norm regularization. Finally, our analysis builds on techniques from variational spline theory, providing new connections between deep neural networks and splines. ","What Kinds of Functions do Deep Neural Networks Learn? Insights from
Variational Spline Theory",3,"['What kinds of functions do deep neural networks learn? Now we know!\n\nOur paper presents a new representer theorem for deep ReLU networks and provides theoretical insights into weight decay, sparsity, skip connections, and low-rank weight matrices.', '@roydanroy The main result says that a solution to a variational problem in a certain Banach space (compositional Radon BV) is a deep ReLU network having widths proportional to sample size, and that this optimization is equivalent to one over NNs that is min loss+ridge penalty on weights.', '@roydanroy It really falls together seamlessly. I think these are the ‘right’ spaces to be working with, at least for multilayer relu nets.']",21,05,683
118,194,1393268069703639040,713117884239753216,Xiaolong Wang,"Cycle comes again: we propose contrastive learning with cross-video cycle-consistency. Instead of learning by augmenting a single image, our method forms a cycle across different videos to provide positive training pairs from different instances. (1/n) Start from one frame in a video, we find its soft nearest neighbor from other videos as a forward step. The cycle-consistency is achieved when the soft nearest neighbor finds its closest frame back within the same video as the start frame. (2/n) The self-supervised image representation can be generalized to multiple downstream tasks beyond action recognition in videos, including image classification and object tracking. Project page: … Joint work with @Happy_Wu (3/n) @kevin_zakka It needs to have two videos from the same action class to form the cycle. So in the sense of knowing the two videos are from the same action class, it is labeled. @kevin_zakka But this is just stating the fact in the TCC experiments. If you are using PennAction category labels to get the pairs, it is ""require human annotators to provide grouth-truth pairs"" no? We are not trying to downgrade TCC, it is great work, and the statement is also neutral. @kevin_zakka And I also think it makes perfect sense for TCC to align two videos from the same category since it is performing a frame-level alignment. It will not make sense for TCC to apply on aligning two completely different action videos. For us, we are not trying to align two videos.",https://arxiv.org/abs/2105.06463,"Recent works have advanced the performance of self-supervised representation learning by a large margin. The core among these methods is intra-image invariance learning. Two different transformations of one image instance are considered as a positive sample pair, where various tasks are designed to learn invariant representations by comparing the pair. Analogically, for video data, representations of frames from the same video are trained to be closer than frames from other videos, i.e. intra-video invariance. However, cross-video relation has barely been explored for visual representation learning. Unlike intra-video invariance, ground-truth labels of cross-video relation is usually unavailable without human labors. In this paper, we propose a novel contrastive learning method which explores the cross-video relation by using cycle-consistency for general image representation learning. This allows to collect positive sample pairs across different video instances, which we hypothesize will lead to higher-level semantics. We validate our method by transferring our image representation to multiple downstream tasks including visual object tracking, image classification, and action recognition. We show significant improvement over state-of-the-art contrastive learning methods. Project page is available at this https URL ","Contrastive Learning of Image Representations with Cross-Video
Cycle-Consistency",6,"['Cycle comes again: we propose contrastive learning with cross-video cycle-consistency. Instead of learning by augmenting a single image, our method forms a cycle across different videos to provide positive training pairs from different instances. \n\n\n(1/n) ', 'Start from one frame in a video, we find its soft nearest neighbor from other videos as a forward step. The cycle-consistency is achieved when the soft nearest neighbor finds its closest frame back within the same video as the start frame. \n\n(2/n) https://t.co/HYxWYKAcki', 'The self-supervised image representation can be generalized to multiple downstream tasks beyond action recognition in videos, including image classification and object tracking. \n\nProject page: https://t.co/Fd4lOmC97M…\n\nJoint work with @Happy_Wu\n\n(3/n)', '@kevin_zakka It needs to have two videos from the same action class to form the cycle. So in the sense of knowing the two videos are from the same action class, it is labeled.', '@kevin_zakka But this is just stating the fact in the TCC experiments. If you are using PennAction category labels to get the pairs, it is ""require human annotators to provide grouth-truth pairs"" no? We are not trying to downgrade TCC, it is great work, and the statement is also neutral.', '@kevin_zakka And I also think it makes perfect sense for TCC to align two videos from the same category since it is performing a frame-level alignment. It will not make sense for TCC to apply on aligning two completely different action videos. For us, we are not trying to align two videos.']",21,05,1510
119,113,1181381037303177216,174052756,Thiago Serra,New paper on combining classical and quantum computing for #orms with @Ivey_Huang: - We show that IP can be used as a preprocessing step for adiabatic quantum optimization - We review adiabatic quantum computing & related work on minor embeddings Link: ,http://arxiv.org/abs/1910.02179,"Quantum Annealing (QA) can be used to quickly obtain near-optimal solutions for Quadratic Unconstrained Binary Optimization (QUBO) problems. In QA hardware, each decision variable of a QUBO should be mapped to one or more adjacent qubits in such a way that pairs of variables defining a quadratic term in the objective function are mapped to some pair of adjacent qubits. However, qubits have limited connectivity in existing QA hardware. This has spurred work on preprocessing algorithms for embedding the graph representing problem variables with quadratic terms into the hardware graph representing qubits adjacencies, such as the Chimera graph in hardware produced by D-Wave Systems. In this paper, we use integer linear programming to search for an embedding of the problem graph into certain classes of minors of the Chimera graph, which we call template embeddings. One of these classes corresponds to complete bipartite graphs, for which we show the limitation of the existing approach based on minimum Odd Cycle Transversals (OCTs). One of the formulations presented is exact, and thus can be used to certify the absence of a minor embedding using that template. On an extensive test set consisting of random graphs from five different classes of varying size and sparsity, we can embed more graphs than a state-of-the-art OCT-based approach, our approach scales better with the hardware size, and the runtime is generally orders of magnitude smaller. ",Template-based Minor Embedding for Adiabatic Quantum Optimization,1,['New paper on combining classical and quantum computing for #orms with @Ivey_Huang:\n\n- We show that IP can be used as a preprocessing step for adiabatic quantum optimization\n\n- We review adiabatic quantum computing & related work on minor embeddings\n\nLink: '],19,10,259
120,134,1402869254286946309,1357700078333550592,Keri Vos,"Very happy to announce my new paper, the first (of hopefully many!) together with Marzia @mukkietto. An analysis of Lepton Flavor Violation in Lambda decays Work to do for our #lhcb friends! Thanks to @v_researcher @mmulderphysics @lgreeven @PrPetitesMains for their comments and help",https://arxiv.org/abs/2106.05192,"Lepton flavour violation (LFV) naturally occurs in many new physics models, specifically in those explaining the $B$ anomalies. While LFV has already been studied for mesonic decays, it is important to consider also baryonic decays mediated by the same quark transition. In this paper, we study LFV in the baryonic $\Lambda_b \to \Lambda \ell_1 \ell_2$ using for the first time a full basis of New Physics operators. We present expected bounds on the branching ratio in a model-independent framework and using two specific new physics models. Finally, we point out the interplay and orthogonality between the baryonic and mesonic LFV searches. ",Lepton flavour violation in rare $\Lambda_b$ decays,2,"['Very happy to announce my new paper, the first (of hopefully many!) together with Marzia @mukkietto. An analysis of Lepton Flavor Violation in Lambda decays \nWork to do for our #lhcb friends!', 'Thanks to @v_researcher @mmulderphysics @lgreeven @PrPetitesMains for their comments and help']",21,06,291
121,198,1402665891062505478,757255726553333760,Amir Feder,"đđNew ACL paper: Are VQA Systems RAD? Measuring Robustness to Augmented Data with Focused Interventions đđ With: Daniel Rosenberg, @Itai_Gat, @roireichart Asks whether VQA systems understand language. 1/6 VQA systems seem to be constantly improving, with many different models achieving similar state-of-the-art performance. However, a more careful look reveals that they often do not understand the rich signal they are being fed with. 2/6 To understand and measure their generalization capabilities, we look at their robustness to counterfactual augmentations. These augmentations are designed to make a focused intervention on a specific property of the question such that the answer changes. 3/6 Using the augmentations, we introduce a new robustness measure, Robustness to Augmented Data (RAD), which measures the consistency of model predictions between original and counterfactual examples. 4/6 We show that RAD, unlike classical accuracy measures, can quantify when state-of-the-art systems are not robust to counterfactuals. We find substantial failure cases which reveal that current VQA systems are still brittle. 5/6 We also connect between robustness and out-of-distribution generalization, demonstrating the predictive power of RAD for performance on unseen augmentations. 6/6",https://arxiv.org/abs/2106.04484,"Deep learning algorithms have shown promising results in visual question answering (VQA) tasks, but a more careful look reveals that they often do not understand the rich signal they are being fed with. To understand and better measure the generalization capabilities of VQA systems, we look at their robustness to counterfactually augmented data. Our proposed augmentations are designed to make a focused intervention on a specific property of the question such that the answer changes. Using these augmentations, we propose a new robustness measure, Robustness to Augmented Data (RAD), which measures the consistency of model predictions between original and augmented examples. Through extensive experimentation, we show that RAD, unlike classical accuracy measures, can quantify when state-of-the-art systems are not robust to counterfactuals. We find substantial failure cases which reveal that current VQA systems are still brittle. Finally, we connect between robustness and generalization, demonstrating the predictive power of RAD for performance on unseen augmentations. ","Are VQA Systems RAD? Measuring Robustness to Augmented Data with Focused
Interventions",6,"['đđNew ACL paper: Are VQA Systems RAD? Measuring Robustness to Augmented Data with Focused Interventions đđ\n\nWith: Daniel Rosenberg, @Itai_Gat, @roireichart\n\nAsks whether VQA systems understand language. \n\n1/6', 'VQA systems seem to be constantly improving, with many different models achieving similar state-of-the-art performance. However, a more careful look reveals that they often do not understand the rich signal they are being fed with.\n\n2/6', 'To understand and measure their generalization capabilities, we look at their robustness to counterfactual augmentations.\n\nThese augmentations are designed to make a focused intervention on a specific property of the question such that the answer changes.\n\n3/6', 'Using the augmentations, we introduce a new robustness measure, Robustness to Augmented\nData (RAD), which measures the consistency\nof model predictions between original and counterfactual examples.\n\n4/6', 'We show that RAD, unlike classical accuracy measures, can quantify when state-of-the-art systems are not robust to counterfactuals. \n\nWe find substantial failure cases which reveal that current VQA systems are still brittle. \n\n5/6', 'We also connect between robustness and out-of-distribution generalization, demonstrating the predictive power of RAD for performance on unseen augmentations.\n\n6/6']",21,06,1299
122,42,1194606557272645633,139122509,Stephen Burgess,"New paper on an approach for performing robust instrumental variable analyses by Andrew Grant and myself: ""An efficient and robust approach to Mendelian randomization with measured pleiotropic effects in a high-dimensional setting"" - . Tweetorial follows!: Suppose genetic variants influence the outcome not via the risk factor (hence are not valid instruments), but pleiotropic pathways operate via a measured variable. Then we can perform multivariable Mendelian randomization including the variable to recover an unbiased estimate. However, there may be multiple different measured potentially pleiotropic variables. If we ignore them, then we get bias, but if we adjust for all of them, we will typically have an inefficient estimate as not all the pathways may influence the outcome. Additionally, if there are more potentially pleiotropic variables than genetic variants, then we can't include all variables in the analysis model. We have developed a lasso approach that only includes relevant pleiotropic variables in the analysis - hence the estimate is unbiased, but more efficient than that including all variables. The approach is implemented using summarized genetic association data. One problem is inference post-variable selection - we discuss this at length in the paper. Thanks to Stijn Vansteelandt for helpful conversations on this topic. We present as an example the effect of serum urate on coronary heart disease risk, showing that blood pressure is the key pleiotropic variable, and that the estimated direct effect of urate on CHD risk not via blood pressure is substantially attenuated. We recommend the method when it is believed that pleiotropic effects of genetic predictors of a risk factor could influence the outcome via measured pleiotropic pathways - could use as a main analysis or sensitivity. Comments welcome! @dpsg108 True, could be either direct (horizontal) or indirect (vertical/mediated) pleiotropy.",https://arxiv.org/abs/1911.00347,"Valid estimation of a causal effect using instrumental variables requires that all of the instruments are independent of the outcome conditional on the risk factor of interest and any confounders. In Mendelian randomization studies with large numbers of genetic variants used as instruments, it is unlikely that this condition will be met. Any given genetic variant could be associated with a large number of traits, all of which represent potential pathways to the outcome which bypass the risk factor of interest. Such pleiotropy can be accounted for using standard multivariable Mendelian randomization with all possible pleiotropic traits included as covariates. However, the estimator obtained in this way will be inefficient if some of the covariates do not truly sit on pleiotropic pathways to the outcome. We present a method which uses regularization to identify which out of a set of potential covariates need to be accounted for in a Mendelian randomization analysis in order to produce an efficient and robust estimator of a causal effect. The method can be used in the case where individual-level data are not available and the analysis must rely on summary-level data only. It can also be used in the case where there are more covariates under consideration than instruments, which is not possible using standard multivariable Mendelian randomization. We show the results of simulation studies which demonstrate the performance of the proposed regularization method in realistic settings. 
We also illustrate the method in an applied example which looks at the causal effect of urate plasma concentration on coronary heart disease. ","An efficient and robust approach to Mendelian randomization with
measured pleiotropic effects in a high-dimensional setting",9,"['New paper on an approach for performing robust instrumental variable analyses by Andrew Grant and myself: ""An efficient and robust approach to Mendelian randomization with measured pleiotropic effects in a high-dimensional setting"" - . Tweetorial follows!:', 'Suppose genetic variants influence the outcome not via the risk factor (hence are not valid instruments), but pleiotropic pathways operate via a measured variable. Then we can perform multivariable Mendelian randomization including the variable to recover an unbiased estimate.', 'However, there may be multiple different measured potentially pleiotropic variables. If we ignore them, then we get bias, but if we adjust for all of them, we will typically have an inefficient estimate as not all the pathways may influence the outcome.', ""Additionally, if there are more potentially pleiotropic variables than genetic variants, then we can't include all variables in the analysis model."", 'We have developed a lasso approach that only includes relevant pleiotropic variables in the analysis - hence the estimate is unbiased, but more efficient than that including all variables. The approach is implemented using summarized genetic association data.', 'One problem is inference post-variable selection - we discuss this at length in the paper. Thanks to Stijn Vansteelandt for helpful conversations on this topic.', 'We present as an example the effect of serum urate on coronary heart disease risk, showing that blood pressure is the key pleiotropic variable, and that the estimated direct effect of urate on CHD risk not via blood pressure is substantially attenuated.', 'We recommend the method when it is believed that pleiotropic effects of genetic predictors of a risk factor could influence the outcome via measured pleiotropic pathways - could use as a main analysis or sensitivity. Comments welcome!', '@dpsg108 True, could be either direct (horizontal) or indirect (vertical/mediated) pleiotropy.']",19,11,1947
123,113,1225166638909050882,39865873,Ryan Lowe,"New (ICLR) paper out on arXiv! đ„ł We started with the question ""how do you go from emergent communication to natural language?"", and ended up here: Spoiler: emergent communication may not be super useful for NLP, but self-play can be. Thread đ: (1/n) Back in 2017 when interning at OpenAI and working on multi-agent RL, @j_foerst and I came up with an idea. what if we: (1) trained a *population* of agents via emergent communication, and then (2) trained a *meta-agent* to adapt to all these populations? (2/n) The hope would be that after (2), you had an agent that can quickly learn new languages, and could hopefully learn English with fewer samples (probably with appropriate conditions on the initial distribution over languages) (3/n) @backpropper and I finally started working on this in early 2019. Initial results weren't promising: even in a simple game with an artificial compositional language (the 'target language'), it was hard for the agents to emerge languages that were useful for the meta-agent (4/n) (Although if you pre-defined the set of emergent languages to train on, it worked okay. See our early workshop submission here: , which also reveals how we were thinking about it at the time.) (5/n) We eventually realized that if you first did a bit of supervised learning on the target language (which is feasible in practice), things worked a lot better. So our focus shifted towards: ""how do we combine supervised learning + self-play for language learning?"" (6/n) The conclusion: it seems like (in our games) it's not actually a good idea to emerge a language from scratch! Instead, better to start with some supervised learning, then put some self-play updates in. (7/n) This is maybe obvious in retrospect, but for me it was surprising, since one of my main motivations for working on emergent communication was that it could be a way to get really good natural language-using agents. (8/n) So I'm now less optimistic on emergent communication as a path to good NLP, though I think self-play could still be a very useful signal to use in some cases. (9/n) All of this is joint work with the fantastic @backpropper, @j_foerst, @douwekiela, and Joelle Pineau. đ (n/n)",https://arxiv.org/abs/2002.01093,"A promising approach for teaching artificial agents to use natural language involves using human-in-the-loop training. However, recent work suggests that current machine learning methods are too data inefficient to be trained in this way from scratch. In this paper, we investigate the relationship between two categories of learning signals with the ultimate goal of improving sample efficiency: imitating human language data via supervised learning, and maximizing reward in a simulated multi-agent environment via self-play (as done in emergent communication), and introduce the term supervised self-play (S2P) for algorithms using both of these signals. We find that first training agents via supervised learning on human data followed by self-play outperforms the converse, suggesting that it is not beneficial to emerge languages from scratch. We then empirically investigate various S2P schedules that begin with supervised learning in two environments: a Lewis signaling game with symbolic inputs, and an image-based referential game with natural language descriptions. Lastly, we introduce population based approaches to S2P, which further improves the performance over single-agent methods. ","On the interaction between supervision and self-play in emergent
communication",10,"['New (ICLR) paper out on arXiv! đ„ł We started with the question ""how do you go from emergent communication to natural language?"", and ended up here: \n\nSpoiler: emergent communication may not be super useful for NLP, but self-play can be. \n\nThread đ: (1/n) ', 'Back in 2017 when interning at OpenAI and working on multi-agent RL, @j_foerst and I came up with an idea. what if we: \n\n(1) trained a *population* of agents via emergent communication, and then \n\n(2) trained a *meta-agent* to adapt to all these populations? (2/n)', 'The hope would be that after (2), you had an agent that can quickly learn new languages, and could hopefully learn English with fewer samples (probably with appropriate conditions on the initial distribution over languages) (3/n)', ""@backpropper and I finally started working on this in early 2019. Initial results weren't promising: even in a simple game with an artificial compositional language (the 'target language'), it was hard for the agents to emerge languages that were useful for the meta-agent (4/n)"", '(Although if you pre-defined the set of emergent languages to train on, it worked okay. See our early workshop submission here: https://t.co/3ml6ukO6RR, which also reveals how we were thinking about it at the time.) (5/n)', 'We eventually realized that if you first did a bit of supervised learning on the target language (which is feasible in practice), things worked a lot better. \n\nSo our focus shifted towards: ""how do we combine supervised learning + self-play for language learning?"" (6/n)', ""The conclusion: it seems like (in our games) it's not actually a good idea to emerge a language from scratch! \n\nInstead, better to start with some supervised learning, then put some self-play updates in. (7/n)"", 'This is maybe obvious in retrospect, but for me it was surprising, since one of my main motivations for working on emergent communication was that it could be a way to get really good natural language-using agents. (8/n)', ""So I'm now less optimistic on emergent communication as a path to good NLP, though I think self-play could still be a very useful signal to use in some cases. (9/n)"", 'All of this is joint work with the fantastic @backpropper, @j_foerst, @douwekiela, and Joelle Pineau. đ (n/n)']",20,02,2216
124,63,951717524122492928,749437811015704577,Ted Mackereth,"My latest efforts now on the arXiv (, with @rcrain_astro and other fine folks), in which we study the origins of element abundance patterns like that of our Galaxy in the EAGLE simulations #MilkyWay @rcrain_astro We show that galaxies with these patterns are rare in the sims (~5%), which is important to understand as we start to understand MW better with surveys like #Gaia In the paper, we relate this to an atypical dark matter accretion history in the galaxies that have these abund. trends - a prediction that can be tested in the MW",https://arxiv.org/abs/1801.03593,"Spectroscopic surveys of the Galaxy reveal that its disc stars exhibit a spread in $\mathrm{[\alpha/Fe]}$ at fixed $\mathrm{[Fe/H]}$, manifest at some locations as a bimodality. The origin of these diverse, and possibly distinct, stellar populations in the Galactic disc is not well understood. We examine the Fe and $\alpha$-element evolution of 133 Milky Way-like galaxies from the EAGLE simulation, to investigate the origin and diversity of their $\mathrm{[\alpha/Fe]}$-$\mathrm{[Fe/H]}$ distributions. We find that bimodal $\mathrm{[\alpha/Fe]}$ distributions arise in galaxies whose gas accretion histories exhibit episodes of significant infall at both early and late times, with the former fostering more intense star formation than the latter. The shorter characteristic consumption timescale of gas accreted in the earlier episode suppresses its enrichment with iron synthesised by Type Ia SNe, resulting in the formation of a high-$\mathrm{[\alpha/Fe]}$ sequence. We find that bimodality in $\mathrm{[\alpha/Fe]}$ similar to that seen in the Galaxy is rare, appearing in approximately 5 percent of galaxies in our sample. We posit that this is a consequence of an early gas accretion episode requiring the mass accretion history of a galaxy's dark matter halo to exhibit a phase of atypically-rapid growth at early epochs. The scarcity of EAGLE galaxies exhibiting distinct sequences in the $\mathrm{[\alpha/Fe]}$-$\mathrm{[Fe/H]}$ plane may therefore indicate that the Milky Way's elemental abundance patterns, and its accretion history, are not representative of the broader population of $\sim L^\star$ disc galaxies. ",The origin of diverse $\alpha$-element abundances in galaxy discs,3,"['My latest efforts now on the arXiv (, with @rcrain_astro and other fine folks), in which we study the origins of element abundance patterns like that of our Galaxy in the EAGLE simulations #MilkyWay ', '@rcrain_astro We show that galaxies with these patterns are rare in the sims (~5%), which is important to understand as we start to understand MW better with surveys like #Gaia', 'In the paper, we relate this to an atypical dark matter accretion history in the galaxies that have these abund. trends - a prediction that can be tested in the MW']",18,01,552
125,101,1127299092445421568,348637346,Diana Powell,"Can we measure total protoplanetary disk mass without assuming a tracer-to-H2 ratio? Check out our new paper on the locations of planet formation where we derive total gaseous surface densities for 7 protoplanetary disks using dust lines! The disks in our sample have newly derived masses that are 9-27% of their host stellar mass, substantially larger than the minimum mass solar nebula! All are stable to gravitational collapse except for one which approaches the limit of Toomre-Q stability. These masses are determined independent of an assumed dust opacity! Check out how the new total surface densities compared to previously derived values and the minimum mass solar nebula! These are massive disks! Our mass estimates are 2-15 times larger than estimates from integrated optically thin dust emission. In these models, the disks formed with an initial dust mass that is a factor of ~10 greater than is presently observed. More dust mass at early times! Of the three disks in our sample with resolved CO line emission, the masses of HD 163296, AS 209, and TW Hya are roughly 3, 115, and 40 times more massive than estimates from CO respectively. More evidence that the CO story in disks is complicated! Our method of determining surface density using dust lines is robust even if particles form as aggregates and is useful even in the presence of dust substructure caused by pressure traps. The low Toomre-Q values observed in this sample indicate that at least some disks do not accrete efficiently. ",https://arxiv.org/abs/1905.03252,"We present new determinations of disk surface density, independent of an assumed dust opacity, for a sample of 7 bright, diverse protoplanetary disks using measurements of disk dust lines. We develop a robust method for determining the location of dust lines by modeling disk interferometric visibilities at multiple wavelengths. The disks in our sample have newly derived masses that are 9-27% of their host stellar mass, substantially larger than the minimum mass solar nebula. All are stable to gravitational collapse except for one which approaches the limit of Toomre-Q stability. Our mass estimates are 2-15 times larger than estimates from integrated optically thin dust emission. We derive depleted dust-to-gas ratios with typical values of ~$10^{-3}$ in the outer disk. Using coagulation models we derive dust surface density profiles that are consistent with millimeter dust observations. In these models, the disks formed with an initial dust mass that is a factor of ~10 greater than is presently observed. Of the three disks in our sample with resolved CO line emission, the masses of HD 163296, AS 209, and TW Hya are roughly 3, 115, and 40 times more massive than estimates from CO respectively. This range indicates that CO depletion is not uniform across different disks and that dust is a more robust tracer of total disk mass. Our method of determining surface density using dust lines is robust even if particles form as aggregates and is useful even in the presence of dust substructure caused by pressure traps. The low Toomre-Q values observed in this sample indicate that at least some disks do not accrete efficiently. ","New Constraints From Dust Lines On The Surface Densities Of
Protoplanetary Disks",8,"['Can we measure total protoplanetary disk mass without assuming a tracer-to-H2 ratio? Check out our new paper on the locations of planet formation where we derive total gaseous surface densities for 7 protoplanetary disks using dust lines! ', 'The disks in our sample have newly derived masses that are 9-27% of their host stellar mass, substantially larger than the minimum mass solar nebula! All are stable to gravitational collapse except for one which approaches the limit of Toomre-Q stability.', 'These masses are determined independent of an assumed dust opacity!', 'Check out how the new total surface densities compared to previously derived values and the minimum mass solar nebula! These are massive disks! https://t.co/d4PhLzwPsA', 'Our mass estimates are 2-15 times larger than estimates from integrated optically thin dust emission. In these models, the disks formed with an initial dust mass that is a factor of ~10 greater than is presently observed. More dust mass at early times!', 'Of the three disks in our sample with resolved CO line emission, the masses of HD 163296, AS 209, and TW Hya are roughly 3, 115, and 40 times more massive than estimates from CO respectively. More evidence that the CO story in disks is complicated!', 'Our method of determining surface density using dust lines is robust even if particles form as aggregates and is useful even in the presence of dust substructure caused by pressure traps. https://t.co/gmVGS7NNPx', 'The low Toomre-Q values observed in this sample indicate that at least some disks do not accrete efficiently. https://t.co/P9Vf2aj4jX']",19,05,1534
126,116,996320945186004993,776074951262699521,Ivan Titov,New #ACL2018 semantic parsing paper by @chunchuan_lyu: AMR Graph Prediction with Latent Alignment: +3.4% in AMR (74.4%). Non-autoregressive model for graph prediction (VAE with Gumbel-Sinkhorn to relax discrete latent alignments) #NLProc @EdinburghNLP ,https://arxiv.org/abs/1805.05286,"Abstract meaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclic graphs. AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We introduce a neural parser which treats alignments as latent variables within a joint probabilistic model of concepts, relations and alignments. As exact inference requires marginalizing over alignments and is infeasible, we use the variational auto-encoding framework and a continuous relaxation of the discrete alignments. We show that joint modeling is preferable to using a pipeline of align and parse. The parser achieves the best reported results on the standard benchmark (74.4% on LDC2016E25). ",AMR Parsing as Graph Prediction with Latent Alignment,1,['New #ACL2018 semantic parsing paper by @chunchuan_lyu: \nAMR Graph Prediction with Latent Alignment: +3.4% in AMR (74.4%). Non-autoregressive model for graph prediction (VAE with Gumbel-Sinkhorn to relax discrete latent alignments)\n #NLProc @EdinburghNLP '],18,05,266
127,122,1224724623910154242,458603378,Marian,"New Paper: Adversarial Generation of Continuous Implicit Shape Representations We propose two GANs with a DeepSDF network as the generator and either a 3D CNN or a Pointnet as the discriminator. Written by @rusty1s and me! Paper: @BriamMor I started with 3D GANs and only did 2D GANs after that. My advice would be to start with 2D GANs since that makes data preparation and visualization a lot easier. If you want to get results quickly, try to get existing implementations to run. @BriamMor But if you want to understand how it works, try to implement it yourself. I would start with a fully connected GAN on something like MNIST, then a DCGAN, then WGAN-GP, then implement progressive growing, then StyleGAN. You could also go for StyleGAN right away. @rusty1s Check out my blog post about the paper. It has animated visualizations! ",https://arxiv.org/abs/2002.00349,"This work presents a generative adversarial architecture for generating three-dimensional shapes based on signed distance representations. While the deep generation of shapes has been mostly tackled by voxel and surface point cloud approaches, our generator learns to approximate the signed distance for any point in space given prior latent information. Although structurally similar to generative point cloud approaches, this formulation can be evaluated with arbitrary point density during inference, leading to fine-grained details in generated outputs. Furthermore, we study the effects of using either progressively growing voxel- or point-processing networks as discriminators, and propose a refinement scheme to strengthen the generator's capabilities in modeling the zero iso-surface decision boundary of shapes. We train our approach on the ShapeNet benchmark dataset and validate, both quantitatively and qualitatively, its performance in generating realistic 3D shapes. ",Adversarial Generation of Continuous Implicit Shape Representations,4,"['New Paper: Adversarial Generation of Continuous Implicit Shape Representations\n\nWe propose two GANs with a DeepSDF network as the generator and either a 3D CNN or a Pointnet as the discriminator. Written by @rusty1s and me!\n\nPaper: ', '@BriamMor I started with 3D GANs and only did 2D GANs after that. My advice would be to start with 2D GANs since that makes data preparation and visualization a lot easier. If you want to get results quickly, try to get existing implementations to run.', '@BriamMor But if you want to understand how it works, try to implement it yourself. I would start with a fully connected GAN on something like MNIST, then a DCGAN, then WGAN-GP, then implement progressive growing, then StyleGAN. You could also go for StyleGAN right away.', '@rusty1s Check out my blog post about the paper. It has animated visualizations!\nhttps://t.co/wLFpUKgMv7 https://t.co/5QVT7xUx0w']",20,02,863
128,192,1334872698133106691,2445322540,Pascal Fua,"We propose a way to exploit contrastive self-supervised learning to extract rich latent vectors from videos. Given this representation, a mapping to 3D pose can be learned from a very little annotated data. @HelgeRhodin #DeepLearning #computervision ",https://arxiv.org/abs/2012.01511,"In this paper we propose an unsupervised learning method to extract temporal information on monocular videos, where we detect and encode subject of interest in each frame and leverage contrastive self-supervised (CSS) learning to extract rich latent vectors. Instead of simply treating the latent features of nearby frames as positive pairs and those of temporally-distant ones as negative pairs as in other CSS approaches, we explicitly disentangle each latent vector into a time-variant component and a time-invariant one. We then show that applying CSS only to the time-variant features and encouraging a gradual transition on them between nearby and away frames while also reconstructing the input, extract rich temporal features into the time-variant component, well-suited for human pose estimation. Our approach reduces error by about 50\% compared to the standard CSS strategies, outperforms other unsupervised single-view methods and matches the performance of multi-view techniques. ","Unsupervised Temporal Learning on Monocular Videos for 3D Human Pose
Estimation",1,"['We propose a way to exploit contrastive self-supervised learning to extract rich latent vectors from videos. Given this representation, a mapping to 3D pose can be learned from a very little annotated data. @HelgeRhodin #DeepLearning #computervision ']",20,12,263
129,18,1410039972422328322,1250475209959698432,Dara Bahri,"Excited to share our new paper from @GoogleAI. ""SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption"" (). This is joint work with Heinrich Jiang, @ytay017 and @metzlerd. TLDR -- a neat contrastive pre-training scheme for deep nets on tabular data that improves classification performance generally, when labeled data is limited, and when there are noisy labels.",https://arxiv.org/abs/2106.15147,"Self-supervised contrastive representation learning has proved incredibly successful in the vision and natural language domains, enabling state-of-the-art performance with orders of magnitude less labeled data. However, such methods are domain-specific and little has been done to leverage this technique on real-world tabular datasets. We propose SCARF, a simple, widely-applicable technique for contrastive learning, where views are formed by corrupting a random subset of features. When applied to pre-train deep neural networks on the 69 real-world, tabular classification datasets from the OpenML-CC18 benchmark, SCARF not only improves classification accuracy in the fully-supervised setting but does so also in the presence of label noise and in the semi-supervised setting where only a fraction of the available training data is labeled. We show that SCARF complements existing strategies and outperforms alternatives like autoencoders. We conduct comprehensive ablations, detailing the importance of a range of factors. ","SCARF: Self-Supervised Contrastive Learning using Random Feature
Corruption",2,"['Excited to share our new paper from @GoogleAI. ""SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption"" (). This is joint work with Heinrich Jiang, @ytay017 and @metzlerd. ', 'TLDR -- a neat contrastive pre-training scheme for deep nets on tabular data that improves classification performance generally, when labeled data is limited, and when there are noisy labels.']",21,06,397
130,41,1194712172183474176,24859650,Jan-Willem van de Meent,"1/ New work by Alican (@alicanb_) and Babak (@BabakEsmaeili10): ""Evaluating Combinatorial Generalization in Variational Autoencoders"" () In this paper we ask the question: ""To what extent do VAEs generalize to unseen combinations of features?""(thread) 2/ In any dataset that is characterized by even a moderate number of factors of variation, we cannot hope to have representatives of all combinations of latent features in our training data, since the size of a ""complete"" dataset grows exponentially with the number of features. 3/ In this paper, we test what happens when test-time examples differ substantially from training examples with respect to a least one feature. We are particularly interested in the effect of model capacity on generalization. 4/ Alican and Babak trained over 3000 VAE instances to systematically evaluate the effect of network width & depth, KL regularization, the amount of training data, and the density of the training set in feature space. We were surprised by the results! 5/ For easy generalization problems (i.e. when test set examples are similar in pixel space to training examples), increasing model capacity always improves generalization. In this regime, memorizing training data does not hurt generalization performance. 6/ For more difficult problems we observe a range of behaviors. In this regime, increasing the layer depth can lead to data memorization, which adversely affects generalization when the test data are not similar to the training data. However this is not always the case. 7/ The level of KL regularization also plays a crucial role, as evidenced by rate-distortion analysis. 1-layer networks show a U-shaped curve as a function of the level of regularization. In 3-layer networks, decreasing the regularization paradoxically improves generalization. 8/ Our results suggest that the generalization characteristics of VAEs are by no means straightforward. Network depth, KL regularization, and the difficulty of the problem can give rise of a range of possible behaviors, and need to be considered when evaluating generalization. 9/ In this paper, we restrict ourselves to fully-connected encoders and decoders in this work. As part of this paper, we will be releasing our Tetraminoes dataset, along with source code for all experiments. We would be excited to hear about results for different architectures! ",https://arxiv.org/abs/1911.04594,"Variational autoencoders optimize an objective that combines a reconstruction loss (the distortion) and a KL term (the rate). The rate is an upper bound on the mutual information, which is often interpreted as a regularizer that controls the degree of compression. We here examine whether inclusion of the rate also acts as an inductive bias that improves generalization. We perform rate-distortion analyses that control the strength of the rate term, the network capacity, and the difficulty of the generalization problem. Decreasing the strength of the rate paradoxically improves generalization in most settings, and reducing the mutual information typically leads to underfitting. Moreover, we show that generalization continues to improve even after the mutual information saturates, indicating that the gap on the bound (i.e. the KL divergence relative to the inference marginal) affects generalization. 
This suggests that the standard Gaussian prior is not an inductive bias that typically aids generalization, prompting work to understand what choices of priors improve generalization in VAEs. ",Rate-Regularization and Generalization in VAEs,9,"['1/ New work by Alican (@alicanb_) and Babak (@BabakEsmaeili10): ""Evaluating Combinatorial Generalization in Variational Autoencoders"" ()\n\nIn this paper we ask the question: ""To what extent do VAEs generalize to unseen combinations of features?""(thread) ', '2/ In any dataset that is characterized by even a moderate number of factors of variation, we cannot hope to have representatives of all combinations of latent features in our training data, since the size of a ""complete"" dataset grows exponentially with the number of features.', '3/ In this paper, we test what happens when test-time examples differ substantially from training examples with respect to a least one feature. We are particularly interested in the effect of model capacity on generalization.', '4/ Alican and Babak trained over 3000 VAE instances to systematically evaluate the effect of network width & depth, KL regularization, the amount of training data, and the density of the training set in feature space. We were surprised by the results! https://t.co/qGEOmKEc9V', '5/ For easy generalization problems (i.e. when test set examples are similar in pixel space to training examples), increasing model capacity always improves generalization. In this regime, memorizing training data does not hurt generalization performance. https://t.co/FipiYRorev', '6/ For more difficult problems we observe a range of behaviors. In this regime, increasing the layer depth can lead to data memorization, which adversely affects generalization when the test data are not similar to the training data. However this is not always the case.', '7/ The level of KL regularization also plays a crucial role, as evidenced by rate-distortion analysis. 1-layer networks show a U-shaped curve as a function of the level of regularization. In 3-layer networks, decreasing the regularization paradoxically improves generalization. https://t.co/Q92RNMVx9U', '8/ Our results suggest that the generalization characteristics of VAEs are by no means straightforward. Network depth, KL regularization, and the difficulty of the problem can give rise of a range of possible behaviors, and need to be considered when evaluating generalization.', '9/ In this paper, we restrict ourselves to fully-connected encoders and decoders in this work. As part of this paper, we will be releasing our Tetraminoes dataset, along with source code for all experiments. We would be excited to hear about results for different architectures! https://t.co/RaflagGjfB']",19,11,2411
131,14,1111739025696657411,2427184074,Christopher Berry,"New on @arxiv this week, our #Astro2020 white paper on observing binary black holes with next-generation #GravitationalWave detectors Deeper, wider, sharper: Next-generation ground-based gravitational-wave observations of binary black holes We ask what is required of a detector to Detect binary black holes throughout the observable universe? Precisely measure their masses, spins, etc. and how these distributions evolve? Uncover the seeds of the massive black holes found in galactic centres? #Astro2020 Current detectors like @LIGO can do great things, but we're limited in how far away we can detect sources. More sensitive detectors = deeper survey of the Universe. We'd like to survey back before the peak in star formation (z ~ 2) to the end of the cosmological dark ages z ~ 20 How far could we see a binary black hole system with a future detector? The white solid line shows horizon with Cosmic Explore, the white dashed with the Einstein telescope. The colour coding is for a boost β relative to @LIGO A+, and the blue line is for β =10 #Astro2020 If we increase the frequency range of our detectors we can detect more systems. Wider frequency range = wider range of sources. Lower frequency sensitivities lets us detect higher mass systems. A 100+100 solar mass binary at z = 10 is unobservable above 10 Hz #Astro2020 We want to detect heavier systems as these might tell us where supermassive black holes come from. This plot shows the low frequency sensitivity we need to detect a 100+100 solar mass system at z = 10 if we parametrize the PSD as S = S10 (f/10 Hz)^α down to a cut-off f = fmin Improved detector sensitivity + wider frequency bandwidth = sharper parameter measurements. With next-gen #GravitationalWave detectors, we would precisely pin down the mass and spin distributions of black holes, which are clues to how they form #Astro2020 Next-gen #GravitationalWave detectors will let us measure how mass and spin distributions evolve through cosmic time. We need lots of detections to do this. This plot shows detections per year for different redshift bins - they max out when we detect ALL of them #Astro2020 ",https://arxiv.org/abs/1903.09220,"Next-generation observations will revolutionize our understanding of binary black holes and will detect new sources, such as intermediate-mass black holes. Primary science goals include: Discover binary black holes throughout the observable Universe; Reveal the fundamental properties of black holes; Uncover the seeds of supermassive black holes. ","Deeper, Wider, Sharper: Next-Generation Ground-Based Gravitational-Wave
Observations of Binary Black Holes",8,"['New on @arxiv this week, our #Astro2020 white paper on observing binary black holes with next-generation #GravitationalWave detectors\n\nDeeper, wider, sharper: Next-generation ground-based gravitational-wave observations of binary black holes\n ', 'We ask what is required of a detector to\n Detect binary black holes throughout the observable universe?\n Precisely measure their masses, spins, etc. and how these distributions evolve?\n Uncover the seeds of the massive black holes found in galactic centres?\n#Astro2020', ""Current detectors like @LIGO can do great things, but we're limited in how far away we can detect sources. More sensitive detectors = deeper survey of the Universe. We'd like to survey back before the peak in star formation (z ~ 2) to the end of the cosmological dark ages z ~ 20"", 'How far could we see a binary black hole system with a future detector? \nThe white solid line shows horizon with Cosmic Explore, the white dashed with the Einstein telescope. The colour coding is for a boost β relative to @LIGO A+, and the blue line is for β =10 #Astro2020 https://t.co/jfQyu52U20', 'If we increase the frequency range of our detectors we can detect more systems. Wider frequency range = wider range of sources. Lower frequency sensitivities lets us detect higher mass systems. A 100+100 solar mass binary at z = 10 is unobservable above 10 Hz #Astro2020 https://t.co/xIsVzhsXfB', 'We want to detect heavier systems as these might tell us where supermassive black holes come from. This plot shows the low frequency sensitivity we need to detect a 100+100 solar mass system at z = 10 if we parametrize the PSD as S = S10 (f/10 Hz)^α down to a cut-off f = fmin https://t.co/PgmMsGTlkc', 'Improved detector sensitivity + wider frequency bandwidth = sharper parameter measurements. With next-gen #GravitationalWave detectors, we would precisely pin down the mass and spin distributions of black holes, which are clues to how they form https://t.co/4oZISD4pat #Astro2020', 'Next-gen #GravitationalWave detectors will let us measure how mass and spin distributions evolve through cosmic time. We need lots of detections to do this. This plot shows detections per year for different redshift bins - they max out when we detect ALL of them #Astro2020 https://t.co/I1vo81yEIA']",19,03,2190
132,153,1499019305475887107,336358884,Swati Gupta,"Excited about our new paper with Yuri Faenza and Xuan Zhang on ""Discovering Opportunities in New York City's Discovery Program"". Check it out here: [1/n] The Discovery Program is an affirmative action policy for admissions to the top specialized high schools in New York City. It has been instrumental in increasing the number of disadvantaged students admitted to top high schools. [2/n] However, our analysis on past data from NY DOE shows that this program has (i) created ~950 in-group blocking pairs, impacting ~650 students yearly, (ii) created an incentive for top disadvantaged students to underperform! (lower performers matched to better schools). [3/n] In this work, we analyze to mechanisms, which minimally change the discovery program - Joint Seat Allocation (JSA) and Minority Reserve (MR). Both are weakly group-strategy proof and result in no in-group blocking pairs. [4/n] We show that for ""highly competitive"" markets such as NY High Schools, JSA dominates MR. This leads to our proposal to DOE: use JSA instead of Discovery, to account for students' preference of summer school seats as higher than general admissions seats. @XuanZhang816 !!",https://arxiv.org/abs/2203.00544,"Discovery program (DISC) is an affirmative action policy used by the New York City Department of Education (NYC DOE). It has been instrumental in increasing the number of admissions for disadvantaged students at specialized high schools. However, our empirical analysis of the student-school matches shows that about 950 in-group blocking pairs were created each year amongst the disadvantaged group of students, impacting about 650 disadvantaged students. Moreover, we find that this program usually benefits lower-performing disadvantaged students more than the top-performing ones, thus unintentionally creating an incentive to under-perform. In this work, we explore two affirmative action policies that can be used to minimally modify and improve the discovery program: minority reserve (MR) and joint-seat allocation (JSA). We show that (i) both MR and JSA result in no in-group blocking pairs, and (ii) JSA is weakly group strategy-proof, ensures that at least one disadvantaged is not worse off, and when reservation quotas are carefully chosen then no disadvantaged student is worse-off. In the general setting, we show that there is no clear winner in terms of the matchings provided by DISC, JSA and MR, from the perspective of disadvantaged students. We however characterize a condition for markets, that we term high competitiveness, where JSA dominates MR for disadvantaged students. This condition is verified in markets when there is a higher demand for seats than supply, and the performances of disadvantaged students are significantly lower than that of advantaged students. Data from NYC DOE satisfy the high competitiveness condition, and for this dataset our empirical results corroborate our theoretical predictions, showing the superiority of JSA. We believe that the discovery program, and more generally affirmative action mechanisms, can be changed for the better by implementing JSA. ","Discovering Opportunities in New York City's Discovery Program: an
Analysis of Affirmative Action Mechanisms",6,"['Excited about our new paper with Yuri Faenza and Xuan Zhang on ""Discovering Opportunities in New York City\'s Discovery Program"". Check it out here: [1/n]', 'The Discovery Program is an affirmative action policy for admissions to the top specialized high schools in New York City. It has been instrumental in increasing the number of disadvantaged students admitted to top high schools. [2/n]', 'However, our analysis on past data from NY DOE shows that this program has (i) created ~950 in-group blocking pairs, impacting ~650 students yearly, (ii) created an incentive for top disadvantaged students to underperform! (lower performers matched to better schools). [3/n]', 'In this work, we analyze to mechanisms, which minimally change the discovery program - Joint Seat Allocation (JSA) and Minority Reserve (MR). Both are weakly group-strategy proof and result in no in-group blocking pairs. [4/n]', 'We show that for ""highly competitive"" markets such as NY High Schools, JSA dominates MR. This leads to our proposal to DOE: use JSA instead of Discovery, to account for students\' preference of summer school seats as higher than general admissions seats.', '@XuanZhang816 !!']",22,03,1168
133,109,1121370039012212736,3334758406,Alejandro Lumbreras-Calle,"We published a new paper today! We analyze the morphology of low-mass star-forming galaxies, masking the star-forming regions, and found two different classes! A smaller, roundish, redder class and a larger, disk-like, bluer one. Check it out here: To mask the star-forming regions and analyze only the stellar host, we had to create high-resolution Hα images using broad-band HST data, and we used a new bayesian code, PHI () to perform the fitting. We applied it to 7 HST bands, independently. Enjoy!",https://arxiv.org/abs/1904.10462,"The morphological evolution of star-forming galaxies provides important clues to understand their physical properties, as well as the triggering and quenching mechanisms of star formation. We aim at connecting morphology and star-formation properties of low-mass galaxies (median stellar mass $\sim$ 10$^{8.5}$ M$_{\odot}$) at low redshift ($z<0.36$). We use a sample of medium-band selected star-forming galaxies from the GOODS-North field. H$\alpha$ images for the sample are created combining both spectral energy distribution fits and HST data. Using them, we mask the star forming regions to obtain an unbiased two-dimensional model of the light distribution of the host galaxies. For this purpose we use $\texttt{PHI}$, a new Bayesian photometric decomposition code. We apply it independently to 7 HST bands assuming a S\'ersic surface brightness model. Star-forming galaxy hosts show low S\'ersic index (with median $n$ $\sim$ 0.9), as well as small sizes (median $R_e$ $\sim$ 1.6 kpc), and negligible change of the parameters with wavelength (except for the axis ratio, which grows with wavelength). Using a clustering algorithm, we find two different classes of star-forming galaxies: A more compact, redder, and high-$n$ (class A) and a more extended, bluer and lower-$n$ one (class B). We also find evidence that the first class is more spheroidal-like. In addition, we find that 48% of the analyzed galaxies present negative color gradients (only 5% are positive). The host component of low-mass star-forming galaxies at $z<0.36$ separates into two different classes, similar to what has been found for their higher mass counterparts. The results are consistent with an evolution from class B to class A. Several mechanisms from the literature, like minor and major mergers, and violent disk instability, can explain the physical process behind the likely transition between the classes. [abridged] ","The stellar host in star-forming low-mass galaxies: Evidence for two
classes",2,"['We published a new paper today! We analyze the morphology of low-mass star-forming galaxies, masking the star-forming regions, and found two different classes! A smaller, roundish, redder class and a larger, disk-like, bluer one. Check it out here: ', 'To mask the star-forming regions and analyze only the stellar host, we had to create high-resolution Hα images using broad-band HST data, and we used a new bayesian code, PHI (https://t.co/i7BqQjJ1UU) to perform the fitting. We applied it to 7 HST bands, independently. Enjoy!']",19,04,522
134,19,1265644032291717123,2797254210,Michael A. Perlin,"New paper from a summer project at Argonne National Lab (@argonne)! We present an improved method to ""cut"" quantum circuits into smaller sub-circuits. We also show that circuit cutting can do *better* than running the full circuit to estimate its output. ",https://arxiv.org/abs/2005.12702,"We introduce maximum likelihood fragment tomography (MLFT) as an improved circuit cutting technique for running clustered quantum circuits on quantum devices with a limited number of qubits. In addition to minimizing the classical computing overhead of circuit cutting methods, MLFT finds the most likely probability distribution for the output of a quantum circuit, given the measurement data obtained from the circuit's fragments. We demonstrate the benefits of MLFT for accurately estimating the output of a fragmented quantum circuit with numerical experiments on random unitary circuits. Finally, we show that circuit cutting can estimate the output of a clustered circuit with higher fidelity than full circuit execution, thereby motivating the use of circuit cutting as a standard tool for running clustered circuits on quantum hardware. ",Quantum Circuit Cutting with Maximum Likelihood Tomography,1,"['New paper from a summer project at Argonne National Lab (@argonne)!\n\nWe present an improved method to ""cut"" quantum circuits into smaller sub-circuits. We also show that circuit cutting can do *better* than running the full circuit to estimate its output.\n\n']",20,05,261
135,39,1471823867404509194,824380220367060992,Dr. Amy McGovern,Our new paper on AI and Ethics for environmental sciences is on arxiv - submitted to EDS for first issue & next paper in prep for AIES! Fun and important work with @DJGagneDos @Iebertu Ann Bostrom @ai2enviro #AIEthics #ArtificialIntelligence #wxtwitter ,https://arxiv.org/abs/2112.08453,"Given the growing use of Artificial Intelligence (AI) and machine learning (ML) methods across all aspects of environmental sciences, it is imperative that we initiate a discussion about the ethical and responsible use of AI. In fact, much can be learned from other domains where AI was introduced, often with the best of intentions, yet often led to unintended societal consequences, such as hard coding racial bias in the criminal justice system or increasing economic inequality through the financial system. A common misconception is that the environmental sciences are immune to such unintended consequences when AI is being used, as most data come from observations, and AI algorithms are based on mathematical formulas, which are often seen as objective. In this article, we argue the opposite can be the case. Using specific examples, we demonstrate many ways in which the use of AI can introduce similar consequences in the environmental sciences. This article will stimulate discussion and research efforts in this direction. As a community, we should avoid repeating any foreseeable mistakes made in other domains through the introduction of AI. In fact, with proper precautions, AI can be a great tool to help {\it reduce} climate and environmental injustice. We primarily focus on weather and climate examples but the conclusions apply broadly across the environmental sciences. ","The Need for Ethical, Responsible, and Trustworthy Artificial
Intelligence for Environmental Sciences",1,['Our new paper on AI and Ethics for environmental sciences is on arxiv - submitted to EDS for first issue & next paper in prep for AIES! Fun and important work with @DJGagneDos @Iebertu Ann Bostrom @ai2enviro #AIEthics #ArtificialIntelligence #wxtwitter\n'],21,12,259
136,67,1130727097079619584,1020088099,Umberto Picchini,"New paper with @bayesian_stats: we accelerate pseudomarginal ABC-MCMC sampling for expensive models via data resampling. We reduce bias due to resampling using stratified Monte Carlo. This also allows using a large ABC threshold. thanks to a larger threshold, when using appropriate strata we experience lower integrated autocorrelation times (IAT), which means the MCMC procedure is more efficient.",http://arxiv.org/abs/1905.07976,"Approximate Bayesian computation (ABC) is computationally intensive for complex model simulators. To exploit expensive simulations, data-resampling via bootstrapping can be employed to obtain many artificial datasets at little cost. However, when using this approach within ABC, the posterior variance is inflated, thus resulting in biased posterior inference. Here we use stratified Monte Carlo to considerably reduce the bias induced by data resampling. We also show empirically that it is possible to obtain reliable inference using a larger than usual ABC threshold. Finally, we show that with stratified Monte Carlo we obtain a less variable ABC likelihood. Ultimately we show how our approach improves the computational efficiency of the ABC samplers. We construct several ABC samplers employing our methodology, such as rejection and importance ABC samplers, and ABC-MCMC samplers. We consider simulation studies for static (Gaussian, g-and-k distribution, Ising model, astronomical model) and dynamic models (Lotka-Volterra). We compare against state-of-art sequential Monte Carlo ABC samplers, synthetic likelihoods, and likelihood-free Bayesian optimization. For a computationally expensive Lotka-Volterra case study, we found that our strategy leads to a more than 10-fold computational saving, compared to a sampler that does not use our novel approach. ","Stratified sampling and bootstrapping for approximate Bayesian
computation",2,"['New paper with @bayesian_stats: we accelerate pseudomarginal ABC-MCMC sampling for expensive models via data resampling. We reduce bias due to resampling using stratified Monte Carlo. This also allows using a large ABC threshold. ', 'thanks to a larger threshold, when using appropriate strata we experience lower integrated autocorrelation times (IAT), which means the MCMC procedure is more efficient.']",19,05,413
137,21,1166334749729857536,185910194,Graham Neubig,"#EMNLP2019 paper ""A Little Annotation does a Lot of Good: A Study in Bootstrapping Low-resource NER"", examines cross-lingual transfer and active learning to create NER systems in new languages: Interestingly, even a little annotation is enough to get quite good results, only a few 100 or 1000 entities. Differences between simulation and human annotation are also interesting.",https://arxiv.org/abs/1908.08983,"Most state-of-the-art models for named entity recognition (NER) rely on the availability of large amounts of labeled data, making them challenging to extend to new, lower-resourced languages. However, there are now several proposed approaches involving either cross-lingual transfer learning, which learns from other highly resourced languages, or active learning, which efficiently selects effective training data based on model predictions. This paper poses the question: given this recent progress, and limited human annotation, what is the most effective method for efficiently creating high-quality entity recognizers in under-resourced languages? Based on extensive experimentation using both simulated and real human annotation, we find a dual-strategy approach best, starting with a cross-lingual transferred model, then performing targeted annotation of only uncertain entity spans in the target language, minimizing annotator effort. Results demonstrate that cross-lingual transfer is a powerful tool when very little data can be annotated, but an entity-targeted annotation strategy can achieve competitive accuracy quickly, with just one-tenth of training data. ","A Little Annotation does a Lot of Good: A Study in Bootstrapping
Low-resource Named Entity Recognizers",2,"['#EMNLP2019 paper ""A Little Annotation does a Lot of Good: A Study in Bootstrapping Low-resource NER"", examines cross-lingual transfer and active learning to create NER systems in new languages: ', 'Interestingly, even a little annotation is enough to get quite good results, only a few 100 or 1000 entities. Differences between simulation and human annotation are also interesting.']",19,08,391
138,78,996006570868822016,364410282,Kevin H. Wilson,"Old project new paper acceptance! ""Using Machine Learning to Assess the Risk of and Prevent Water Main Breaks"" @ KDD. 1/ This was a project at @datascifellows. We had a _lot_ of co-authors (which I think is awesome). @samedelstein, @adriaf, @CKenneyJr, @rayidghani, Avishek Kumar, Ali Rizvi, Ben Brooks, Ali Vanderveld, Andrew Maxwell, and Joe Zuckerbraun 2/ Water main breaks are a huge problem in Rust Belt cities. The infrastructure was built assuming a growing population and growing suburanization. The tax base hasn't kept up with original projections, so maintenance $$$s are declining. 3/ On the other hand, people aren't moving into the city core, so cities can't simply decomission parts of their infrastructure without cutting water off to residents. The result? 4/ We also know that preventive maintenance is chepar than reactive maintenance. A bunch of restaurants with no water for a day is a _really bad thing_. So how should we order maintenance? 5/ A first pass indicates that if you just replaced 50 pipes that have broken in the past, you would stop about 24 water main breaks. This indicates that the way that pipes are temporarily repaired doesn't hold for very long. 6/ On the other hand, a lot of pipes have broken in the past. Can we do even better? Accounting for pipe age, diameter, and material, as well as the surrounding soil type, we could go from preventing 24 breaks to 31 breaks / 50 repaired. 7/ The methods we used (GBDT, a little clever GIS trickery) aren't revolutionary. The key part of this project was that it took subject matter experts in data science, local government, and water maintenance and asked what they could do. 8/ In particular, our data was very messy. When water mains were laid, meticulous notes were kept about them. But they were laid in the late 1800s, so those notes look like this: 9/ Without Sam, Adria, Joe, and Andrew, this sort of data would never have been able to go anywhere. 10/ Nowadays I'm doing more of this sort of work with @TheLab_DC, finding housing violations, getting people enrolled for benefits, and tackling the rat problem in the District. If you're interested in this kind of work, hit me up. 11/11",https://arxiv.org/abs/1805.03597,"Water infrastructure in the United States is beginning to show its age, particularly through water main breaks. Main breaks cause major disruptions in everyday life for residents and businesses. Water main failures in Syracuse, N.Y. (as in most cities) are handled reactively rather than proactively. A barrier to proactive maintenance is the city's inability to predict the risk of failure on parts of its infrastructure. In response, we worked with the city to build a ML system to assess the risk of a water mains breaking. Using historical data on which mains have failed, descriptors of pipes, and other data sources, we evaluated several models' abilities to predict breaks three years into the future. Our results show that our system using gradient boosted decision trees performed the best out of several algorithms and expert heuristics, achieving precision at 1\% (P@1) of 0.62. Our model outperforms a random baseline (P@1 of 0.08) and expert heuristics such as water main age (P@1 of 0.10) and history of past main breaks (P@1 of 0.48). The model is deployed in the City of Syracuse. 
We are running a pilot by calculating the risk of failure for each city block over the period 2016-2018 using data up to the end of 2015 and, as of the end of 2017, there have been 33 breaks on our riskiest 52 mains. This has been a successful initiative for the city of Syracuse in improving their infrastructure and we believe this approach can be applied to other cities. ","Using Machine Learning to Assess the Risk of and Prevent Water Main
Breaks",11,"['Old project new paper acceptance! ""Using Machine Learning to Assess the Risk of and Prevent Water Main Breaks"" @ KDD. 1/', 'This was a project at @datascifellows. We had a _lot_ of co-authors (which I think is awesome). @samedelstein, @adriaf, @CKenneyJr, @rayidghani, Avishek Kumar, Ali Rizvi, Ben Brooks, Ali Vanderveld, Andrew Maxwell, and Joe Zuckerbraun 2/', ""Water main breaks are a huge problem in Rust Belt cities. The infrastructure was built assuming a growing population and growing suburanization. The tax base hasn't kept up with original projections, so maintenance $$$s are declining. 3/ https://t.co/atEYdRrgdf"", ""On the other hand, people aren't moving into the city core, so cities can't simply decomission parts of their infrastructure without cutting water off to residents. The result? https://t.co/3a8Nx8a8Tw 4/"", 'We also know that preventive maintenance is chepar than reactive maintenance. A bunch of restaurants with no water for a day is a _really bad thing_. So how should we order maintenance? 5/', ""A first pass indicates that if you just replaced 50 pipes that have broken in the past, you would stop about 24 water main breaks. This indicates that the way that pipes are temporarily repaired doesn't hold for very long. 6/"", 'On the other hand, a lot of pipes have broken in the past. Can we do even better? Accounting for pipe age, diameter, and material, as well as the surrounding soil type, we could go from preventing 24 breaks to 31 breaks / 50 repaired. 7/ https://t.co/V1Pj6Picq4', ""The methods we used (GBDT, a little clever GIS trickery) aren't revolutionary. The key part of this project was that it took subject matter experts in data science, local government, and water maintenance and asked what they could do. 8/"", 'In particular, our data was very messy. When water mains were laid, meticulous notes were kept about them. But they were laid in the late 1800s, so those notes look like this: 9/ https://t.co/jtyUAlDOFI', 'Without Sam, Adria, Joe, and Andrew, this sort of data would never have been able to go anywhere. 10/', ""Nowadays I'm doing more of this sort of work with @TheLab_DC, finding housing violations, getting people enrolled for benefits, and tackling the rat problem in the District. If you're interested in this kind of work, hit me up. 11/11""]",18,05,2217
139,124,1001983568384577538,1152296594,Swabha Swayamdipta,"Our #ACL2018 paper with Phoebe Mulcaire is now available on arXiv: . We find that combining languages for training SRL systems can tricky, because of differences between the languages, and the diverse annotation schemes used in the CoNLL 2009 shared task.",https://arxiv.org/abs/1805.11598,"Previous approaches to multilingual semantic dependency parsing treat languages independently, without exploiting the similarities between semantic structures across languages. We experiment with a new approach where we combine resources from a pair of languages in the CoNLL 2009 shared task to build a polyglot semantic role labeler. Notwithstanding the absence of parallel data, and the dissimilarity in annotations between languages, our approach results in an improvement in SRL performance on multiple languages over a monolingual baseline. Analysis of the polyglot model shows it to be advantageous in lower-resource settings. ",Polyglot Semantic Role Labeling,1,"['Our #ACL2018 paper with Phoebe Mulcaire is now available on arXiv: . We find that combining languages for training SRL systems can tricky, because of differences between the languages, and the diverse annotation schemes used in the CoNLL 2009 shared task.']",18,05,261
140,18,1134630118930997249,24141244,Karl Krauth,"New paper with @stephenltu and @beenwrekt on analyzing the sample complexity of policy iteration on the linear quadratic regulator. This paper spans ideas from control theory, reinforcement learning, and nonasymptotic statistics. It might be quite hard to parse if you haven't encountered some of those ideas before. In case you're having a hard time understanding it here's a tl;dr instead. There's recently been a lot of interest in analyzing how many samples RL algorithms need to perform well. Unfortunately we don't have the mathematical tools to analyze these algorithms in full generality. So instead we study specialized problems. One such problem is called the linear quadratic regulator (LQR) which is a classic continuous controls problem with a known optimal solution. The continuous part is very important since continuous RL has seen much less analysis than its discrete counterpart. In our paper we study policy iteration (PI) when applied to LQR. Lucky for us specializing PI to LQR just involves taking this paper: and making a few variable substitutions. Most of the work is in proving bounds on the algorithm's sample complexity. That's where ideas from statistics come into play. Thanks to recently published results we're able to provide a non-asymptotic analysis that assumes very little of our problem. So what's the result of our analysis? Well first of all we show that the number of samples we need for PI has a quadratic (up to logarithmic factors) dependence on how close we want to get to the optimal policy. Second we show that when we run PI and compare it against the optimal policy we won't be too worse off in terms of cost. In the language of multi-armed bandits we say that our algorithm exhibits sublinear regret. In particular we show that the regret grows at a rate of at most T^(2/3) where T is the number of iterations we run for. This is in contrast to model-based methods that are able to achieve a regret of T^(1/2). However our work doesn't definitively say that PI can't achieve a rate of T^(1/2). We only provide an upper bound on regret. It'd be very exciting if someone derived a corresponding lower bound to answer this question!",https://arxiv.org/abs/1905.12842,"We study the sample complexity of approximate policy iteration (PI) for the Linear Quadratic Regulator (LQR), building on a recent line of work using LQR as a testbed to understand the limits of reinforcement learning (RL) algorithms on continuous control tasks. Our analysis quantifies the tension between policy improvement and policy evaluation, and suggests that policy evaluation is the dominant factor in terms of sample complexity. Specifically, we show that to obtain a controller that is within $\varepsilon$ of the optimal LQR controller, each step of policy evaluation requires at most $(n+d)^3/\varepsilon^2$ samples, where $n$ is the dimension of the state vector and $d$ is the dimension of the input vector. On the other hand, only $\log(1/\varepsilon)$ policy improvement steps suffice, resulting in an overall sample complexity of $(n+d)^3 \varepsilon^{-2} \log(1/\varepsilon)$. We furthermore build on our analysis and construct a simple adaptive procedure based on $\varepsilon$-greedy exploration which relies on approximate PI as a sub-routine and obtains $T^{2/3}$ regret, improving upon a recent result of Abbasi-Yadkori et al. ","Finite-time Analysis of Approximate Policy Iteration for the Linear
Quadratic Regulator",10,"['New paper with @stephenltu and @beenwrekt on analyzing the sample complexity of policy iteration on the linear quadratic regulator. ', ""This paper spans ideas from control theory, reinforcement learning, and nonasymptotic statistics. It might be quite hard to parse if you haven't encountered some of those ideas before. In case you're having a hard time understanding it here's a tl;dr instead."", ""There's recently been a lot of interest in analyzing how many samples RL algorithms need to perform well. Unfortunately we don't have the mathematical tools to analyze these algorithms in full generality. So instead we study specialized problems."", 'One such problem is called the linear quadratic regulator (LQR) which is a classic continuous controls problem with a known optimal solution. The continuous part is very important since continuous RL has seen much less analysis than its discrete counterpart.', ""In our paper we study policy iteration (PI) when applied to LQR. Lucky for us specializing PI to LQR just involves taking this paper: https://t.co/AU0jCMmF0s and making a few variable substitutions. Most of the work is in proving bounds on the algorithm's sample complexity."", ""That's where ideas from statistics come into play. Thanks to recently published results we're able to provide a non-asymptotic analysis that assumes very little of our problem."", ""So what's the result of our analysis? Well first of all we show that the number of samples we need for PI has a quadratic (up to logarithmic factors) dependence on how close we want to get to the optimal policy."", ""Second we show that when we run PI and compare it against the optimal policy we won't be too worse off in terms of cost. In the language of multi-armed bandits we say that our algorithm exhibits sublinear regret."", 'In particular we show that the regret grows at a rate of at most T^(2/3) where T is the number of iterations we run for. This is in contrast to model-based methods that are able to achieve a regret of T^(1/2).', ""However our work doesn't definitively say that PI can't achieve a rate of T^(1/2). We only provide an upper bound on regret. It'd be very exciting if someone derived a corresponding lower bound to answer this question!""]",19,05,2193
141,74,1294311608227856385,56113666,Mengye Ren,"Towards more interactive #selfdriving, we propose a new motion forecasting network based on the transformer architecture to explicitly model interaction among actors. Check out our recent IROS'20 paper, available on arXiv: #SelfDrivingCars @uber @uberatg Joint work with Lingyun (Luke) Li, Bin Yang, Ming Liang @zengwenyuan1995 @seanseg @RaquelUrtasun",https://arxiv.org/abs/2008.05927,"In this paper, we tackle the problem of detecting objects in 3D and forecasting their future motion in the context of self-driving. Towards this goal, we design a novel approach that explicitly takes into account the interactions between actors. To capture their spatial-temporal dependencies, we propose a recurrent neural network with a novel Transformer architecture, which we call the Interaction Transformer. Importantly, our model can be trained end-to-end, and runs in real-time. We validate our approach on two challenging real-world datasets: ATG4D and nuScenes. We show that our approach can outperform the state-of-the-art on both datasets. In particular, we significantly improve the social compliance between the estimated future trajectories, resulting in far fewer collisions between the predicted actors. ","End-to-end Contextual Perception and Prediction with Interaction
Transformer",2,"[""Towards more interactive #selfdriving, we propose a new motion forecasting network based on the transformer architecture to explicitly model interaction among actors. Check out our recent IROS'20 paper, available on arXiv:\n\n\n#SelfDrivingCars @uber @uberatg "", 'Joint work with Lingyun (Luke) Li, Bin Yang, Ming Liang @zengwenyuan1995 @seanseg @RaquelUrtasun']",20,08,365
142,65,1361663915072118784,806058672619212800,Guillaume Lample,"New paper on code de-obfuscation: We show that if you obfuscate the name of identifiers in source code, a model can retrieve the original names with very high accuracy. It even works when you remove the name of each variable / function! 1/3 Retrieving masked variable names is much more difficult than retrieving randomly masked tokens, and turns out to be an excellent pretraining objective, that significantly outperforms Masked LM in unsupervised code translation and language code search. 2/3 Also very useful if you try to understand other people's code when it is written with uninformative variable names! With @b_roziere @MaLachaux @MarcSzafraniec 3/3 ",https://arxiv.org/abs/2102.07492,"Recent advances in self-supervised learning have dramatically improved the state of the art on a wide variety of tasks. However, research in language model pre-training has mostly focused on natural languages, and it is unclear whether models like BERT and its variants provide the best pre-training when applied to other modalities, such as source code. In this paper, we introduce a new pre-training objective, DOBF, that leverages the structural aspect of programming languages and pre-trains a model to recover the original version of obfuscated source code. We show that models pre-trained with DOBF significantly outperform existing approaches on multiple downstream tasks, providing relative improvements of up to 13% in unsupervised code translation, and 24% in natural language code search. Incidentally, we found that our pre-trained model is able to de-obfuscate fully obfuscated source files, and to suggest descriptive variable names. ",DOBF: A Deobfuscation Pre-Training Objective for Programming Languages,3,"['New paper on code de-obfuscation: \nWe show that if you obfuscate the name of identifiers in source code, a model can retrieve the original names with very high accuracy. It even works when you remove the name of each variable / function! 1/3 ', 'Retrieving masked variable names is much more difficult than retrieving randomly masked tokens, and turns out to be an excellent pretraining objective, that significantly outperforms Masked LM in unsupervised code translation and language code search. 2/3 https://t.co/7l8VWDP9OW', ""Also very useful if you try to understand other people's code when it is written with uninformative variable names!\nWith @b_roziere @MaLachaux @MarcSzafraniec 3/3 https://t.co/NF0qG68REm""]",21,02,687
143,121,1456076469772378114,1653127615,Chris Monahan,"Our new paper just arrived on the arXiv ! We study how the momentum carried by quarks inside a neutron is related to their spin (relative to the neutron), directly from the theory of the strong nuclear force (which holds the proton together). #physics #QCD @ZhongboK The neutron is very special! :) Yes, you are completely right, in the paper we discuss nucleons, but I thought maybe on twitter more people would have heard of a neutron than a nucleon. Of course that doesn't account for the fact that my followers are basically all physicists...",https://arxiv.org/abs/2111.01808,"We present a determination of the non-singlet transversity parton distribution function (PDF) of the nucleon, normalized with respect to the tensor charge at $\mu^2=2$ GeV$^2$ from lattice quantum chromodynamics. We apply the pseudo-distribution approach, using a gauge ensemble with a lattice spacing of 0.094 fm and the light quark mass tuned to a pion mass of 358 MeV. We extract the transversity PDF from the analysis of the short-distance behavior of the Ioffe-time pseudo-distribution using the leading-twist next-to-leading order (NLO) matching coefficients calculated for transversity. We reconstruct the $x$-dependence of the transversity PDF through an expansion in a basis of Jacobi polynomials in order to reduce the PDF ansatz dependence. Within the limitations imposed by a heavier-than-physical pion mass and a fixed lattice spacing, we present a comparison of our estimate for the valence transversity PDF with the recent global fit results based on single transverse spin asymmetry. We find the intrinsic nucleon sea to be isospin symmetric with respect to transversity. ","The transversity parton distribution function of the nucleon using the
pseudo-distribution approach",2,"['Our new paper just arrived on the arXiv ! We study how the momentum carried by quarks inside a neutron is related to their spin (relative to the neutron), directly from the theory of the strong nuclear force (which holds the proton together). #physics #QCD', ""@ZhongboK The neutron is very special! :) Yes, you are completely right, in the paper we discuss nucleons, but I thought maybe on twitter more people would have heard of a neutron than a nucleon. Of course that doesn't account for the fact that my followers are basically all physicists...""]",21,11,552
144,39,984715238422364161,234398193,Hal Tasaki,"Hal Tasaki ""Topological phase transition and Z2 index for S=1 quantum spin chains"" My new paper on #QuantumSpin chains, which contains the first rigorous proof that the AKLT model is in a nontrivial SPT phase. The argument, which I like, is quite pretty. ",https://arxiv.org/abs/1804.04337,"We study $S=1$ quantum spin systems on the infinite chain with short ranged Hamiltonians which have certain rotational and discrete symmetry. We define a $\mathbb{Z}_2$ index for any gapped unique ground state, and prove that it is invariant under smooth deformation. By using the index, we provide the first rigorous proof of the existence of a ""topological"" phase transition, which cannot be characterized by any conventional order parameters, between the AKLT ground state and trivial ground states. This rigorously establishes that the AKLT model is in a nontrivial symmetry protected topological phase. ","Topological phase transition and $\mathbb{Z}_2$ index for $S=1$ quantum
spin chains",1,"['Hal Tasaki\n""Topological phase transition and Z2 index for S=1 quantum spin chains""\nMy new paper on #QuantumSpin chains, which contains the first rigorous proof that the AKLT model is in a nontrivial SPT phase. The argument, which I like, is quite pretty.\n']",18,04,261
145,12,1234522555995783168,88806960,Dr. Vivienne Baldassare,"New paper out today on an unusually long lived transient in the tiny blue compact dwarf galaxy galaxy PHL 293B (). This paper was led by Colin Burke, a grad student at UIUC. This mysterious source was lots fun to work on! PHL 293B weighs a few tens of millions of solar masses and is extremely metal poor (~1/10th of solar metallicity). The image below is from the Legacy Survey (). PHL 293B was found to have broad emission lines with blue-shifted absorption (800 km/s) in its SDSS spectroscopy in 2000. The broad emission was persistent for at least 10 years, and the source of the broad lines has been the topic of a lot of back and forth in the literature. Broad lines, blue shifted absorption, but no strong optical variability or outbursts. Active galactic nucleus? Luminous blue variable? Stellar wind? Nothing seemed to quite fit the observational properties of PHL 293B. In my 2018 paper searching for active galactic nuclei in Stripe 82 through optical photometric variability, I found this object to have very low-level variability. PHL 293B also turned up as variable in a similar search being done by Colin using the Dark Energy Survey. We joined forces, and in our combined 20 year light curve, we see a very slow fading. This was a newly discovered feature of PHL 293B! We decided to investigate further, and obtained new optical spectroscopy with Gemini in December 2019. This revealed that the broad H-alpha emission had finally faded! Between the (1) broad H-alpha emission lasting over a decade (2) recent disappearance of the broad lines (3) blue shifted absorption lines and (4) low-level photometric variability indicating a slow fade, we think that PHL 293B hosted an extremely long lived Type IIn supernova. Broad lines from supernovae don't usually stick around for a decade+, but there are some indications that these extremely long lived transients are more common in these metal poor, gas-rich environments. This really demonstrates the power of decades long light curves and multi-epoch spectroscopy! You can read lots more about the various scenarios we explored for PHL 293B here: Special thank you to @GeminiObs for granting us Director's Discretionary time! This work also uses repeat imaging data from @sdssurveys and @theDESurvey.",https://arxiv.org/abs/2002.12369,"We report on small-amplitude optical variability and recent dissipation of the unusually persistent broad emission lines in the blue compact dwarf galaxy PHL 293B. The galaxy's unusual spectral features (P Cygni-like profiles with $\sim$800 km s$^{-1}$ blueshifted absorption lines) have resulted in conflicting interpretations of the nature of this source in the literature. However, analysis of new Gemini spectroscopy reveals the broad emission has begun to fade after being persistent for over a decade prior. Precise difference imaging light curves constructed with the Sloan Digital Sky Survey and the Dark Energy Survey reveal small-amplitude optical variability of $\sim$0.1 mag in the g band offset by $100\pm21$ pc from the brightest pixel of the host. The light curve is well-described by an active galactic nuclei (AGN)-like damped random walk process. However, we conclude that the origin of the optical variability and spectral features of PHL 293B is due to a long-lived stellar transient, likely a Type IIn supernova or non-terminal outburst, mimicking long-term AGN-like variability. 
This work highlights the challenges of discriminating between scenarios in such extreme environments, relevant to searches for AGNs in dwarf galaxies. This is the second long-lived transient discovered in a blue compact dwarf, after SDSS1133. Our result implies such long-lived stellar transients may be more common in metal-deficient galaxies. Systematic searches for low-level variability in dwarf galaxies will be possible with the upcoming Legacy Survey of Space and Time at Vera C. Rubin Observatory. ","The Curious Case of PHL 293B: A Long-Lived Transient in a Metal-Poor
Blue Compact Dwarf Galaxy",11,"['New paper out today on an unusually long lived transient in the tiny blue compact dwarf galaxy galaxy PHL 293B (). This paper was led by Colin Burke, a grad student at UIUC. This mysterious source was lots fun to work on!', 'PHL 293B weighs a few tens of millions of solar masses and is extremely metal poor (~1/10th of solar metallicity). The image below is from the Legacy Survey (https://t.co/sVOtCBrAqw). https://t.co/vEJ1ZoHJH5', 'PHL 293B was found to have broad emission lines with blue-shifted absorption (800 km/s) in its SDSS spectroscopy in 2000. The broad emission was persistent for at least 10 years, and the source of the broad lines has been the topic of a lot of back and forth in the literature.', 'Broad lines, blue shifted absorption, but no strong optical variability or outbursts. Active galactic nucleus? Luminous blue variable? Stellar wind? Nothing seemed to quite fit the observational properties of PHL 293B.', 'In my 2018 paper searching for active galactic nuclei in Stripe 82 through optical photometric variability, I found this object to have very low-level variability. PHL 293B also turned up as variable in a similar search being done by Colin using the Dark Energy Survey.', 'We joined forces, and in our combined 20 year light curve, we see a very slow fading. This was a newly discovered feature of PHL 293B! https://t.co/lt5DrbrV8T', 'We decided to investigate further, and obtained new optical spectroscopy with Gemini in December 2019. This revealed that the broad H-alpha emission had finally faded! https://t.co/fJ4IxiQoa1', 'Between the (1) broad H-alpha emission lasting over a decade (2) recent disappearance of the broad lines (3) blue shifted absorption lines and (4) low-level photometric variability indicating a slow fade, we think that PHL 293B hosted an extremely long lived Type IIn supernova.', ""Broad lines from supernovae don't usually stick around for a decade+, but there are some indications that these extremely long lived transients are more common in these metal poor, gas-rich environments."", 'This really demonstrates the power of decades long light curves and multi-epoch spectroscopy! You can read lots more about the various scenarios we explored for PHL 293B here: https://t.co/fSVnxcEdGp', ""Special thank you to @GeminiObs for granting us Director's Discretionary time! This work also uses repeat imaging data from @sdssurveys and @theDESurvey.""]",20,02,2305
146,80,1470420339536678920,1368538327809409025,Johan Henriksson,In this new paper with Ashish Kakkar and Brian McPeak we present an explicit construction of higher-genus partition functions of 2d CFTs deriving from error-correcting codes. Hopefully this will give some new handles (pun intended) on the modular bootstrap ,https://arxiv.org/abs/2112.05168,"Higher genus modular invariance of two-dimensional conformal field theories (CFTs) is a largely unexplored area. In this paper, we derive explicit expressions for the higher genus partition functions of a specific class of CFTs: code CFTs, which are constructed using classical error-correcting codes. In this setting, the $\mathrm{Sp}(2g,\mathbb Z)$ modular transformations of genus $g$ Riemann surfaces can be recast as a simple set of linear maps acting on $2^g$ polynomial variables, which comprise an object called the code enumerator polynomial. The CFT partition function is directly related to the enumerator polynomial, meaning that solutions of the linear constraints from modular invariance immediately give a set of seemingly consistent partition functions at a given genus. We then find that higher genus constraints, plus consistency under degeneration limits of the Riemann surface, greatly reduces the number of possible code CFTs. This work provides a step towards a full understanding of the constraints from higher genus modular invariance on 2d CFTs. ",Classical Codes and Chiral CFTs at Higher Genus,1,['In this new paper with Ashish Kakkar and Brian McPeak we present an explicit construction of higher-genus partition functions of 2d CFTs deriving from error-correcting codes. Hopefully this will give some new handles (pun intended) on the modular bootstrap '],21,12,270
147,21,1134402279656828928,872072000906424321,Martin Mundt,"Maybe my favorite figure from our new continual learning paper: I find it cool how our approach can decide if samples from the prior will generate ambiguous or class interpolated images that lead to confusion, before even decoding. Helps quite a bit. ",https://arxiv.org/abs/1905.12019,"Modern deep neural networks are well known to be brittle in the face of unknown data instances and recognition of the latter remains a challenge. Although it is inevitable for continual-learning systems to encounter such unseen concepts, the corresponding literature appears to nonetheless focus primarily on alleviating catastrophic interference with learned representations. In this work, we introduce a probabilistic approach that connects these perspectives based on variational inference in a single deep autoencoder model. Specifically, we propose to bound the approximate posterior by fitting regions of high density on the basis of correctly classified data points. These bounds are shown to serve a dual purpose: unseen unknown out-of-distribution data can be distinguished from already trained known tasks towards robust application. Simultaneously, to retain already acquired knowledge, a generative replay process can be narrowed to strictly in-distribution samples, in order to significantly alleviate catastrophic interference. ","Unified Probabilistic Deep Continual Learning through Generative Replay
and Open Set Recognition",1,"['Maybe my favorite figure from our new continual learning paper: \n\nI find it cool how our approach can decide if samples from the prior will generate ambiguous or class interpolated images that lead to confusion, before even decoding. Helps quite a bit. ']",19,05,265
148,180,1494636885674799107,1247125153110204416,Roland Szakacs,"Paper day! ""The Column Densities of Molecular Gas across Cosmic Time: Bridging Observations and Simulations"" is accepted for publication in #MNRAS! In this paper we study the evolution of the molecular gas column density distribution in sim. and obs.: ",https://arxiv.org/abs/2202.08777,"Observations of the cosmic evolution of different gas phases across time indicate a marked increase in the molecular gas mass density towards $z\sim 2-3$. Such a transformation implies an accompanied change in the global distribution of molecular hydrogen column densities ($N_{\rm{H_2}}$). Using observations by PHANGS-ALMA/SDSS and simulations by GRIFFIN/IllustrisTNG we explore the evolution of this H$_2$ column density distribution function [$f(N_{\rm{H}_2})$]. The H$_2$ (and HI) column density maps for TNG50 and TNG100 are derived in post-processing and are made available through the IllustrisTNG online API. The shape and normalization of $f(N_{\rm{H}_2})$ of individual main-sequence star-forming galaxies are correlated with the star formation rate (SFR), stellar mass (${M_*}$), and H$_2$ mass ($M_{\rm{H}_2}$) in both observations and simulations. TNG100, combined with H$_2$ post-processing models, broadly reproduces observations, albeit with differences in slope and normalization. Also, an analytically modelled $f(N)$, based on exponential gas disks, matches well with the simulations. The GRIFFIN simulation gives first indications that the slope of $f(N_{\rm{H}_2})$ might not majorly differ when including non-equilibrium chemistry in simulations. The $f(N_{\rm{H}_2})$ by TNG100 implies that higher molecular gas column densities are reached at $z=3$ than at $z=0$. Further, denser regions contribute more to the molecular mass density at $z=3$. Finally, H$_2$ starts dominating compared to HI only at column densities above log($N_{\rm{H}_2} / \rm{cm}^{-2}) \sim 21.8-22$ at both redshifts. These results imply that neutral atomic gas is an important contributor to the overall cold gas mass found in the ISM of galaxies including at densities typical for molecular clouds at $z=0$ and $z=3$. ","The Column Densities of Molecular Gas across Cosmic Time: Bridging
Observations and Simulations",1,"['Paper day! ""The Column Densities of Molecular Gas across Cosmic Time: Bridging Observations and Simulations"" is accepted for publication in #MNRAS! In this paper we study the evolution of the molecular gas column density distribution in sim. and obs.: ']",22,02,258
149,6,1324312287402987522,797888987675365377,Tom Rainforth,"Improving Transformation Invariance in Contrastive Representation Learning, our new paper led by @AdamEFoster & @rpukdeee. We strengthen representation invariance in contrastive learning using training gradient regularization & test-time feature averaging ",https://arxiv.org/abs/2010.09515,"We propose methods to strengthen the invariance properties of representations obtained by contrastive learning. While existing approaches implicitly induce a degree of invariance as representations are learned, we look to more directly enforce invariance in the encoding process. To this end, we first introduce a training objective for contrastive learning that uses a novel regularizer to control how the representation changes under transformation. We show that representations trained with this objective perform better on downstream tasks and are more robust to the introduction of nuisance transformations at test time. Second, we propose a change to how test time representations are generated by introducing a feature averaging approach that combines encodings from multiple transformations of the original input, finding that this leads to across the board performance gains. Finally, we introduce the novel Spirograph dataset to explore our ideas in the context of a differentiable generative process with multiple downstream tasks, showing that our techniques for learning invariance are highly beneficial. ","Improving Transformation Invariance in Contrastive Representation
Learning",1,"['Improving Transformation Invariance in Contrastive Representation Learning, our new paper led by @AdamEFoster & @rpukdeee. We strengthen representation invariance in contrastive learning using training gradient regularization & test-time feature averaging\n\n ']",20,10,269
150,2,1037394871718301696,16495124,Sam Thomson,"WFSAs are dead. Long live WFSAs! Our new EMNLP paper ""Rational Recurrences"" is out: Work by Hao Peng, @royschwartz02, me, and @nlpnoah. We analyze several recent RNNs as finite-state automata with neural transition weights. These include quasi-RNNs (Bradbury et al), strongly-typed RNNs (Balduzzi & Ghifary), simple recurrent units (Lei et al), structurally constrained RNNs (Mikolov et al), input switched affine networks (Foerster et al)... *hand-waving* forget gates are self-loops!",https://arxiv.org/abs/1808.09357,"Despite the tremendous empirical success of neural models in natural language processing, many of them lack the strong intuitions that accompany classical machine learning approaches. Recently, connections have been shown between convolutional neural networks (CNNs) and weighted finite state automata (WFSAs), leading to new interpretations and insights. In this work, we show that some recurrent neural networks also share this connection to WFSAs. We characterize this connection formally, defining rational recurrences to be recurrent hidden state update functions that can be written as the Forward calculation of a finite set of WFSAs. We show that several recent neural models use rational recurrences. Our analysis provides a fresh view of these models and facilitates devising new neural architectures that draw inspiration from WFSAs. We present one such model, which performs better than two recent baselines on language modeling and text classification. Our results demonstrate that transferring intuitions from classical models like WFSAs can be an effective approach to designing and understanding neural models. ",Rational Recurrences,3,"['WFSAs are dead. Long live WFSAs!\n\nOur new EMNLP paper ""Rational Recurrences"" is out: \nWork by Hao Peng, @royschwartz02, me, and @nlpnoah. We analyze several recent RNNs as finite-state automata with neural transition weights. ', 'These include quasi-RNNs (Bradbury et al), strongly-typed RNNs (Balduzzi & Ghifary), simple recurrent units (Lei et al), structurally constrained RNNs (Mikolov et al), input switched affine networks (Foerster et al)...', '*hand-waving* forget gates are self-loops!']",18,08,499
151,124,1403147732215156745,960527787428900864,Michael Albergo,"*New paper* with the @MIT crew, @DaniloJRezende @sracaniere @DeepMind , @KyleCranmer and Julian Urban! We construct normalizing flows that are compatible with sampling path integrals of quantum field theories involving fermions. Fermions make this tricky! A primary goal in lattice field theory is to compute path integral expectations arising from the ""action"" of the theory. For a theory involving bosons and fermions, that may look something like this equation. These integrals are generally evaluated with Markov chain Monte Carlo: The second set of integration measures here refer to integrals over the fermion fields ψ, which aren't like bosonic, scalar fields. They are defined by Grassmann numbers, which are elements of the exterior algebra over the complex numbers. Grassmann numbers anti-commute! Because of this, the ""Grassmann"" part of these integrals can be analytically solved, which sounds like a blessing but can make the rest of the MCMC process difficult! For theories that we want to study, this means we have to compute the determinant of some pretty gnarly matrices: We present a number of MCMC samplers to deal with this fact that primarily rely on the equivalence of these determinant calculations with a Gaussian integral over a set of auxiliary fields that are called ""pseudofermions"": We then construct a variety of normalizing flows compatible with each of the samplers, making sure each captures the translational equivariance quirks (anti-periodicity in time) that arise from the fermionic degrees of freedom: We apply these ideas to a Yukawa theory, where bosons are coupled to mass-degenerate fermions, to show how it works! These tools should be applicable to a bunch of problems in lattice field theory and condensed matter, so we are excited to see where it takes us :)",https://arxiv.org/abs/2106.05934,"Algorithms based on normalizing flows are emerging as promising machine learning approaches to sampling complicated probability distributions in a way that can be made asymptotically exact. In the context of lattice field theory, proof-of-principle studies have demonstrated the effectiveness of this approach for scalar theories, gauge theories, and statistical systems. This work develops approaches that enable flow-based sampling of theories with dynamical fermions, which is necessary for the technique to be applied to lattice field theory studies of the Standard Model of particle physics and many condensed matter systems. As a practical demonstration, these methods are applied to the sampling of field configurations for a two-dimensional theory of massless staggered fermions coupled to a scalar field via a Yukawa interaction. ",Flow-based sampling for fermionic lattice field theories,8,"['*New paper* with the @MIT crew, @DaniloJRezende @sracaniere @DeepMind , @KyleCranmer and Julian Urban! We construct normalizing flows that are compatible with sampling path integrals of quantum field theories involving fermions. Fermions make this tricky! ', 'A primary goal in lattice field theory is to compute path integral expectations arising from the ""action"" of the theory. For a theory involving bosons and fermions, that may look something like this equation. These integrals are generally evaluated with Markov chain Monte Carlo: https://t.co/HpNlUqfo3Y', ""The second set of integration measures here refer to integrals over the fermion fields ψ, which aren't like bosonic, scalar fields. 
They are defined by Grassmann numbers, which are elements of the exterior algebra over the complex numbers. Grassmann numbers anti-commute!"", 'Because of this, the ""Grassmann"" part of these integrals can be analytically solved, which sounds like a blessing but can make the rest of the MCMC process difficult! For theories that we want to study, this means we have to compute the determinant of some pretty gnarly matrices: https://t.co/116GNPqwl1', 'We present a number of MCMC samplers to deal with this fact that primarily rely on the equivalence of these determinant calculations with a Gaussian integral over a set of auxiliary fields that are called ""pseudofermions"": https://t.co/h9ehhuTdMW', 'We then construct a variety of normalizing flows compatible with each of the samplers, making sure each captures the translational equivariance quirks (anti-periodicity in time) that arise from the fermionic degrees of freedom: https://t.co/TrcDd3Vwko', 'We apply these ideas to a Yukawa theory, where bosons are coupled to mass-degenerate fermions, to show how it works! https://t.co/rwmLqC68Rb', 'These tools should be applicable to a bunch of problems in lattice field theory and condensed matter, so we are excited to see where it takes us :)']",21,06,1853
152,29,1134082317629231106,785284514947932160,Thomas Davidson,"New paper ""Racial Bias in Hate Speech and Abusive Language Detection Datasets"" w @CornellCIS undergrad Debasmita Bhattacharya and @ingmarweber has been accepted for @AbusiveLangWS at ACL 2019 1/ We show that models trained on these datasets to detect hate speech and abuse may likely to discriminate against African-Americans 2/ We use five different datasets to train classifiers to predict different types of abusive language. Using data from we then compare how the classifiers perform on tweets written in language used by African-Americans and whites 3/ We find substantial and systematic bias, with AA tweets more frequently classified as negative classes, e.g. hate speech and sexism. This is something of a mea culpa, as I helped to build one of these datasets. Bias can enter into training data despite good intentions 4/ We expect bias is the result of a combination of factors including overrepresentation of language used by AAs in data, (implicit) biases of annotators, and imbalanced training data 5/ A lot more work is necessary to ensure that systems designed to detect one type of bias do not perpetuate another /end Of course there is a typo but I'm not redoing the thread. Second tweet should read: ""We show that models trained on datasets to detect hate speech and abuse may be likely to discriminate against African-Americans""",https://arxiv.org/abs/1905.12516,"Technologies for abusive language detection are being developed and applied with little consideration of their potential biases. We examine racial bias in five different sets of Twitter data annotated for hate speech and abusive language. We train classifiers on these datasets and compare the predictions of these classifiers on tweets written in African-American English with those written in Standard American English. The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates. If these abusive language detection systems are used in the field they will therefore have a disproportionate negative impact on African-American social media users. Consequently, these systems may discriminate against the groups who are often the targets of the abuse we are trying to detect. ",Racial Bias in Hate Speech and Abusive Language Detection Datasets,7,"['New paper ""Racial Bias in Hate Speech and Abusive Language Detection Datasets"" w @CornellCIS undergrad Debasmita Bhattacharya and @ingmarweber has been accepted for @AbusiveLangWS at ACL 2019 1/', 'We show that models trained on these datasets to detect hate speech and abuse may likely to discriminate against African-Americans 2/', 'We use five different datasets to train classifiers to predict different types of abusive language.\n\nUsing data from https://t.co/nsJWwFE2gw we then compare how the classifiers perform on tweets written in language used by African-Americans and whites 3/', 'We find substantial and systematic bias, with AA tweets more frequently classified as negative classes, e.g. hate speech and sexism.\nThis is something of a mea culpa, as I helped to build one of these datasets. 
Bias can enter into training data despite good intentions 4/', 'We expect bias is the result of a combination of factors including overrepresentation of language used by AAs in data, (implicit) biases of annotators, and imbalanced training data 5/', 'A lot more work is necessary to ensure that systems designed to detect one type of bias do not perpetuate another /end', 'Of course there is a typo but I\'m not redoing the thread. Second tweet should read: ""We show that models trained on datasets to detect hate speech and abuse may be likely to discriminate against African-Americans""']",19,05,1361
153,97,1251148473354731521,742027674403672064,Johnny Greco,"PAPER ALERT How far is a galaxy? Almost everything we want to know about it depends on the answer. In our new paper, we take a deep dive into the theory and feasibility of measuring distances to diffuse dwarf galaxies using *imaging data alone*: We use surface brightness fluctuations, which arise from pixel-to-pixel Poisson fluctuations in the number of bright stars in a galaxy image. The more distant the galaxy, the smaller the fluctuations. The effect is shown in this fig. Ω = resolution element. #DistancesAreHard @rareflwr41 For you, anything here's another: We compare our calculations to observations of dwarf galaxies with independent distance measurements based on the TRGB. Single-age stellar pop models are in good agreement with red dwarfs, but blue dwarfs are more consistent with composite stellar pop models. #DistancesAreHard To study SBF in practice, we inject artificial galaxies star-by-star into images from the #HyperSuprimeCam survey using one of my favorite tools: #ArtPop! @DanieliShany #DistancesAreHard There's more, but I'll end with an exciting plot for the future. LSST will be capable of measuring SBF distances to dwarf galaxies out to ~25 Mpc! HUGE thanks to my amazing collaborators @DokkumPieter, @DanieliShany, Scott Carlsten, and Charlie Conroy! @dstndstn Thanks, Dustin! That means a lot coming from you. Not naive all -- You want as many resolution elements as possible, so my expectation is that you want GOOD seeing (and you need to understand your PSF). And for the regime we are interested in, we are already short on pixels! @BenneHolwerda @BenneHolwerda You mean for the gif? I certainly can! @rareflwr41 wanted one too, so I can make a higher resolution version and put it somewhere public. But we are working on making ArtPop open source sometime soon-ish so you can make one too! @JohnFeldmeier Thank you",https://arxiv.org/abs/2004.07273,"We present an in-depth study of surface brightness fluctuations (SBFs) in low-luminosity stellar systems. Using the MIST models, we compute theoretical predictions for absolute SBF magnitudes in the LSST, HST ACS/WFC, and proposed Roman Space Telescope filter systems. We compare our calculations to observed SBF-color relations of systems that span a wide range of age and metallicity. Consistent with previous studies, we find that single-age population models show excellent agreement with observations of low-mass galaxies with $0.5 \lesssim g - i \lesssim 0.9$. For bluer galaxies, the observed relation is better fit by models with composite stellar populations. To study SBF recovery from low-luminosity systems, we perform detailed image simulations in which we inject fully populated model galaxies into deep ground-based images from real observations. Our simulations show that LSST will provide data of sufficient quality and depth to measure SBF magnitudes with precisions of ${\sim}0.2$-0.5 mag in ultra-faint $\left(\mathrm{10^4 \leq M_\star/M_\odot \leq 10^5}\right)$ and low-mass classical (M$_\star\leq10^7$ M$_\odot$) dwarf galaxies out to ${\sim}4$ Mpc and ${\sim}25$ Mpc, respectively, within the first few years of its deep-wide-fast survey. Many significant practical challenges and systematic uncertainties remain, including an irreducible ""sampling scatter"" in the SBFs of ultra-faint dwarfs due to their undersampled stellar mass functions. 
We nonetheless conclude that SBFs in the new generation of wide-field imaging surveys have the potential to play a critical role in the efficient confirmation and characterization of dwarf galaxies in the nearby universe. ","Measuring distances to low-luminosity galaxies using surface brightness
fluctuations",10,"['đšPAPER ALERTđš \n\nHow far is a galaxy? Almost everything we want to know about it depends on the answer. In our new paper, we take a deep dive into the theory and feasibility of measuring distances to diffuse dwarf galaxies using *imaging data alone*: ', 'We use surface brightness fluctuations, which arise from pixel-to-pixel Poisson fluctuations in the number of bright stars in a galaxy image. The more distant the galaxy, the smaller the fluctuations. The effect is shown in this fig. Ω = resolution element. #DistancesAreHard https://t.co/cOzrXKbgKJ', ""@rareflwr41 For you, anythingđhere's another: https://t.co/7C7ILHnGR2"", 'We compare our calculations to observations of dwarf galaxies with independent distance measurements based on the TRGB. Single-age stellar pop models are in good agreement with red dwarfs, but blue dwarfs are more consistent with composite stellar pop models. #DistancesAreHard https://t.co/Ba9ov8xSQQ', 'To study SBF in practice, we inject artificial galaxies star-by-star into images from the #HyperSuprimeCam survey using one of my favorite tools: #ArtPop! @DanieliShany #DistancesAreHard \n\nđđđđ\n\nhttps://t.co/Phq6a86JjG', 'Thereâs more, but Iâll end with an exciting plot for the future. LSST will be capable of measuring SBF distances to dwarf galaxies out to ~25 Mpc! HUGE thanks to my amazing collaborators @DokkumPieter, @DanieliShany, Scott Carlsten, and Charlie Conroy! https://t.co/YKYKw12onM https://t.co/ZgzMNjKsPi', '@dstndstn Thanks, Dustin! That means a lot coming from you. \n\nNot naive all -- You want as many resolution elements as possible, so my expectation is that you want GOOD seeing (and you need to understand your PSF). And for the regime we are interested in, we are already short on pixels!', '@BenneHolwerda https://t.co/cEsKO4zgnC', '@BenneHolwerda You mean for the gif? I certainly can! @rareflwr41 wanted one too, so I can make a higher resolution version and put it somewhere public. But we are working on making ArtPop open source sometime soon-ish so you can make one too!', '@JohnFeldmeier Thank you đ']",20,04,1930
154,172,1524078294458462209,1278822973575577601,Sander Tonkens,"đšNew exciting work: with @SylviaLHerbert We propose a simple yet powerful method to refine approximately valid CBFs (e.g. neural CBFs) using dynamic programming based reachability analysis, to guarantee safety. (1/n) đ§”đ Our method iteratively updates a CBF (pink to green). We prove that our method becomes provably less unsafe with every iteration, and converges to a valid CBF. In practice, we often recover the largest safe set (shaded green) (2/n) We show that we can update conservative CBFs, rendering the safety filter less invasive, while remaining safe! We are particularly excited about leveraging this method on hardware (currently ongoing!) to adapt to sudden changes in the environment (3/n) In addition, we can refine invalid CBFs to guarantee safety. We provide an open-source implementation . Give it a try! We also provide an intuitive CBF library in python: (4/n) ",http://arxiv.org/abs/2204.12507,"Safety filters based on Control Barrier Functions (CBFs) have emerged as a practical tool for the safety-critical control of autonomous systems. These approaches encode safety through a value function and enforce safety by imposing a constraint on the time derivative of this value function. However, synthesizing a valid CBF that is not overly conservative in the presence of input constraints is a notorious challenge. In this work, we propose refining candidate CBFs using formal verification methods to obtain a valid CBF. In particular, we update an expert-synthesized or backup CBF using dynamic programming (DP) based reachability analysis. Our framework guarantees that with every DP iteration the obtained CBF is provably at least as safe as the prior iteration and converges to a valid CBF. Therefore, our proposed method can be used in-the-loop for robotic systems. We demonstrate the practicality of our method to enhance safety and/or reduce conservativeness on a range of nonlinear control-affine systems using various CBF synthesis techniques in simulation. ",Refining Control Barrier Functions through Hamilton-Jacobi Reachability,4,"['đšNew exciting work: with @SylviaLHerbert \n\nWe propose a simple yet powerful method to refine approximately valid CBFs (e.g. neural CBFs) using dynamic programming based reachability analysis, to guarantee safety. \n(1/n) đ§”đ', 'Our method iteratively updates a CBF (pink to green). \n\nWe prove that our method becomes provably less unsafe with every iteration, and converges to a valid CBF. In practice, we often recover the largest safe set (shaded green) (2/n) https://t.co/f0nYvBFSfS', 'We show that we can update conservative CBFs, rendering the safety filter less invasive, while remaining safe! \n\nWe are particularly excited about leveraging this method on hardware (currently ongoing!) to adapt to sudden changes in the environment (3/n) https://t.co/cMmRAOq3ya', 'In addition, we can refine invalid CBFs to guarantee safety. We provide an open-source implementation https://t.co/nwaVQetGzO. Give it a try!\n\nWe also provide an intuitive CBF library in python: https://t.co/omJ5MuKtI5 \n(4/n) https://t.co/7xzaf8oL5n']",22,04,925
155,8,1377983221946839042,1912298966,Dr. L. C. Mayorga,"New paper was accepted with the fabulous @exowanderer and @kevinbstevenson! We observed a secondary eclipse of WASP-43b with HST in the optical! 1/10 WASP-43b is particularly exciting because modeling showed that it had the potential for MgSiO3 clouds or MnS clouds, which have very different reflected light properties. 2/10 We asked for a whole phase curve (24 orbits), but only got awarded an eclipse (4 orbits). The TAC strikes again! We suspected that MgSiO3 clouds would be inhomogeneous because they could be cold trapped (below the observable photosphere). 3/10 Instead of staring at a motionless star, we used a new mode for HST WFC3/UVIS that âscansâ the star across the detector, allowing us to build up lots of signal without saturating our bright target. 4/10 To analyze this new HST mode, @exowanderer built Arctor! a new scanning extraction and analysis pipeline! 5/10 Hereâs what we got!!! No, April Fools. The TAC was probably onto something⊠6/10 We place a 67ppm 3-sigma upper limit on the eclipse depth, a geometric albedo limit (in F350LP) at 0.06. My modeling suggests that only the most lofted cloud case scenario can escape a cold trap at the deepest layer of the atmosphere on the day side. 7/10 From nightside observations, we can be pretty sure that clouds there were limiting the depth that we could probe in the atmosphere. This means that the clouds on WASP-43b are inhomogeneously distributed! 8/10 Model predictions suggest that there is a correlation between planet equilibrium temperature and where this cold trap starts to occur -> causing an albedo drop, because MgSiO3 is normally really bright! (from @V_Parmentier work) 9/10 With WASP-43b proving to be the darkest of worlds, itâs now time to study other exoplanets at different temperatures and gravities to figure out where and how MgSiO3 clouds form. See the paper for more! 10/10 ",https://arxiv.org/abs/2103.16676,"Optical, reflected light eclipse observations provide a direct probe of the exoplanet scattering properties, such as from aerosols. We present here the photometric, reflected light observations of WASP-43b using the HST WFC3/UVIS instrument with the F350LP filter (346-822nm) encompassing the entire optical band. This is the first reflected light, photometric eclipse using UVIS in scanning mode; as such we further detail our scanning extraction and analysis pipeline Arctor. Our HST WFC3/UVIS eclipse light curve for WASP-43 b derived a 3-{\sigma} upper limit of 67 ppm on the eclipse depth, which implies that WASP-43b has a very dark dayside atmosphere. With our atmospheric modeling campaign, we compared our reflected light constraints with predictions from global circulation and cloud models, benchmarked with HST and Spitzer observations of WASP-43b. We infer that we do not detect clouds on the dayside within the pressure levels probed by HST WFC3/UVIS with the F350LP filter (P > 1 bar). This is consistent with the GCM predictions based on previous WASP-43b observations. Dayside emission spectroscopy results from WASP-43b with HST and Spitzer observations are likely to not be significantly affected by contributions from cloud particles. ",The Dark World: A Tale of WASP-43b in Reflected Light with HST WFC3/UVIS,10,"['New paper was accepted with the fabulous @exowanderer and @kevinbstevenson! \n\nWe observed a secondary eclipse of WASP-43b with HST in the optical! 
1/10 ', 'WASP-43b is particularly exciting because modeling showed that it had the potential for MgSiO3 clouds or MnS clouds, which have very different reflected light properties. 2/10 https://t.co/WpN5lmKpT2', 'We asked for a whole phase curve (24 orbits), but only got awarded an eclipse (4 orbits). The TAC strikes again! We suspected that MgSiO3 clouds would be inhomogeneous because they could be cold trapped (below the observable photosphere). 3/10 https://t.co/kXaUzLkMSJ', 'Instead of staring at a motionless star, we used a new mode for HST WFC3/UVIS that âscansâ the star across the detector, allowing us to build up lots of signal without saturating our bright target. 4/10 https://t.co/4CHRWhdlQF', 'To analyze this new HST mode, @exowanderer built Arctor! https://t.co/KDGKqf9rSX a new scanning extraction and analysis pipeline! 5/10 https://t.co/plOPCn5R5e', 'Hereâs what we got!!! No, April Fools. The TAC was probably onto something⊠6/10 https://t.co/DnMvXc8Q7c', 'We place a 67ppm 3-sigma upper limit on the eclipse depth, a geometric albedo limit (in F350LP) at 0.06. My modeling suggests that only the most lofted cloud case scenario can escape a cold trap at the deepest layer of the atmosphere on the day side. 7/10 https://t.co/3yjYwibWDU', 'From nightside observations, we can be pretty sure that clouds there were limiting the depth that we could probe in the atmosphere. This means that the clouds on WASP-43b are inhomogeneously distributed! 8/10 https://t.co/Wd2jFMBv4a', 'Model predictions suggest that there is a correlation between planet equilibrium temperature and where this cold trap starts to occur -> causing an albedo drop, because MgSiO3 is normally really bright! (from @V_Parmentier work) 9/10 https://t.co/aaT6RRWipw', 'With WASP-43b proving to be the darkest of worlds, itâs now time to study other exoplanets at different temperatures and gravities to figure out where and how MgSiO3 clouds form. See the paper for more! https://t.co/OkthjJXNoN 10/10 https://t.co/4FxUmj8uxZ']",21,03,1967
156,135,1149037159875170304,1046598080,Brennan Klein,"Almost 2 years ago to the day, @erikphoel and I met in NY to chat about his terrific paper on causal emergence Since then, we've been building a formalism to study causal emergence in networks. Today we posted our first paper on it Networks are such powerful objects. They've changed how we study complex systems. But I've always been struck by how nontrivial the "what is a node?" question can be. We provide a framework for identifying the most informative *scale* to describe interdependencies in a system... ...which is to say, we find that compressed or coarse-grained or macroscale descriptions of networks often have more *effective information* than the original microscale network (e.g. your raw network data). This noise-minimizing process is known as causal emergence. So what? It's a question of zoom: what's the right scale to represent brain networks, given what we want from our model? What's the best scale to model economic systems? What counts as a ""node"" in a genome? They're rich, tough, fun problems. And there's a long way to go. This research fascinates me, and there's a bunch of directions to go with it. Feel free to send feedback or questions, and stay tuned for the release of some tutorials / open python code. ",https://arxiv.org/abs/1907.03902,"The connectivity of a network contains information about the relationships between nodes, which can denote interactions, associations, or dependencies. We show that this information can be analyzed by measuring the uncertainty (and certainty) contained in paths along nodes and links in a network. Specifically, we derive from first principles a measure known as effective information and describe its behavior in common network models. Networks with higher effective information contain more information in the relationships between nodes. We show how subgraphs of nodes can be grouped into macro-nodes, reducing the size of a network while increasing its effective information (a phenomenon known as causal emergence). We find that informative higher scales are common in simulated and real networks across biological, social, informational, and technological domains. These results show that the emergence of higher scales in networks can be directly assessed and that these higher scales offer a way to create certainty out of uncertainty. ",The emergence of informative higher scales in complex networks,5,"['Almost 2 years ago to the day, @erikphoel and I met in NY to chat about his terrific paper on causal emergence \n\nSince then, we've been building a formalism to study causal emergence in networks. Today we posted our first paper on it ', ""Networks are such powerful objects. They've changed how we study complex systems. But I've always been struck by how nontrivial the "what is a node?" question can be.\n\nWe provide a framework for identifying the most informative *scale* to describe interdependencies in a system... https://t.co/FgxIUkQu36"", '...which is to say, we find that compressed or coarse-grained or macroscale descriptions of networks often have more *effective information* than the original microscale network (e.g. your raw network data).\n\nThis noise-minimizing process is known as causal emergence.\n\nSo what? https://t.co/WQ39KyHs5B', 'It\'s a question of zoom: what\'s the right scale to represent brain networks, given what we want from our model? What\'s the best scale to model economic systems? What counts as a ""node"" in a genome?\n\nThey\'re rich, tough, fun problems. 
And there\'s a long way to go. https://t.co/ZKnNUvmxF9', ""This research fascinates me, and there's a bunch of directions to go with it.\n\nFeel free to send feedback or questions, and stay tuned for the release of some tutorials / open python code.\n\nhttps://t.co/Til7g7LCEk""]",19,07,1288
157,75,1506233940751024133,988837119408967680,Thuerey Group at TUM,"Our ICLR'22 spotlight paper on half-inverse gradients is online now: , enjoy! It's a new method bridging "classical" optimizers and machine learning methods. HIGs are motivated by a continuous interpolation between GD and Gauss-Newton, and outperforms other methods nicely: And here's the performance for a quantum control problem with differentiable physics training over 384 time steps: ",https://arxiv.org/abs/2203.10131,"Recent works in deep learning have shown that integrating differentiable physics simulators into the training process can greatly improve the quality of results. Although this combination represents a more complex optimization task than supervised neural network training, the same gradient-based optimizers are typically employed to minimize the loss function. However, the integrated physics solvers have a profound effect on the gradient flow as manipulating scales in magnitude and direction is an inherent property of many physical processes. Consequently, the gradient flow is often highly unbalanced and creates an environment in which existing gradient-based optimizers perform poorly. In this work, we analyze the characteristics of both physical and neural network optimizations to derive a new method that does not suffer from this phenomenon. Our method is based on a half-inversion of the Jacobian and combines principles of both classical network and physics optimizers to solve the combined optimization task. Compared to state-of-the-art neural network optimizers, our method converges more quickly and yields better solutions, which we demonstrate on three complex learning problems involving nonlinear oscillators, the Schroedinger equation and the Poisson problem. ",Half-Inverse Gradients for Physical Deep Learning,3,"[""Our ICLR'22 spotlight paper on half-inverse gradients is online now: , enjoy! It's a new method bridging "classical" optimizers and machine learning methods. "", 'HIGs are motivated by a continuous interpolation between GD and Gauss-Newton, and outperforms other methods nicely: https://t.co/wt0Cuxc9Mt', ""And here's the performance for a quantum control problem with differentiable physics training over 384 time steps: https://t.co/GV83pDQwcP""]",22,03,416
158,246,1404624618695049218,1148625407073247233,Haipeng Chen,"Glad to share our recent UAI-2021 paper @UncertaintyInAI , titled ""Contingency-Aware Influence Maximization: A Reinforcement Learning Approach."" We propose to address the contingency-aware influence maximization (CAIM) problem from a RL perspective. CAIM is motivated by a series of studies in our group with @MilindTambe_AI, on HIV prevention among homeless youth social networks in the city of LA. Different from normal IM problems, there is uncertainty in a node's willingness to be seeds. (image credit: Wilder et al.) Despite recent success in the field using POMDPs or greedy algorithms (Yadav et al. 2016, Wilder et al. 2018, 2021), a main limitation of scaling their methods to more homeless youth shelters is the large run time, a big burden to NGOs who don't have HPC resources. Inspired by recent work that uses RL to address combinatorial optimization problems, we extend it to the CAIM problem. We show the extension is computationally hard due to the uncertainty of a node's willingness, and propose a new reward shaping technique. The reward shaping component is proven to be computationally efficient both in theory (via the submodularity property of the influence cascade models) and in empirical evaluation (via an ablation study). Main takeaway -- our algorithm RL4IM achieves comparable influence to SOTA algorithm for CAIM, while requiring negligible runtime during test phase, which can be directly used by NGOs - a result making RL4IM an ideal alternative for CAIM in the low-resource computing paradigm. Our work is an example of how RL can be used for social good work, and we hope to encourage more AI researchers in exploring the idea here to assist AI for social good in the low-resource computing domain. Join work with @WeiQiu9, @HanChingOu, Bo An, and @MilindTambe_AI Recent works on HIV prevention among homeless youth networks using influence maximization, with success in field test, by @brwilder @AmulyaYadav19 @MilindTambe_AI @EricRicePhD Work that combines graph representation learning and reinforcement learning to address combinatorial optimization problems, by @lyeskhalil et al. 2017",https://arxiv.org/abs/2106.07039,"The influence maximization (IM) problem aims at finding a subset of seed nodes in a social network that maximize the spread of influence. In this study, we focus on a sub-class of IM problems, where whether the nodes are willing to be the seeds when being invited is uncertain, called contingency-aware IM. Such contingency aware IM is critical for applications for non-profit organizations in low resource communities (e.g., spreading awareness of disease prevention). Despite the initial success, a major practical obstacle in promoting the solutions to more communities is the tremendous runtime of the greedy algorithms and the lack of high performance computing (HPC) for the non-profits in the field -- whenever there is a new social network, the non-profits usually do not have the HPCs to recalculate the solutions. Motivated by this and inspired by the line of works that use reinforcement learning (RL) to address combinatorial optimization on graphs, we formalize the problem as a Markov Decision Process (MDP), and use RL to learn an IM policy over historically seen networks, and generalize to unseen networks with negligible runtime at test phase. 
To fully exploit the properties of our targeted problem, we propose two technical innovations that improve the existing methods, including state-abstraction and theoretically grounded reward shaping. Empirical results show that our method achieves influence as high as the state-of-the-art methods for contingency-aware IM, while having negligible runtime at test phase. ","Contingency-Aware Influence Maximization: A Reinforcement Learning
Approach",10,"['Glad to share our recent UAI-2021 paper @UncertaintyInAI , titled ""Contingency-Aware Influence Maximization: A Reinforcement Learning Approach."" We propose to address the contingency-aware influence maximization (CAIM) problem from a RL perspective. ', ""CAIM is motivated by a series of studies in our group with @MilindTambe_AI, on HIV prevention among homeless youth social networks in the city of LA. Different from normal IM problems, there is uncertainty in a node's willingness to be seeds. (image credit: Wilder et al.) https://t.co/5VIzEGKb5r"", ""Despite recent success in the field using POMDPs or greedy algorithms (Yadav et al. 2016, Wilder et al. 2018, 2021), a main limitation of scaling their methods to more homeless youth shelters is the large run time, a big burden to NGOs who don't have HPC resources."", ""Inspired by recent work that uses RL to address combinatorial optimization problems, we extend it to the CAIM problem. We show the extension is computationally hard due to the uncertainty of a node's willingness, and propose a new reward shaping technique."", 'The reward shaping component is proven to be computationally efficient both in theory (via the submodularity property of the influence cascade models) and in empirical evaluation (via an ablation study).', 'Main takeaway -- our algorithm RL4IM achieves comparable influence to SOTA algorithm for CAIM, while requiring negligible runtime during test phase, which can be directly used by NGOs - a result making RL4IM an ideal alternative for CAIM in the low-resource computing paradigm.', 'Our work is an example of how RL can be used for social good work, and we hope to encourage more AI researchers in exploring the idea here to assist AI for social good in the low-resource computing domain.', 'Join work with @WeiQiu9, @HanChingOu, Bo An, and @MilindTambe_AI', 'Recent works on HIV prevention among homeless youth networks using influence maximization, with success in field test, by @brwilder @AmulyaYadav19 @MilindTambe_AI @EricRicePhD', 'Work that combines graph representation learning and reinforcement learning to address combinatorial optimization problems, by @lyeskhalil et al. 2017']",21,06,2139
159,62,1052160556742955008,2676457430,MAGIC telescopes đŽđș,"A new #MAGICpaper has been accepted for publication in the Astrophysical Journal Letters! The MAGIC and VERITAS collaborations have discovered a new gamma-ray binary: PSRJ2032+4127/MT91 213. For more details about this system, check the paper at: ",http://arxiv.org/abs/1810.05271,"We report on observations of the pulsar / Be star binary system PSR J2032+4127 / MT91 213 in the energy range between 100 GeV and 20 TeV with the VERITAS and MAGIC imaging atmospheric Cherenkov telescope arrays. The binary orbit has a period of approximately 50 years, with the most recent periastron occurring on 2017 November 13. Our observations span from 18 months prior to periastron to one month after. A new, point-like, gamma-ray source is detected, coincident with the location of PSR J2032+4127 / MT91 213. The gamma-ray light curve and spectrum are well-characterized over the periastron passage. The flux is variable over at least an order of magnitude, peaking at periastron, thus providing a firm association of the TeV source with the pulsar / Be star system. Observations prior to periastron show a cutoff in the spectrum at an energy around 0.5 TeV. This result adds a new member to the small population of known TeV binaries, and it identifies only the second source of this class in which the nature and properties of the compact object are firmly established. We compare the gamma-ray results with the light curve measured with the X-ray Telescope (XRT) on board the Neil Gehrels \textit{Swift} Observatory and with the predictions of recent theoretical models of the system. We conclude that significant revision of the models is required to explain the details of the emission we have observed, and we discuss the relationship between the binary system and the overlapping steady extended source, TeV J2032+4130. ","Periastron Observations of TeV Gamma-Ray Emission from a Binary System
with a 50-year Period",1,"['A new #MAGICpaper has been accepted for publication in the Astrophysical Journal Letters! The MAGIC and VERITAS collaborations have discovered a new gamma-ray binary: PSRJ2032+4127/MT91 213. For more details about this system, check the paper at: ']",18,10,260
160,32,1042525613448298496,238054276,Xianyu Tan,"This is a movie on the 1D atmospheric variability presented in Figure 1 and 2 of our recent brown dwarf + directly imaged giant planet paper . This paper presents a brand new, very natural mechanism generating spontaneous variability for BDs and EGPs. ",https://arxiv.org/abs/1809.06467,"Growing observational evidence has suggested active meteorology in atmospheres of brown dwarfs (BDs) and directly imaged extrasolar giant planets (EGPs). In particular, a number of surveys have shown that near-IR brightness variability is common among L and T dwarfs. Despite initial understandings of atmospheric dynamics which is the major cause of the variability by previous studies, the detailed mechanism of variability remains elusive, and we need to seek a natural, self-consistent mechanism. Clouds are important in shaping the thermal structure and spectral properties of these atmospheres via large opacity, and we expect the same for inducing atmospheric variability. In this work, using a time-dependent one-dimensional model that incorporates a self-consistent coupling between the thermal structure, convective mixing, cloud radiative heating/cooling and condensation/evaporation of clouds, we show that radiative cloud feedback can drive spontaneous atmospheric variability in both temperature and cloud structure in conditions appropriate for BDs and directly imaged EGPs. The typical periods of variability are one to tens of hours with typical amplitude of the variability up to hundreds of Kelvins in effective temperature. The existence of variability is robust over a wide range of parameter space, but the detailed evolution of variability is sensitive to model parameters. Our novel, self-consistent mechanism has important implications for the observed flux variability of BDs and directly imaged EGPs, especially those evolve in short timescales. It is also a promising mechanism for cloud breaking, which has been proposed to explain the L/T transition of brown dwarfs. ","Atmospheric variability driven by radiative cloud feedback in brown
dwarfs and directly imaged extrasolar giant planets",1,"['This is a movie on the 1D atmospheric variability presented in Figure 1 and 2 of our recent brown dwarf + directly imaged giant planet paper . This paper presents a brand new, very natural mechanism generating spontaneous variability for BDs and EGPs. ']",18,09,264
161,87,1393137321705447432,2176486874,Steven Thomson,"Another new preprint out today! ""Finding the phase diagram of strongly-correlated disordered bosons using quantum quenches"" by L. Villa, S. J. Thomson & L. Sanchez-Palencia: (This is a companion paper to which came out yesterday!) In this longer paper, we develop the technique of quench spectroscopy further, and map out the zero temperature phase diagram of disordered bosons at unit filing using both spatially-resolved and momentum-resolved quench spectroscopy. đ
Our main result is that quench spectroscopy provides a new way to explore the zero-temperature phases of quantum matter with what we hope is a simplified experimental protocol, compared to established methods. đ Hoping to see some experimental groups try it in the near future! If you missed my thread on yesterday's paper where I outlined the technique in more detail, you can find it here: đ Please do check out both of our papers - all comments and constructive criticism welcome! đ",https://arxiv.org/abs/2105.06396,"The question of how the low-energy properties of disordered quantum systems may be connected to exotic localization phenomena at high energy is a key open question in the context of quantum glasses and many-body localization. In arXiv:2105.05774 we have shown that key features of the excitation spectrum of a disordered system can be efficiently probed from out-of-equilibrium dynamics following a quantum quench, providing distinctive signatures of the various phases. Here, we extend this work by providing a more in-depth study of the behavior of the quench spectral functions associated to different observables and investigating an extended parameter regime. We provide a detailed introduction to quench spectroscopy for disordered systems and show how spectral properties can be probed using both local operators and two-point correlation functions. We benchmark the technique using the one-dimensional Bose-Hubbard model in the presence of a random external potential, focusing on the low-lying excitations, and demonstrate that quench spectroscopy can distinguish the Mott insulator, superfluid, and Bose glass phases. We then explicitly reconstruct the zero-temperature phase diagram of the disordered Bose-Hubbard at fixed filling using two independent methods, both experimentally accessible via time-of-flight imaging and quantum gas microscopy respectively, and demonstrate that quench spectroscopy can give valuable insights as to the distribution of rare regions within disordered systems. ","Finding the phase diagram of strongly-correlated disordered bosons using
quantum quenches",4,"['Another new preprint out today!\n\n""Finding the phase diagram of strongly-correlated disordered bosons using quantum quenches"" by L. Villa, S. J. Thomson & L. Sanchez-Palencia: \n\n(This is a companion paper to which came out yesterday!) ', 'In this longer paper, we develop the technique of quench spectroscopy further, and map out the zero temperature phase diagram of disordered bosons at unit filing using both spatially-resolved and momentum-resolved quench spectroscopy. đ
https://t.co/hOJ7u3aqMy', 'Our main result is that quench spectroscopy provides a new way to explore the zero-temperature phases of quantum matter with what we hope is a simplified experimental protocol, compared to established methods. đ\n\nHoping to see some experimental groups try it in the near future!', ""If you missed my thread on yesterday's paper where I outlined the technique in more detail, you can find it here: đ\n\nhttps://t.co/9qHJIcEeiI\n\nPlease do check out both of our papers - all comments and constructive criticism welcome! đ""]",21,05,988
162,39,1194978432062513153,1004365363574902784,Kevin J. Kelly,"New paper out today! I worked with @PedroANMachado and Roni Harnik on the idea of using neutrinos that come from a ""decay-at-rest"" process to measure neutrino oscillations. @PedroANMachado When we try to understand neutrino oscillations, knowing how far they've travelled and how much energy they have is crucial for interpreting a result. Decay-at-rest neutrinos are special in that certain types can only have one energy. @PedroANMachado If we can measure them after travelling hundreds of kilometers, then we can start to pinpoint how neutrinos oscillate over many different lengths and energies, and start to stress-test our theories of neutrino oscillations. @PedroANMachado In the next decade, the J-PARC Spallation Neutron Source and the Hyper-Kamiokande experiment are a great opportunity for this type of measurement -- we can start to add stars to that panel and really hone in on neutrino oscillations.",https://arxiv.org/abs/1911.05088,"In addition to the next generation of beam-based neutrino experiments and their associated detectors, a number of intense, low-energy neutrino production sources from decays at rest will be in operation. In this work, we explore the physics opportunities with decay-at-rest neutrinos for complementary measurements of oscillation parameters at long baselines. The J-PARC Spallation Neutron Source, for example, will generate neutrinos from a variety of decay-at-rest (DAR) processes, specifically those of pions, muons, and kaons. Other proposed sources will produce large numbers of stopped pions and muons. We demonstrate the ability of the upcoming Hyper-Kamiokande experiment to detect the monochromatic kaon decay-at-rest neutrinos from J-PARC after they have travelled several hundred kilometers and undergone oscillations. This measurement will serve as a valuable cross-check in constraining our understanding of neutrino oscillations in a new regime of neutrino energy and baseline length. We also study the expected event rates from pion and muon DAR neutrinos in liquid Argon and water detectors and their sensitivities to to the CP violating phase $\delta_\mathrm{CP}$. ","Prospects of Measuring Oscillated Decay-at-Rest Neutrinos at Long
Baselines",4,"['New paper out today! I worked with @PedroANMachado and Roni Harnik on the idea of using neutrinos that come from a ""decay-at-rest"" process to measure neutrino oscillations.\n\n', ""@PedroANMachado When we try to understand neutrino oscillations, knowing how far they've travelled and how much energy they have is crucial for interpreting a result.\n\nDecay-at-rest neutrinos are special in that certain types can only have one energy. https://t.co/uoUy67PGyL"", '@PedroANMachado If we can measure them after travelling hundreds of kilometers, then we can start to pinpoint how neutrinos oscillate over many different lengths and energies, and start to stress-test our theories of neutrino oscillations.', '@PedroANMachado In the next decade, the J-PARC Spallation Neutron Source and the Hyper-Kamiokande experiment are a great opportunity for this type of measurement -- we can start to add stars to that panel and really hone in on neutrino oscillations.']",19,11,927
163,180,1441074229366059008,2550133394,Mostafa Dehghani,"Check out our new paper, presenting insights on scaling Transformers/T5 within the context of the pretraining-finetuning setup. With @YiTayML, @Jeffy_Sailing, @LiamFedus, @samiraabnar, @hwchung27, @sharan0909, @DaniYogatama, @ashVaswani, and @metzlerd. In search of the best scaling recipe, we ran an extensive search over different knobs and evaluated our modes w.r.t not only upstream performance (perplexity) but also the performance on different downstream tasks/benchmarks after finetuning. A few cool take-home messages: (1) If you, by default, take the T5-Base for your next amazing project to build on top of it, you may want to reconsider that. Transformer/T5 Base is not necessarily the Pareto-efficient configuration. We present some better alternatives. (2) Although model size is a key factor determining the scaling behavior of Transformers in pretraining, the model shape matters a lot for fine-tuning. So blindly scaling up may look good upstream, but can be disappointing when you finetune your model on a downstream task. (3) ""How to scale up?"" is pretty much depending on the region you're in. If you're working with Tiny models and want to go to Base size, the best recipe differs from the case of going from Large to XL. We have many more observations and insights. You should check the paper. Oh, btw, we are releasing 100+ pretrained checkpoints from our experiments, which will be an amazing playground for doing further analysis and interesting research :) You should also check @YiTayML's great thread that highlights more points from the paper: ",https://arxiv.org/abs/2109.10686,"There remain many open questions pertaining to the scaling behaviour of Transformer architectures. These scaling decisions and findings can be critical, as training runs often come with an associated computational cost which have both financial and/or environmental impact. The goal of this paper is to present scaling insights from pretraining and finetuning Transformers. While Kaplan et al. presents a comprehensive study of the scaling behaviour of Transformer language models, the scope is only on the upstream (pretraining) loss. Therefore, it is still unclear if these set of findings transfer to downstream task within the context of the pretrain-finetune paradigm. The key findings of this paper are as follows: (1) we show that aside from only the model size, model shape matters for downstream fine-tuning, (2) scaling protocols operate differently at different compute regions, (3) widely adopted T5-base and T5-large sizes are Pareto-inefficient. To this end, we present improved scaling protocols whereby our redesigned models achieve similar downstream fine-tuning quality while having 50\% fewer parameters and training 40\% faster compared to the widely adopted T5-base model. We publicly release over 100 pretrained checkpoints of different T5 configurations to facilitate future research and analysis. ","Scale Efficiently: Insights from Pre-training and Fine-tuning
Transformers",8,"['Check out our new paper, presenting insights on scaling Transformers/T5 within the context of the pretraining-finetuning setup. \n\n\nWith @YiTayML, @Jeffy_Sailing, @LiamFedus, @samiraabnar, @hwchung27, @sharan0909, @DaniYogatama, @ashVaswani, and @metzlerd. ', 'In search of the best scaling recipe, we ran an extensive search over different knobs and evaluated our modes w.r.t not only upstream performance (perplexity) but also the performance on different downstream tasks/benchmarks after finetuning. \nA few cool take-home messages:', '(1) If you, by default, take the T5-Base for your next amazing project to build on top of it, you may want to reconsider that. Transformer/T5 Base is not necessarily the Pareto-efficient configuration. We present some better alternatives.', '(2) Although model size is a key factor determining the scaling behavior of Transformers in pretraining, the model shape matters a lot for fine-tuning. So blindly scaling up may look good upstream, but can be disappointing when you finetune your model on a downstream task.', '(3) ""How to scale up?"" is pretty much depending on the region you\'re in. If you\'re working with Tiny models and want to go to Base size, the best recipe differs from the case of going from Large to XL.', 'We have many more observations and insights. You should check the paper.', 'Oh, btw, we are releasing 100+ pretrained checkpoints from our experiments, which will be an amazing playground for doing further analysis and interesting research :)', ""You should also check @YiTayML's great thread that highlights more points from the paper:\nhttps://t.co/MLj7rdrq1b""]",21,09,1593
164,196,1393134095530696704,754948023382310912,Niclas Rieger,You have some big đdata & want to find possible time lags between your variables for each location? Give complex MCA a try! We applied it on SST đ & precipitation đ§ïž #ERA5 to identify lagged teleconnectionsđđ arXivâĄïž #openaccess #openscience ,https://arxiv.org/abs/2105.04618,"A proper description of ocean-atmosphere interactions is key for a correct understanding of climate evolution. The interplay among the different variables acting over the climate is complex, often leading to correlations across long spatial distances (teleconnections). In some occasions, those teleconnections occur with quite significant temporal shifts that are fundamental for the understanding of the underlying phenomena but which are poorly captured by standard methods. Applying orthogonal decomposition such as Maximum Covariance Analysis (MCA) to geophysical data sets allows to extract common dominant patterns between two different variables, but generally suffers from (i) the non-physical orthogonal constraint as well as (ii) the consideration of simple correlations, whereby temporally offset signals are not detected. Here we propose an extension, complex rotated MCA, to address both limitations. We transform our signals using the Hilbert transform and perform the orthogonal decomposition in complex space, allowing us to correctly correlate out-of-phase signals. Subsequent Varimax rotation removes the orthogonal constraints, leading to more physically meaningful modes of geophysical variability. As an example of application, we have employed this method on sea surface temperature and continental precipitation; our method successfully captures the temporal and spatial interactions between these two variables, namely for (i) the seasonal cycle, (ii) canonical ENSO, (iii) the global warming trend, (iv) the Pacific Decadal Oscillation, (v) ENSO Modoki and finally (vi) the Atlantic Meridional Mode. The complex rotated modes of MCA provide information on the regional amplitude, and under certain conditions, the regional time lag between changes on ocean temperature and land precipitation. ","Lagged teleconnections of climate variables identified via complex
rotated Maximum Covariance Analysis",1,['You have some big đdata & want to find possible time lags between your variables for each location?\n\nGive complex MCA a try! We applied it on SST đ & precipitation đ§ïž #ERA5 to identify lagged teleconnectionsđđ\n\narXivâĄïž \n#openaccess #openscience '],21,05,255
165,135,1351642932223373312,1349790738591199234,Yuchen Liang,"In our #ICLR2021 paper we study a well-established neurobiological network motif from the fruit fly brain and investigate the possibility of reusing its architecture for solving common natural language processing tasks. Paper: Our fruit fly network generates binary logical word embeddings - vectors of [0,1] as opposed to continuous vectors like GloVe and word2vec, which is useful from the perspective of memory efficiency and interpretability. We show that not only can the fruit fly network motif achieve performance comparable to existing methods in NLP, but, additionally, it uses only a fraction of the computational resources (shorter training time and smaller memory footprint). Work done by @YuchenLiangRPI @wrong_whp @Ben_Hoov @LeopoldGrinberg @navlakha_lab @mj_zaki @DimaKrotov",https://arxiv.org/abs/2101.06887,"The mushroom body of the fruit fly brain is one of the best studied systems in neuroscience. At its core it consists of a population of Kenyon cells, which receive inputs from multiple sensory modalities. These cells are inhibited by the anterior paired lateral neuron, thus creating a sparse high dimensional representation of the inputs. In this work we study a mathematical formalization of this network motif and apply it to learning the correlational structure between words and their context in a corpus of unstructured text, a common natural language processing (NLP) task. We show that this network can learn semantic representations of words and can generate both static and context-dependent word embeddings. Unlike conventional methods (e.g., BERT, GloVe) that use dense representations for word embedding, our algorithm encodes semantic meaning of words and their context in the form of sparse binary hash codes. The quality of the learned representations is evaluated on word similarity analysis, word-sense disambiguation, and document classification. It is shown that not only can the fruit fly network motif achieve performance comparable to existing methods in NLP, but, additionally, it uses only a fraction of the computational resources (shorter training time and smaller memory footprint). ",Can a Fruit Fly Learn Word Embeddings?,4,"['In our #ICLR2021 paper we study a well-established neurobiological network motif from the fruit fly brain and investigate the possibility of reusing its architecture for solving common natural language processing tasks.\nPaper: ', 'Our fruit fly network generates binary logical word embeddings - vectors of [0,1] as opposed to continuous vectors like GloVe and word2vec, which is useful from the perspective of memory efficiency and interpretability.', 'We show that not only can the fruit fly network motif achieve performance comparable to existing methods in NLP, but, additionally, it uses only a fraction of the computational resources (shorter training time and smaller memory footprint).', 'Work done by @YuchenLiangRPI\n @wrong_whp\n @Ben_Hoov\n @LeopoldGrinberg\n @navlakha_lab\n @mj_zaki\n @DimaKrotov']",21,01,803
166,91,1146993240576593920,939498802767044608,Stephan,"New paper on much faster RL by learning graph representations of the world ! World graphs capture the structure of the world and can be used to focus exploration: w/ Wendy Shang, Alex Trott, @CaimingXiong @RichardSocher @SFResearch ",http://arxiv.org/abs/1907.00664,"In many real-world scenarios, an autonomous agent often encounters various tasks within a single complex environment. We propose to build a graph abstraction over the environment structure to accelerate the learning of these tasks. Here, nodes are important points of interest (pivotal states) and edges represent feasible traversals between them. Our approach has two stages. First, we jointly train a latent pivotal state model and a curiosity-driven goal-conditioned policy in a task-agnostic manner. Second, provided with the information from the world graph, a high-level Manager quickly finds solution to new tasks and expresses subgoals in reference to pivotal states to a low-level Worker. The Worker can then also leverage the graph to easily traverse to the pivotal states of interest, even across long distance, and explore non-locally. We perform a thorough ablation study to evaluate our approach on a suite of challenging maze tasks, demonstrating significant advantages from the proposed framework over baselines that lack world graph knowledge in terms of performance and efficiency. ",Learning World Graphs to Accelerate Hierarchical Reinforcement Learning,1,"['New paper on much faster RL by learning graph representations of the world ! World graphs capture the structure of the world and can be used to focus exploration: w/ Wendy Shang, Alex Trott, @CaimingXiong @RichardSocher @SFResearch ']",19,07,251
167,49,1375351092628885504,314171681,Laura Baudis,"New paper, in which we describe the energy calibration of GERDA in detail. An excellent resolution is essential to finding a monoenergetic signal, as expected for double-beta decay without neutrinos (with a half-life above 1.8 x 10^26 years in 76-Ge): @KokoFederico Thank you Federico!",https://arxiv.org/abs/2103.13777,"The GERmanium Detector Array (GERDA) collaboration searched for neutrinoless double-$\beta$ decay in $^{76}$Ge with an array of about 40 high-purity isotopically-enriched germanium detectors. The experimental signature of the decay is a monoenergetic signal at Q$_{\beta\beta}$ = 2039.061(7)keV in the measured summed energy spectrum of the two emitted electrons. Both the energy reconstruction and resolution of the germanium detectors are crucial to separate a potential signal from various backgrounds, such as neutrino-accompanied double-$\beta$ decays allowed by the Standard Model. The energy resolution and stability were determined and monitored as a function of time using data from regular $^{228}$Th calibrations. In this work, we describe the calibration process and associated data analysis of the full GERDA dataset, tailored to preserve the excellent resolution of the individual germanium detectors when combining data over several years. ",Calibration of the GERDA experiment,2,"['New paper, in which we describe the energy calibration of GERDA in detail. An excellent resolution is essential to finding a monoenergetic signal, as expected for double-beta decay without neutrinos (with a half-life above 1.8 x 10^26 years in 76-Ge): ', '@KokoFederico Thank you Federico!']",21,03,299
168,171,1447919822042456071,916709762062012416,Guangwei, New paper! We observed the atmosphere of hot Jupiter WASP-74b and found evidence for aerosols and potentially high C/O ratio! Special thanks to @_astronomay for analyzing the Spitzer data and demonstrating the challenges of Spitzer data reduction. And most importantly for matching the pink purple blue color scheme! Also thanks to @ExoSing @kevinbstevenson @JDLothringer @StellarPlanet and all other collaborators for the help!,https://arxiv.org/abs/2110.04415,"Planets are like children with each one being unique and special. A better understanding of their collective properties requires a deeper understanding of each planet. Here we add the transit and eclipse spectra of hot Jupiter WASP-74b into the ever growing dataset of exoplanet atmosphere spectral library. With six transits and three eclipses using the Hubble Space Telescope (HST) and Spitzer Space Telescope (\textit{Spitzer}), we present the most complete and precise atmospheric spectra of WASP-74b. We found no evidence for TiO/VO nor super-Rayleigh scattering reported in previous studies. The transit shows a muted water feature with strong Rayleigh scattering extending into the infrared. The eclipse shows a featureless blackbody-like WFC3/G141 spectrum and a weak methane absorption feature in the Spitzer 3.6 $\mu m$ band. Future James Webb Space Telescope (JWST) follow up observations are needed to confirm these results. ","The Hubble PanCET program: Transit and Eclipse Spectroscopy of the Hot
Jupiter WASP-74b",3,"[' New paper! We observed the atmosphere of hot Jupiter WASP-74b and found evidence for aerosols and potentially high C/O ratio! ', 'Special thanks to @_astronomay for analyzing the Spitzer data and demonstrating the challenges of Spitzer data reduction. And most importantly for matching the pink purple blue color scheme! https://t.co/GXHZEiq2YJ', 'Also thanks to @ExoSing @kevinbstevenson @JDLothringer @StellarPlanet and all other collaborators for the help!']",21,10,449
169,16,1156031244225789953,28535459,Dougal Mackey,"An exciting new S5 paper on arXiv today, led by Sergey Koposov - discovery of a 1700 km/s (!) star ejected from the Galactic centre. This lets us place quite precise constraints on the geometry and kinematics of the Milky Way. A big cheer for Australian facilities - the star was discovered with the 4m Anglo-Australian Telescope and verified with the 2.3m ANU telescope, both at Siding Spring Observatory.",https://arxiv.org/abs/1907.11725,"We present the serendipitous discovery of the fastest Main Sequence hyper-velocity star (HVS) by the Southern Stellar Stream Spectroscopic Survey (S5). The star S5-HVS1 is a $\sim 2.35$ M$_\odot$ A-type star located at a distance of $\sim 9$ kpc from the Sun and has a heliocentric radial velocity of $1017\pm 2.7$ km/s without any signature of velocity variability. The current 3-D velocity of the star in the Galactic frame is $1755\pm50$ km/s. When integrated backwards in time, the orbit of the star points unambiguously to the Galactic Centre, implying that S5-HVS1 was kicked away from Sgr A* with a velocity of $\sim 1800$ km/s and travelled for $4.8$ Myr to its current location. This is so far the only HVS confidently associated with the Galactic Centre. S5-HVS1 is also the first hyper-velocity star to provide constraints on the geometry and kinematics of the Galaxy, such as the Solar motion $V_{y,\odot}= 246.1\pm 5.3$ km/s or position $R_0=8.12\pm 0.23$ kpc. The ejection trajectory and transit time of S5-HVS1 coincide with the orbital plane and age of the annular disk of young stars at the Galactic centre, and thus may be linked to its formation. With the S5-HVS1 ejection velocity being almost twice the velocity of other hyper-velocity stars previously associated with the Galactic Centre, we question whether they have been generated by the same mechanism or whether the ejection velocity distribution has been constant over time. ","The Great Escape: Discovery of a nearby 1700 km/s star ejected from the
Milky Way by Sgr A*",2,"['An exciting new S5 paper on arXiv today, led by Sergey Koposov - discovery of a 1700 km/s (!) star ejected from the Galactic centre. This lets us place quite precise constraints on the geometry and kinematics of the Milky Way.\n\n', 'A big cheer for Australian facilities - the star was discovered with the 4m Anglo-Australian Telescope and verified with the 2.3m ANU telescope, both at Siding Spring Observatory.']",19,07,413
170,12,1278497432335028224,466500823,Aman Chadha,Announcing my new #AI paper on Video Super Resolution that builds on Recurrent Back Projection Networks using GANs with a four-fold loss. We're #1 on the Video Super Resolution leaderboard! đ #DeepLearning #ArtificialIntelligence #NeuralNetworks ,https://arxiv.org/abs/2006.11161,"Recently, learning-based models have enhanced the performance of single-image super-resolution (SISR). However, applying SISR successively to each video frame leads to a lack of temporal coherency. Convolutional neural networks (CNNs) outperform traditional approaches in terms of image quality metrics such as peak signal to noise ratio (PSNR) and structural similarity (SSIM). However, generative adversarial networks (GANs) offer a competitive advantage by being able to mitigate the issue of a lack of finer texture details, usually seen with CNNs when super-resolving at large upscaling factors. We present iSeeBetter, a novel GAN-based spatio-temporal approach to video super-resolution (VSR) that renders temporally consistent super-resolution videos. iSeeBetter extracts spatial and temporal information from the current and neighboring frames using the concept of recurrent back-projection networks as its generator. Furthermore, to improve the ""naturality"" of the super-resolved image while eliminating artifacts seen with traditional algorithms, we utilize the discriminator from super-resolution generative adversarial network (SRGAN). Although mean squared error (MSE) as a primary loss-minimization objective improves PSNR/SSIM, these metrics may not capture fine details in the image resulting in misrepresentation of perceptual quality. To address this, we use a four-fold (MSE, perceptual, adversarial, and total-variation (TV)) loss function. Our results demonstrate that iSeeBetter offers superior VSR fidelity and surpasses state-of-the-art performance. ","iSeeBetter: Spatio-temporal video super-resolution using recurrent
generative back-projection networks",1,['Announcing my new #AI paper on Video Super Resolution that builds on Recurrent Back Projection Networks using GANs with a four-fold loss. \n\nWe're #1 on the Video Super Resolution leaderboard! đ\n \n \n\n#DeepLearning #ArtificialIntelligence #NeuralNetworks '],20,06,262
171,136,1435934105049444352,966054811740360704,Nicola De Cao,"Happy to announce my new #EMNLP2021 paper: Highly Parallel Autoregressive Entity Linking with Discriminative Correction SoTA performance while being >70x faster than previous generative formulations! đ€Ż đ đ» The model generates mention-entity pairs conditionally independently given the input and thus allowing high parallelism. In addition, a Longformer is used as the encoder to handle long input documents, and a shallow LSTM is used as the decoder to make generation super fast! âĄïž As always I want to thank my amazing collaborators @iatitov and @wilkeraziz â„ïž",https://arxiv.org/abs/2109.03792,"Generative approaches have been recently shown to be effective for both Entity Disambiguation and Entity Linking (i.e., joint mention detection and disambiguation). However, the previously proposed autoregressive formulation for EL suffers from i) high computational cost due to a complex (deep) decoder, ii) non-parallelizable decoding that scales with the source sequence length, and iii) the need for training on a large amount of data. In this work, we propose a very efficient approach that parallelizes autoregressive linking across all potential mentions and relies on a shallow and efficient decoder. Moreover, we augment the generative objective with an extra discriminative component, i.e., a correction term which lets us directly optimize the generator's ranking. When taken together, these techniques tackle all the above issues: our model is >70 times faster and more accurate than the previous generative method, outperforming state-of-the-art approaches on the standard English dataset AIDA-CoNLL. Source code available at this https URL ","Highly Parallel Autoregressive Entity Linking with Discriminative
Correction",3,"['Happy to announce my new #EMNLP2021 paper: Highly Parallel Autoregressive Entity Linking with Discriminative Correction\n\nSoTA performance while being >70x faster than previous generative formulations! đ€Ż\n\nđ\nđ» ', 'The model generates mention-entity pairs conditionally independently given the input and thus allowing high parallelism.\n\nIn addition, a Longformer is used as the encoder to handle long input documents, and a shallow LSTM is used as the decoder to make generation super fast! âĄïž https://t.co/ZZWfAwiBlC', 'As always I want to thank my amazing collaborators @iatitov and @wilkeraziz â„ïž']",21,09,591
172,46,1407675604896567302,896641093,"John J. Howard, Ph.D.",New Paper Announcement! How we determine groups when measuring #Fairness in #AI is a huge unanswered problem. We do a quantitative analysis of one approach in computer vision called the Fitzpatrick Skin Type (FST) ... We show this metric may not be measuring what we think it is and can lead to questionable outcomes in studies of #bias. Measurement is important! Particularly when we discuss fairness. Feedback welcome.,https://arxiv.org/abs/2106.11240,"With increasing adoption of face recognition systems, it is important to ensure adequate performance of these technologies across demographic groups. Recently, phenotypes such as skin-tone, have been proposed as superior alternatives to traditional race categories when exploring performance differentials. However, there is little consensus regarding how to appropriately measure skin-tone in evaluations of biometric performance or in AI more broadly. In this study, we explore the relationship between face-area-lightness-measures (FALMs) estimated from images and ground-truth skin readings collected using a device designed to measure human skin. FALMs estimated from different images of the same individual varied significantly relative to ground-truth FALM. This variation was only reduced by greater control of acquisition (camera, background, and environment). Next, we compare ground-truth FALM to Fitzpatrick Skin Types (FST) categories obtained using the standard, in-person, medical survey and show FST is poorly predictive of skin-tone. Finally, we show how noisy estimation of FALM leads to errors selecting explanatory factors for demographic differentials. These results demonstrate that measures of skin-tone for biometric performance evaluations must come from objective, characterized, and controlled sources. Further, despite this being a currently practiced approach, estimating FST categories and FALMs from uncontrolled imagery does not provide an appropriate measure of skin-tone. ","Reliability and Validity of Image-Based and Self-Reported Skin Phenotype
Metrics",2,"['New Paper Announcement! How we determine groups when measuring #Fairness in #AI is a huge unanswered problem. We do a quantitative analysis of one approach in computer vision called the Fitzpatrick Skin Type (FST) ...\n\n', 'We show this metric may not be measuring what we think it is and can lead to questionable outcomes in studies of #bias. Measurement is important! Particularly when we discuss fairness. Feedback welcome.']",21,06,427
173,191,1304802684982243331,927915414151147521,Antonis Papasavva,"📢 In our latest work, \w the usual suspects @iDRAMALab @emilianoucl @gianluca_string @jhblackb @zsavvas90, we perform a preliminary analysis of the QAnon movement on . Find our preprint here 📰đđđ We find that the submissions in v/GreatAwakening tend to get approximately 57 upvotes and only 1.7 downvotes. On average, the /v/GreatAwakening submissions tend to be positively voted with the final vote (sum) being 59, and the median sum being ~32. Alarmingly, the audience of /v/GreatAwakening consumes content from a handful of users. The top submitter is responsible for 31.47% (1.36K) submissions. Excluding the top 15 submitters, all other users only posted 31.14% (1.34K) submissions. đ Interestingly, we observe that about 26% (932 users) of the users in our dataset registered a new account on Voat in September 2018: the month Reddit banned many QAnon related subreddits. Our results show that user migration is apparent when other communities get banned.đđđ« Last, we show that QAnon related discussions are hateful and racist. The term ""jew"" is closely related to holocaust associated words, and the terms ""masters,"" and ""puppet,"" showing that users in this community blame people of Jewish religion/descent for leading the ""deep state""🤬",https://arxiv.org/abs/2009.04885,"Online fringe communities offer fertile grounds for users seeking and sharing ideas fueling suspicion of mainstream news and conspiracy theories. Among these, the QAnon conspiracy theory emerged in 2017 on 4chan, broadly supporting the idea that powerful politicians, aristocrats, and celebrities are closely engaged in a global pedophile ring. Simultaneously, governments are thought to be controlled by ""puppet masters,"" as democratically elected officials serve as a fake showroom of democracy. This paper provides an empirical exploratory analysis of the QAnon community on Voat.co, a Reddit-esque news aggregator, which has captured the interest of the press for its toxicity and for providing a platform to QAnon followers. More precisely, we analyze a large dataset from /v/GreatAwakening, the most popular QAnon-related subverse (the Voat equivalent of a subreddit), to characterize activity and user engagement. To further understand the discourse around QAnon, we study the most popular named entities mentioned in the posts, along with the most prominent topics of discussion, which focus on US politics, Donald Trump, and world events. We also use word embeddings to identify narratives around QAnon-specific keywords. Our graph visualization shows that some of the QAnon-related ones are closely related to those from the Pizzagate conspiracy theory and so-called drops by ""Q."" Finally, we analyze content toxicity, finding that discussions on /v/GreatAwakening are less toxic than in the broad Voat community. ","""Is it a Qoincidence?"": An Exploratory Study of QAnon on Voat",5,"['📢 In our latest work, \\w the usual suspects @iDRAMALab @emilianoucl @gianluca_string @jhblackb @zsavvas90, we perform a preliminary analysis of the QAnon movement on . \nFind our preprint here 📰đđđ', 'We find that the submissions in v/GreatAwakening tend to get approximately 57 upvotes and only 1.7 downvotes. On average, the /v/GreatAwakening submissions tend to be positively voted with the final vote (sum) being 59, and the median sum being ~32. https://t.co/hHlCsIBtNd', 'Alarmingly, the audience of /v/GreatAwakening consumes content from a handful of users. 
The top submitter is responsible for 31.47% (1.36K) submissions. Excluding the top 15 submitters, all other users only posted 31.14% (1.34K) submissions. đ https://t.co/9GYmSHYCRk', 'Interestingly, we observe that about 26% (932 users) of the users in our dataset registered a new account on\nVoat in September 2018: the month Reddit banned many QAnon related subreddits. Our results show that user migration is apparent when other communities get banned.đđđ« https://t.co/a40wCzbWBM', 'Last, we show that QAnon related discussions are hateful and racist. The term ""jew"" is closely related to holocaust associated words, and the terms ""masters,"" and ""puppet,"" showing that users in this community blame people of Jewish religion/descent for leading the ""deep state""🤬']",20,09,1277
174,129,1140965195998777345,954335153723183104,Luciano da F. Costa,"Consonance/dissonance are important properties of sound, and music is characterized by creative application of respective patterns. How would combinations of sounds be affected by amplification? We study this problem in this recent work: ",https://arxiv.org/abs/1906.06559,"After briefly revising the concepts of consonance/dissonance, a respective mathematic-computational model is described, based on Helmholtz's consonance theory and also considering the partials intensity. It is then applied to characterize five scale temperaments, as well as some minor and major triads and electronic amplification. In spite of the simplicity of the described model, a surprising agreement is often observed between the obtained consonances/dissonances and the typically observed properties of scales and chords. The representation of temperaments as graphs where links correspond to consonance (or dissonance) is presented and used to compare distinct temperaments, allowing the identification of two main groups of scales. The interesting issue of nonlinearities in electronic music amplification is also addressed while considering quadratic distortions, and it is shown that such nonlinearities can have drastic effect in changing the original patterns of consonance and dissonance. ","Modeling Consonance and its Relationships with Temperament, Harmony, and
Electronic Amplification",1,"['Consonance/dissonance are important properties of sound, and music is characterized by creative application of respective patterns. How would combinations of sounds be affected by amplification? We study this problem in this recent work:\n\n ']",19,06,251
175,232,1435880620216012803,1237350319891374081,Dimitris Papoulias,Paper day! We used data from @COHERENT_NUS to calculate the neutrino floor. We also studied the impact of subdominant electroweak/nuclear uncertainties and BSM models! Special thanks to my awesome collaborators: @ntinaValentina @LuisJFloresS D. Aristizabal ,https://arxiv.org/abs/2109.03247,"We reconsider the discovery limit of multi-ton direct detection dark matter experiments in the light of recent measurements of the coherent elastic neutrino-nucleus scattering process. Assuming the cross section to be a parameter entirely determined by data, rather than using its Standard Model prediction, we use the COHERENT CsI and LAr data sets to determine WIMP discovery limits. Being based on a data-driven approach, the results are thus free from theoretical assumptions and fall within the WIMP mass regions where XENONnT and DARWIN have best expected sensitivities. We further determine the impact of subleading nuclear form factor and weak mixing angle uncertainties effects on WIMP discovery limits. We point out that these effects, albeit small, should be taken into account. Moreover, to quantify the impact of new physics effects in the neutrino background, we revisit WIMP discovery limits assuming light vector and scalar mediators as well as neutrino magnetic moments/transitions. We stress that the presence of new interactions in the neutrino sector, in general, tend to worsen the WIMP discovery limit. ","Impact of COHERENT measurements, cross section uncertainties and new
interactions on the neutrino floor",1,['Paper day! \nWe used data from @COHERENT_NUS to calculate the neutrino floor. We also studied the impact of subdominant electroweak/nuclear uncertainties and BSM models!\nSpecial thanks to my awesome collaborators: @ntinaValentina @LuisJFloresS D. Aristizabal '],21,09,270
176,18,1278122949988569089,1186047738607263744,Rogemar A Riffel,"New paper on arXiv: Providing observational constraints in the multi-phase gas kinematics on 10-100 pc scales of galaxies is fundamental to better understand the role feedback from Active Galactic Nuclei (AGN) in the evolution of galaxies. We use Gemini NIFS to map de hot molecular and ionized gas kinematics in the inner 500 pc of NGC1275, the brightest galaxy of the Perseus cluster. From the fitting of the CO absorption bandheads in the K-band, we derive a stellar velocity dispersion of ~265 km/s , which implies a black hole mass of ~1.1E9 solar masses. The gas kinematics present two components, one (narrow) due to gas at velocities close to the systemic velocity of the galaxy and another (broad) due to outflows produced by the central active nucleus. We find hot (T> 1000 K) molecular and ionized outflows with velocities of up to 2000 km/s and mass outflow rates of 2.7E-2 solar masses/year and 1.6 solar masses/year, respectively in each of these gas phases. The kinetic power of the ionized outflows corresponds to only 0.05% of the luminosity of the AGN of NGC 1275, indicating that they are not powerful enough to provide significant AGN feedback, but may be effective in redistributing the gas in the central region of the galaxy. The H2 and [FeII] emission is produced mainly by shocks produced by the AGN winds, as revealed by the emission-line ratio diagnostic diagram and the observed kinematics. My collaborators in this work are @thaisa_sb (UFRGS), Nadia Zakamska (JHU) and @RiffelRogerio (UFRGS). @venkatessh Thanks! Indeed, the ALMA data of NGC1275 were already published by Nagai et al. The ALMA CO maps show the compact molecular disk (<100 pc), but not the outflow. Definitely, we should talk about ALMA data for some nearby Seyferts that we have NIFS data...",https://arxiv.org/abs/2006.15198,"The role of feedback from Active Galactic Nuclei (AGN) in the evolution of galaxies is still not not fully understood, mostly due to the lack of observational constraints in the multi-phase gas kinematics on the ten to hundred parsec scales. We have used the Gemini Near-infrared Integral Field Spectrograph (NIFS) to map the molecular and ionized gas kinematics in the inner 900$\times$900 pc$^2$ of the Seyfert galaxy NGC1275 at a spatial resolution of $\sim$70 pc. From the fitting of the CO absorption bandheads in the K-band, we derive a stellar velocity dispersion of $265\pm26$ km s$^{-1}$, which implies a black hole mass of $M_{\rm SMBH}=1.1^{+0.9}_{-0.5}\times10^9$ M$_\odot$. We find hot ($T\gtrsim1000$ K) molecular and ionized outflows with velocities of up to 2 000 km s$^{-1}$ and mass outflow rates of $2.7\times10^{-2} {\rm M_\odot}$ yr$^{-1}$ and $1.6 {\rm M_\odot}$ yr$^{-1}$, respectively, in each of these gas phases. The kinetic power of the ionized outflows corresponds to only 0.05 per cent of the luminosity of the AGN of NGC 1275, indicating that they are not powerful enough to provide significant AGN feedback, but may be effective in redistributing the gas in the central region of the galaxy. The AGN driven outflows seem to be responsible for the shocks necessary to produce the observed H$_2$ and [Fe II] line emission. 
",Ionized and hot molecular outflows in the inner 500 pc of NGC1275,9,"['New paper on arXiv:\xa0\nProviding observational constraints in the multi-phase gas kinematics on 10-100 pc scales of galaxies is fundamental to better understand the role feedback from Active Galactic Nuclei (AGN) in the evolution of galaxies.', 'We use Gemini NIFS to map de hot molecular and ionized gas kinematics in the inner 500 pc of NGC1275, the brightest galaxy of the Perseus cluster. https://t.co/48wCobXBRQ', 'From the fitting of the CO absorption bandheads in the K-band, we derive a stellar velocity dispersion of\xa0 ~265\xa0 km/s , which implies a black hole mass of\xa0 ~1.1E9 solar masses. https://t.co/VHASWFxqW3', 'The gas kinematics present\xa0two components, one (narrow) due to gas at velocities close to the systemic\xa0velocity of the galaxy and another (broad) due to outflows produced by the central active nucleus. https://t.co/Mpna6Hn0sY', 'We find hot (T> 1000 K) molecular and ionized outflows with velocities of up to 2000 km/s and mass outflow rates of 2.7E-2 solar masses/year and 1.6 solar masses/year, respectively in each of these gas phases.', 'The kinetic power of the ionized outflows corresponds to only 0.05% of the luminosity of the AGN of NGC 1275, indicating that they are not powerful enough to provide significant AGN feedback, but may be effective in redistributing the gas in the central region of the galaxy.', 'The H2 and [FeII] emission is produced mainly by shocks produced by the AGN winds, as revealed by the emission-line ratio diagnostic diagram and the observed kinematics. https://t.co/AZ8M83ezco', 'My collaborators in this work are @thaisa_sb (UFRGS), Nadia Zakamska (JHU) and @RiffelRogerio (UFRGS).', '@venkatessh Thanks! Indeed, the ALMA data of NGC1275 were already published by Nagai et al. The ALMA CO maps show the compact molecular disk (<100 pc), but not the outflow. Definitely, we should talk about ALMA data for some nearby Seyferts that we have NIFS data...']",20,06,1832
177,97,1170926863214137344,985110481345155072,Eren M. Elçi,"Often it's a useful strategy to embed a math problem into a larger space to solve it. In this paper we use this idea and construct a new coupling of "geometric" representations of the random-cluster model to design a new MCMC algorithm for the RC-model. Beyond it's application to design a new MCMC algorithm, we can use our coupling to construct a perfect sampling algorithm for the q-flow representation of the Potts model. This uses the idea of coupling of MCMC chains, see here our work here: ",http://arxiv.org/abs/1909.02719,"Potts spin systems play a fundamental role in statistical mechanics and quantum field theory, and can be studied within the spin, the Fortuin-Kasteleyn (FK) bond or the $q$-flow (loop) representation. We introduce a Loop-Cluster (LC) joint model of bond-occupation variables interacting with $q$-flow variables, and formulate a LC algorithm that is found to be in the same dynamical universality as the celebrated Swendsen-Wang algorithm. This leads to a theoretical unification for all the representations, and numerically, one can apply the most efficient algorithm in one representation and measure physical quantities in others. Moreover, by using the LC scheme, we construct a hierarchy of geometric objects that contain as special cases the $q$-flow clusters and the backbone of FK clusters, the exact values of whose fractal dimensions in two dimensions remain as an open question. Our work not only provides a unified framework and an efficient algorithm for the Potts model, but also brings new insights into rich geometric structures of the FK clusters. ",Loop-Cluster Coupling and Algorithm for Classical Statistical Models,3,"['Often it's a useful strategy to embed a math problem into a larger space to solve it. In this paper we use this idea and construct a new coupling of "geometric" representations of the random-cluster model to design a new MCMC algorithm for the RC-model. ', 'Beyond it's application to design a new MCMC algorithm, we can use our coupling to construct a perfect sampling algorithm for the q-flow representation of the Potts model. This uses the idea of coupling of MCMC chains, see here our work here: https://t.co/msiDh1kWvl', 'https://t.co/kLljH3R3Qm']",19,09,517
178,190,1314582759797596167,1184198482455945218,Yuber F Perez-G,"New paper with Jessica Turner, an awesome collaborator! We explore in detail non-resonant leptogenesis in a black hole dominated early Universe, take a look We have found that black holes can modify substantially the final lepton asymmetry depending on the interplay between the products of the evaporation and the epoch of the Universe. For instance, if black holes evaporate before the thermal leptogenesis, the impact would be minimal If, on the other hand, the evaporation occurs way after the thermal leptogenesis era, the outcome would depend on the right-handed neutrino masses. For large masses (> 10^9 GeV) the evaporation could enhance the asymmetry. but if RH neutrinos are lighter, the black hole contribution would not be sufficient, and the effect would be the contrary: BHs would diminish the lepton asymmetry because of the extra radiation produced. So, we find that for the intermediate scale leptogenesis, BHs always tend to decrease the asymmetry, even for very fine tunned scenarios! Then, if it's proven that the Universe had a BH dominated era, this could put tension on this leptogenesis case. This was a fun project! I'm looking forward to understanding the effects of BHs in other leptogenesis scenarios! @tabrizi_zahra Not so many đ",https://arxiv.org/abs/2010.03565,"We perform the first numerical calculation of the interplay between thermal and black hole induced leptogenesis, demonstrating that the right-handed neutrino surplus produced during the evaporation only partially mitigates the entropy dilution suffered by the thermal component. As such, the intermediate-mass regime of the right-handed neutrinos, $10^6{\rm~GeV} \lesssim M_{N} \lesssim 10^{9}{\rm~GeV}$, could not explain the observed baryon asymmetry even for fine-tuned scenarios if there existed a primordial black hole dominated era, consistent with initial black hole masses of $M_i \gtrsim \mathcal{O}\left(1\right)$ kg. Detection of the gravitational waves emitted from the same primordial black holes would place intermediate-scale thermal leptogenesis under tension. ","Assessing the tension between a black hole dominated early universe and
leptogenesis",7,"['New paper with Jessica Turner, an awesome collaborator!\nWe explore in detail non-resonant leptogenesis in a black hole dominated early Universe, take a look\n ', 'We have found that black holes can modify substantially the final lepton asymmetry depending on the interplay between the products of the evaporation and the epoch of the Universe. For instance, if black holes evaporate before the thermal leptogenesis, the impact would be minimal', 'If, on the other hand, the evaporation occurs way after the thermal leptogenesis era, the outcome would depend on the right-handed neutrino masses. For large masses (> 10^9 GeV) the evaporation could enhance the asymmetry.', 'but if RH neutrinos are lighter, the black hole contribution would not be sufficient, and the effect would be the contrary: BHs would diminish the lepton asymmetry because of the extra radiation produced.', ""So, we find that for the intermediate scale leptogenesis, BHs always tend to decrease the asymmetry, even for very fine tunned scenarios! Then, if it's proven that the Universe had a BH dominated era, this could put tension on this leptogenesis case."", ""This was a fun project! I'm looking forward to understanding the effects of BHs in other leptogenesis scenarios!"", '@tabrizi_zahra Not so many đ']",20,10,1275
179,192,1255832795341115393,319518346,Jose Camacho-Collados,"Are unambiguous words useful for Word Sense Disambiguation? Surprisingly yes! Check out our latest work with @danielbloureiro to find out why and how. Paper đ (Note: for full transparency, we've also made our #acl2020nlp reviews available) #NLProc 1/2 Bonus: we freely release UWA, a large sense-annotated corpus based on OpenWebText and Wikipedia with unambiguous word annotations. Sense embeddings (based on this corpus and Semcor) to perform state-of-the-art WSD are also available for download. 2/2 ",https://arxiv.org/abs/2004.14325,"State-of-the-art methods for Word Sense Disambiguation (WSD) combine two different features: the power of pre-trained language models and a propagation method to extend the coverage of such models. This propagation is needed as current sense-annotated corpora lack coverage of many instances in the underlying sense inventory (usually WordNet). At the same time, unambiguous words make for a large portion of all words in WordNet, while being poorly covered in existing sense-annotated corpora. In this paper, we propose a simple method to provide annotations for most unambiguous words in a large corpus. We introduce the UWA (Unambiguous Word Annotations) dataset and show how a state-of-the-art propagation-based model can use it to extend the coverage and quality of its word sense embeddings by a significant margin, improving on its original results on WSD. ","Don't Neglect the Obvious: On the Role of Unambiguous Words in Word
Sense Disambiguation",2,"[""Are unambiguous words useful for Word Sense Disambiguation? Surprisingly yes! Check out our latest work with @danielbloureiro to find out why and how. \n\nPaper đ \n\n(Note: for full transparency, we've also made our #acl2020nlp reviews available) #NLProc 1/2 "", 'Bonus: we freely release UWA, a large sense-annotated corpus based on OpenWebText and Wikipedia with unambiguous word annotations. Sense embeddings (based on this corpus and Semcor) to perform state-of-the-art WSD are also available for download. 2/2\n\nhttps://t.co/CU0s2DiWcF']",20,04,524
180,25,1021311817589682176,739505640326987777,Guido Roberts-Borsani,"New paper out today! Very pleased with this one: looking at cold gas inflows and outflows in local, normal galaxies using stacking techniques of SDSS spectra - aiming to constrain the strength/properties of the flows and whether they can quench the galaxy! In the local Universe, two populations of galaxies are observed: blue+star-forming and red+passive galaxies that have stopped forming stars. The general consensus is that galaxies transit from this blue population to the red one - ie., something is quenching the star formation. One of the major unanswered questions in astronomy is how does this happen? Here we look at the role of galaxy outflows and aim to constrain whether they can expel enough Hydrogen gas out of the galaxy - thereby removing the ""fuel"" for star formation - to ""quench"" the host. Many authors have done great work on this, but generally looked at extreme objects with somewhat incomplete data sets. Here we expand on this by looking at all galaxy populations with one of the largest astronomical surveys to date (@sdssurveys).",https://arxiv.org/abs/1807.07575,"We perform a stacking analysis of the neutral \nad\,$\lambda\lambda$5889,5895\,\AA\ ISM doublet using the SDSS DR7 spectroscopic data set to probe the prevalence and characteristics of cold (T\,$\lesssim$\,10$^{4}$\,K) galactic-scale gas flows in local (0.025$\leqslant z\leqslant$0.1) inactive and AGN-host galaxies across the SFR-M$_{*}$ plane. We find low-velocity outflows to be prevalent in regions of high SFRs and stellar masses (10 $\lesssim$log M$_{*}$/M$_{\odot}$ $\lesssim$ 11.5), however we do not find any detections in the low mass (log M$_{*}$/M$_{\odot}$ $\lesssim$ 10) regime. We also find tentative detections of inflowing gas in high mass galaxies across the star-forming population. We derive mass outflow rates in the range of 0.14-1.74\,M$_{\odot}$yr$^{-1}$ and upper limits on inflow rates <1\,M$_{\odot}$yr$^{-1}$, allowing us to place constraints on the mass loading factor ($\eta$=$\dot{M}_{\text{out}}$/SFR) for use in simulations of the local Universe. We discuss the fate of the outflows by comparing the force provided by the starburst to the critical force needed to push the outflow outward, and find the vast majority of the outflows unlikely to escape the host system. Finally, as outflow detection rates and central velocities do not vary strongly with the presence of a (weak) active supermassive black hole, we determine that star formation appears to be the primary driver of outflows at $z\sim$0. ","The prevalence and properties of cold gas inflows and outflows around
galaxies in the local Universe",4,"['New paper out today! \nVery pleased with this one: looking at cold gas inflows and outflows in local, normal galaxies using stacking techniques of SDSS spectra - aiming to constrain the strength/properties of the flows and whether they can quench the galaxy! ', 'In the local Universe, two populations of galaxies are observed: blue+star-forming and red+passive galaxies that have stopped forming stars. The general consensus is that galaxies transit from this blue population to the red one - ie., something is quenching the star formation. https://t.co/joi7BKF32y', 'One of the major unanswered questions in astronomy is how does this happen? Here we look at the role of galaxy outflows and aim to constrain whether they can expel enough Hydrogen gas out of the galaxy - thereby removing the ""fuel"" for star formation - to ""quench"" the host. https://t.co/fLvv55ZLfR', 'Many authors have done great work on this, but generally looked at extreme objects with somewhat incomplete data sets. Here we expand on this by looking at all galaxy populations with one of the largest astronomical surveys to date (@sdssurveys).']",18,07,1085
181,86,1425649945026076673,2467767397,Howard Cohl,"In this paper we exploit this relationship to highlight potentially new properties of Ferrers functions which are elucidated by properties of Gegenbauer/ultraspherical polynomial such as e.g., related to orthogonality, Poisson kernel and Poisson-Darboux. ",https://arxiv.org/abs/2108.03276,"Using the direct relation between the Gegenbauer polynomials and the Ferrers function of the first kind, we compute interrelations between certain Jacobi polynomials, Meixner polynomials, and the Ferrers function of the first kind. We then compute Rodrigues-type and orthogonality relations for Ferrers functions of the first and second kinds. In the remainder of the paper using the relation between Gegenbauer polynomials and the Ferrers function of the first kind we derive connection and linearization relations, some definite integral and series expansions, some asymptotic expansions of Mehler-Heine type, Christoffel-Darboux summation formulas, and infinite series closure relations (Dirac delta distribution). ","On the relation between Gegenbauer polynomials and the Ferrers function
of the first kind",1,"['In this paper we exploit this relationship to highlight potentially new properties of Ferrers functions which are elucidated by properties of Gegenbauer/ultraspherical polynomial such as e.g., related to orthogonality, Poisson kernel and Poisson-Darboux. ']",21,08,261
182,55,1174351372549943296,243126428,Jaemin Cho,"New #emnlp2019 paper on diverse sequence generation with @seo_minjoon @HannaHajishirzi TL;DR) Separating diversification from generation improves both diversity & accuracy in question generation and summarization • Paper: • Code: @seo_minjoon @HannaHajishirzi We explicitly separate diversification from generation using a mixture-of-experts content selection module (called Selector) that guides an encoder-decoder model. @seo_minjoon @HannaHajishirzi Two-stage method: (1) Diversification: Selector samples different binary masks (called focus; m1, m2, and m3 in the figure) on a source sequence. (2) Generation: an encoder-decoder model generates different sequences from the source sequence guided by different masks. @seo_minjoon @HannaHajishirzi Not only does this improve the diversity of the generated sequences, but also improves accuracy (high fidelity) of them, since conventional MLE models often learn suboptimal mapping that is in the middle of the targets but not near any of them.",https://arxiv.org/abs/1909.01953,"Generating diverse sequences is important in many NLP applications such as question generation or summarization that exhibit semantically one-to-many relationships between source and the target sequences. We present a method to explicitly separate diversification from generation using a general plug-and-play module (called SELECTOR) that wraps around and guides an existing encoder-decoder model. The diversification stage uses a mixture of experts to sample different binary masks on the source sequence for diverse content selection. The generation stage uses a standard encoder-decoder model given each selected content from the source sequence. Due to the non-differentiable nature of discrete sampling and the lack of ground truth labels for binary mask, we leverage a proxy for ground truth mask and adopt stochastic hard-EM for training. In question generation (SQuAD) and abstractive summarization (CNN-DM), our method demonstrates significant improvements in accuracy, diversity and training efficiency, including state-of-the-art top-1 accuracy in both datasets, 6% gain in top-5 accuracy, and 3.7 times faster training over a state of the art model. Our code is publicly available at this https URL ",Mixture Content Selection for Diverse Sequence Generation,4,"['New #emnlp2019 paper on diverse sequence generation with @seo_minjoon @HannaHajishirzi\nTL;DR) Separating diversification from generation improves both diversity & accuracy in question generation and summarization\n• Paper: \n• Code: ', '@seo_minjoon @HannaHajishirzi We explicitly separate diversification from generation using a mixture-of-experts content selection module (called Selector) that guides an encoder-decoder model.', '@seo_minjoon @HannaHajishirzi Two-stage method:\n(1) Diversification: Selector samples different binary masks (called focus; m1, m2, and m3 in the figure) on a source sequence.\n\n(2) Generation: an encoder-decoder model generates different sequences from the source sequence guided by different masks.', '@seo_minjoon @HannaHajishirzi Not only does this improve the diversity of the generated sequences, but also improves accuracy (high fidelity) of them, since conventional MLE models often learn suboptimal mapping that is in the middle of the targets but not near any of them.']",19,09,1017
183,151,1324354758463918096,76009287,Desh Raj,"đ New paper on ArXiv đ DOVER-Lap: A method for combining overlap-aware diarization outputs Paper: Code: Thread about the work đ 1/n Ensembles usually improve over single-best systems. But ensembling diarization (""who spoke when"") systems is hard for 2 reasons: 1. Output labels are not aligned across systems 2. Overlap-aware systems may predict multiple labels in a region 2/n We propose a method to achieve this combination in two stages: (i) mapping all system outputs to a common label space, and (ii) rank-weighted voting which considers overlapping speakers. 3/n We formulate (i) as a weighted k-partite graph matching problem, and propose a greedy maximal matching algorithm based on a ""global cost tensor"" containing all pairwise distances between predicted labels. 4/n For (ii), we vote among the systems and predict the top N labels, where N is the weighted mean of number of labels rounded to the nearest integer. In simple words, if systems A, B, and C predict 2, 1, and 2 speakers in a region, the combined output contains 2 speakers. 5/n We combined outputs from several new diarization methods (RPN, TS-VAD, etc.) -> consistent improvements across different overlap conditions (see results on LibriCSS in the table): 6/n More experiments in paper: - Combining systems on AMI meeting data - Using DOVER-Lap for multi-channel fusion (TL;DR -> does better than WPE + beamforming) 7/n No training or GPUs needed! I did all the experiments on my MacBook Pro CPU and they ran in a few seconds. Consider using DOVER-Lap when you are submitting your diarization systems to future editions of DIHARD or VoxSRC đ n/n",https://arxiv.org/abs/2011.01997,"Several advances have been made recently towards handling overlapping speech for speaker diarization. Since speech and natural language tasks often benefit from ensemble techniques, we propose an algorithm for combining outputs from such diarization systems through majority voting. Our method, DOVER-Lap, is inspired from the recently proposed DOVER algorithm, but is designed to handle overlapping segments in diarization outputs. We also modify the pair-wise incremental label mapping strategy used in DOVER, and propose an approximation algorithm based on weighted k-partite graph matching, which performs this mapping using a global cost tensor. We demonstrate the strength of our method by combining outputs from diverse systems -- clustering-based, region proposal networks, and target-speaker voice activity detection -- on AMI and LibriCSS datasets, where it consistently outperforms the single best system. Additionally, we show that DOVER-Lap can be used for late fusion in multichannel diarization, and compares favorably with early fusion methods like beamforming. ",DOVER-Lap: A Method for Combining Overlap-aware Diarization Outputs,8,"['đ New paper on ArXiv đ\n\nDOVER-Lap: A method for combining overlap-aware diarization outputs\n\nPaper: \nCode: \n\nThread about the work đ\n\n1/n', 'Ensembles usually improve over single-best systems. But ensembling diarization (""who spoke when"") systems is hard for 2 reasons:\n\n1. Output labels are not aligned across systems\n2. 
Overlap-aware systems may predict multiple labels in a region\n\n2/n', 'We propose a method to achieve this combination in two stages: (i) mapping all system outputs to a common label space, and (ii) rank-weighted voting which considers overlapping speakers.\n\n3/n', 'We formulate (i) as a weighted k-partite graph matching problem, and propose a greedy maximal matching algorithm based on a ""global cost tensor"" containing all pairwise distances between predicted labels.\n\n4/n https://t.co/A48IjoIELM', 'For (ii), we vote among the systems and predict the top N labels, where N is the weighted mean of number of labels rounded to the nearest integer. In simple words, if systems A, B, and C predict 2, 1, and 2 speakers in a region, the combined output contains 2 speakers.\n\n5/n https://t.co/d2acDXCGMT', 'We combined outputs from several new diarization methods (RPN, TS-VAD, etc.) -> consistent improvements across different overlap conditions (see results on LibriCSS in the table):\n\n6/n https://t.co/HuFc7174Fs', 'More experiments in paper: \n- Combining systems on AMI meeting data\n- Using DOVER-Lap for multi-channel fusion (TL;DR -> does better than WPE + beamforming)\n\n7/n', 'No training or GPUs needed! I did all the experiments on my MacBook Pro CPU and they ran in a few seconds.\n\nConsider using DOVER-Lap when you are submitting your diarization systems to future editions of DIHARD or VoxSRC đ\n\nn/n']",20,11,1662
184,78,1506020820769914880,45724845,Swarat Chaudhuri,"How do you learn neural networks that respect end-to-end safety requirements of larger systems of which they are a part? Our new ICLR paper, led by @ChenxiYang001 (), explores this question. (1/n) The SOTA answer to the question is to train the NNs first, then show that the resulting system is safe. But why would you expect it to safe if the NNs haven't seen the property during training? And what do you do if safety checking fails? (2/n) Our answer: give the learner access to a signal from a formal safety analyzer during training. This idea was previously used to prove properties of isolated NNs: . Does it also apply to systems where NNs coexist with nondifferentiable symbolic code? (3/n) Our paper takes a first step on this problem. The main ideas are to compute a safety loss using a probabilistic symbolic executor and to backprop gradients of this loss through nondifferentiable operations using a stochastic gradient estimator. (4/n) I am really excited about this direction. The distinction between ""ML/AI"" and ""software"" is spurious in 2022 -- soon, all systems will have ML components. Since you can't debug NNs manually, connecting learning and verification is key. (5/n) And there are so many open technical challenges! We need better sound approximation techniques for NNs. We need to figure out how to handle soft requirements and unknown unknowns. We need to balance precision and scalability. (6/n) Fortunately, more and more folks are working on FM + ML, so I am optimistic about the field. Feel free to ping us if you want to chat about the area or our work. (7/7)",https://arxiv.org/abs/2203.07671,"We study the problem of learning worst-case-safe parameters for programs that use neural networks as well as symbolic, human-written code. Such neurosymbolic programs arise in many safety-critical domains. However, because they can use nondifferentiable operations, it is hard to learn their parameters using existing gradient-based approaches to safe learning. Our approach to this problem, Differentiable Symbolic Execution (DSE), samples control flow paths in a program, symbolically constructs worst-case ""safety losses"" along these paths, and backpropagates the gradients of these losses through program operations using a generalization of the REINFORCE estimator. We evaluate the method on a mix of synthetic tasks and real-world benchmarks. Our experiments show that DSE significantly outperforms the state-of-the-art DiffAI method on these tasks. ",Safe Neurosymbolic Learning with Differentiable Symbolic Execution,7,"['How do you learn neural networks that respect end-to-end safety requirements of larger systems of which they are a part? Our new ICLR paper, led by @ChenxiYang001 (), explores this question. (1/n) ', ""The SOTA answer to the question is to train the NNs first, then show that the resulting system is safe. But why would you expect it to safe if the NNs haven't seen the property during training? And what do you do if safety checking fails? (2/n)"", 'Our answer: give the learner access to a signal from a formal safety analyzer during training. This idea was previously used to prove properties of isolated NNs: https://t.co/31vipcjNZl. Does it also apply to systems where NNs coexist with nondifferentiable symbolic code? (3/n)', 'Our paper takes a first step on this problem. 
The main ideas are to compute a safety loss using a probabilistic symbolic executor and to backprop gradients of this loss through nondifferentiable operations using a stochastic gradient estimator. (4/n)', 'I am really excited about this direction. The distinction between ""ML/AI"" and ""software"" is spurious in 2022 -- soon, all systems will have ML components. Since you can\'t debug NNs manually, connecting learning and verification is key. (5/n)', 'And there are so many open technical challenges! We need better sound approximation techniques for NNs. We need to figure out how to handle soft requirements and unknown unknowns. We need to balance precision and scalability. (6/n)', 'Fortunately, more and more folks are working on FM + ML, so I am optimistic about the field. Feel free to ping us if you want to chat about the area or our work. (7/7)']",22,03,1616
185,0,1293491058941276160,950844713657094145,Maxwell Ramstead,"Our new paper ""Is the free-energy principle a formal theory of semantics? From variational density dynamics to neural and phenotypic representations"" () was the subject of a great discussion on team_comm podcast's second episode. Highly recommend following! ",https://arxiv.org/abs/2007.09291,"The aim of this paper is twofold: (1) to assess whether the construct of neural representations plays an explanatory role under the variational free-energy principle and its corollary process theory, active inference; and (2) if so, to assess which philosophical stance - in relation to the ontological and epistemological status of representations - is most appropriate. We focus on non-realist (deflationary and fictionalist-instrumentalist) approaches. We consider a deflationary account of mental representation, according to which the explanatorily relevant contents of neural representations are mathematical, rather than cognitive, and a fictionalist or instrumentalist account, according to which representations are scientifically useful fictions that serve explanatory (and other) aims. After reviewing the free-energy principle and active inference, we argue that the model of adaptive phenotypes under the free-energy principle can be used to furnish a formal semantics, enabling us to assign semantic content to specific phenotypic states (the internal states of a Markovian system that exists far from equilibrium). We propose a modified fictionalist account: an organism-centered fictionalism or instrumentalism. We argue that, under the free-energy principle, pursuing even a deflationary account of the content of neural representations licenses the appeal to the kind of semantic content involved in the aboutness or intentionality of cognitive systems; our position is thus coherent with, but rests on distinct assumptions from, the realist position. We argue that the free-energy principle thereby explains the aboutness or intentionality in living systems and hence their capacity to parse their sensory stream using an ontology or set of semantic factors. ","Is the free-energy principle a formal theory of semantics? From
variational density dynamics to neural and phenotypic representations",1,"['Our new paper ""Is the free-energy principle a formal theory of semantics? From variational density dynamics to neural and phenotypic representations"" () was the subject of a great discussion on team_comm podcast\'s second episode.\nHighly recommend following! ']",20,07,270
186,115,1423486044457025536,917589174,Harold Erbin,"New paper with @thesfinox, Robin Schneider and Mohamed Tamaazousti: we compute Hodge numbers for complete intersection #CalabiYau using #DeepLearning. Accuracy for 80% training ratio: 100% for h(1,1) and h(2,1), 96% for h(3,1), 83% for h(2,2) @thesfinox And it's a new milestone since it's my 30th paper!",https://arxiv.org/abs/2108.02221,"We continue earlier efforts in computing the dimensions of tangent space cohomologies of Calabi-Yau manifolds using deep learning. In this paper, we consider the dataset of all Calabi-Yau four-folds constructed as complete intersections in products of projective spaces. Employing neural networks inspired by state-of-the-art computer vision architectures, we improve earlier benchmarks and demonstrate that all four non-trivial Hodge numbers can be learned at the same time using a multi-task architecture. With 30% (80%) training ratio, we reach an accuracy of 100% for $h^{(1,1)}$ and 97% for $h^{(2,1)}$ (100% for both), 81% (96%) for $h^{(3,1)}$, and 49% (83%) for $h^{(2,2)}$. Assuming that the Euler number is known, as it is easy to compute, and taking into account the linear constraint arising from index computations, we get 100% total accuracy. ",Deep multi-task mining Calabi-Yau four-folds,2,"['New paper with @thesfinox, Robin Schneider and Mohamed Tamaazousti: we compute Hodge numbers for complete intersection #CalabiYau using #DeepLearning.\nAccuracy for 80% training ratio: 100% for h(1,1) and h(2,1), 96% for h(3,1), 83% for h(2,2)\n', ""@thesfinox And it's a new milestone since it's my 30th paper!""]",21,08,311
187,20,1222432508928843776,2901786546,Pau Ramos,"#GaiaDR2 keeps on delivering! Our new paper is now on the ArXiv: In the work led by @AntojaTeresa we have detected the Sagittarius stream all around the sky from proper motions alone. The result: a ~300.000 stars sample (dwarf+tails) [1/3] And all that... Without downloading a single star!! Instead, we pulled the proper motion histograms directly from the #GaiaMission Archive and processed them on the fly. You can find more info at . Stay tuned for news in the upcoming weeks! [2/3] Once we isolated the proper motion peaks belonging to Sagittarius, we finally downloaded the individual stars of the stream. The catalogue will be available soon... In the meantime, we leave you with this colour-magnitude diagram [3/3]. ",https://arxiv.org/abs/2001.10012,"We aim to measure the proper motion along the Sagittarius stream that is the missing piece to determine its full 6D phase space coordinates. We conduct a blind search of over-densities in proper motion from Gaia DR2 in a broad region around the Sagittarius stream by applying wavelet transform techniques. We find that for most of the sky patches, the highest intensity peaks delineate the path of the Sagittarius stream. The 1500 peaks identified depict a continuous sequence spanning almost $2\pi$ in the sky, only obscured when the stream crosses the Galactic disk. Altogether, around $100\,000$ stars potentially belong to the stream as indicated by a coarse inspection of the colour-magnitude diagrams. From these stars, we determine the proper motion along the Sagittarius stream, making it the proper motion sequence with the largest span and continuity ever measured for a stream. A first comparison with existing N-body models of the stream reveals some discrepancies, especially near the pericentre of the trailing arm and an overestimation of the total proper motion for the leading arm. Our study can be the starting point for determining the variation of the population of stars along the stream, the distance to the stream with red clump stars, and the solar motion. It will also allow a much better measurement of the Milky Way potential. ",An all-sky proper motion map of the Sagittarius stream using Gaia DR2,3,"['#GaiaDR2 keeps on delivering! Our new paper is now on the ArXiv: \n\nIn the work led by @AntojaTeresa we have detected the Sagittarius stream all around the sky from proper motions alone. The result: a ~300.000 stars sample (dwarf+tails) [1/3] ', 'And all that... Without downloading a single star!! Instead, we pulled the proper motion histograms directly from the #GaiaMission Archive and processed them on the fly.\n\nYou can find more info at https://t.co/1Z659VaPAW. Stay tuned for news in the upcoming weeks! [2/3]', 'Once we isolated the proper motion peaks belonging to Sagittarius, we finally downloaded the individual stars of the stream. The catalogue will be available soon...\n\nIn the meantime, we leave you with this colour-magnitude diagram [3/3]. https://t.co/q72uZ67zV4']",20,01,750
188,150,1309500042118660096,3609913993,Jeff Cain,"Check out my latest paper on @arxiv, where we use CNTs as "nano reactors" to make materials that shouldn't exist! We synthesize new 1D members of the transition metal trichalcogenides (TMT), which do not exist in bulk - only when stabilized by CNTs! TMTs are canonical examples of superconductors and CDW materials. Some members of this family have never been synthesized(eg TiTe3, NbTe3), and energetically can't exist in bulk. We solve this by synthesizing the materials in CNTs to protect & stabilize them.",https://arxiv.org/abs/2009.10869,"The structure of MX3 transition metal trichalcogenides (TMTs, with M a transition metal and X a chalcogen) is typified by one-dimensional (1D) chains weakly bound together via van der Waals interactions. This structural motif is common across a range of M and X atoms (e.g. NbSe3, HfTe3, TaS3), but not all M and X combinations are stable. We report here that three new members of the MX3 family which are not stable in bulk, specifically NbTe3, VTe3, and TiTe3, can be synthesized in the few- to single-chain limit via nano-confined growth within the stabilizing cavity of multi-walled carbon nanotubes. Transmission electron microscopy (TEM) and atomic-resolution scanning transmission electron microscopy (STEM) reveal the chain-like nature and the detailed atomic structure. The synthesized materials exhibit behavior unique to few-chain quasi-1D structures, such as multi-chain spiraling and a trigonal anti-prismatic rocking distortion in the single-chain limit. Density functional theory (DFT) calculations provide insight into the crystal structure and stability of the materials, as well as their electronic structure. ","Stabilization of NbTe3, VTe3, and TiTe3 via Nanotube Encapsulation",2,"['Check out my latest paper on @arxiv, where we use CNTs as "nano reactors" to make materials that shouldn't exist! We synthesize new 1D members of the transition metal trichalcogenides (TMT), which do not exist in bulk - only when stabilized by CNTs! ', 'TMTs are canonical examples of superconductors and CDW materials. Some members of this family have never been synthesized(eg TiTe3, NbTe3), and energetically can't exist in bulk. We solve this by synthesizing the materials in CNTs to protect & stabilize them.']",20,09,523
189,132,1501214185555959806,959316810448318464,Theo O'Neill,"New paper out today! đ Led by Jon Swift, we detail first results from the recently renovated Thacher Observatory in Ojai, CA. Take a look if you're interested in observatory management, astronomy education, or want to see some pretty light curves! ",https://arxiv.org/abs/2203.02529,"Located on the campus of the Thacher School in Southern California, the Thacher Observatory has a legacy of astronomy research and education that dates back to the late 1950's. In 2016, the observatory was fully renovated with upgrades including a new 0.7-m telescope, a research grade camera, and a slit dome with full automation capabilities. The low-elevation site is bordered by the Los Padres National Forest and therefore affords dark to very dark skies allowing for accurate and precise photometric observations. We present a characterization of the site including sky brightness, weather, and seeing, and we demonstrate the on-sky performance of the facility. Our primary research programs are based around our multi-band photometric capabilities and include photometric monitoring of variable sources, a nearby supernova search and followup program, a quick response transient followup effort, and exoplanet and eclipsing binary light curves. Select results from these programs are included in this work which highlight the broad range of science available to an automated observatory with a moderately sized telescope. ",The Renovated Thacher Observatory and First Science Results,1,"['New paper out today! đ Led by Jon Swift, we detail first results from the recently renovated Thacher Observatory in Ojai, CA. \n\nTake a look if you're interested in observatory management, astronomy education, or want to see some pretty light curves!\n\n']",22,03,255
190,313,1316291822952484866,272682658,Robert Hawkins,"đ„ more EMNLP content for your news feedđ„ we were interested in how recent neural language models integrate lexical/semantic information into their knowledge of grammatical constructions. obviously, we needed to study verb biases. link: in collab with the brilliant @TakaYamakoshi, @adelegoldberg1, and Tom Griffiths, we present the DAIS corpus of 50K acceptability ratings for 5K sentence pairs in the dative alternation, containing 200 different verbs: e.g. how much better is A than B here? A: Ava gave her friend a book. B: Ava gave a book to her friend. what about with a different verb? A: Ava explained her friend a problem. B: Ava explained a problem to her friend. This preference varies widely across verbs! It turns out that larger models account for these verb-specific preferences better than smaller models, and transformer architectures (e.g. GPT-2) do so better than recurrent architectures (e.g. LSTMs), even with comparable param numbers. To begin to understand why, we probed the hidden states of these models as they proceed through the sentence. Immediately upon seeing the verb, GPT2's hidden layers already contains a surprising amount of information about human ratings, and this improved after seeing the 1st arg as a side-note, we also analyzed how these representations are organized as a function of layer depth in GPT2 -- at the verb, human ratings are more decodable at lower layers (corresponding to lexical info?), while later in the sentence, it's only decodable at intermediate layers After a thoughtful suggestion from a reviewer, we also compared these models on an older natural speech corpus (assembled by the inimitable Joan Bresnan) and found similar rankings (vs. the roughly 92% previously achieved from hand-annotated features) We speculate that transformers may be able to learn a better layer-wise pipeline for integrating lexically specific information with representations of higher-level syntactic constructions like the dative & double-object, but further work is needed! We hope our dataset will be useful for future work in both psycholinguistics and NLP, as we push our models to account for subtler phenomena and understand how they deviate from human expectations.",https://arxiv.org/abs/2010.02375,"Languages typically provide more than one grammatical construction to express certain types of messages. A speaker's choice of construction is known to depend on multiple factors, including the choice of main verb -- a phenomenon known as \emph{verb bias}. Here we introduce DAIS, a large benchmark dataset containing 50K human judgments for 5K distinct sentence pairs in the English dative alternation. This dataset includes 200 unique verbs and systematically varies the definiteness and length of arguments. We use this dataset, as well as an existing corpus of naturally occurring data, to evaluate how well recent neural language models capture human preferences. Results show that larger models perform better than smaller models, and transformer architectures (e.g. GPT-2) tend to out-perform recurrent architectures (e.g. LSTMs) even under comparable parameter and training settings. Additional analyses of internal feature representations suggest that transformers may better integrate specific lexical information with grammatical constructions. 
",Investigating representations of verb bias in neural language models,9,"['đ„ more EMNLP content for your news feedđ„\n\nwe were interested in how recent neural language models integrate lexical/semantic information into their knowledge of grammatical constructions. obviously, we needed to study verb biases. \n\nlink: ', 'in collab with the brilliant @TakaYamakoshi, @adelegoldberg1, and Tom Griffiths, we present the DAIS corpus of 50K acceptability ratings for 5K sentence pairs in the dative alternation, containing 200 different verbs:\n\nhttps://t.co/k51FOANwoI', 'e.g. how much better is A than B here?\n\nA: Ava gave her friend a book.\nB: Ava gave a book to her friend.\n\nwhat about with a different verb?\n\nA: Ava explained her friend a problem.\nB: Ava explained a problem to her friend.\n\nThis preference varies widely across verbs! https://t.co/vPPQ2vqknW', 'It turns out that larger models account for these verb-specific preferences better than smaller models, and transformer architectures (e.g. GPT-2) do so better than recurrent architectures (e.g. LSTMs), even with comparable param numbers. https://t.co/RBBm9Li0gJ', ""To begin to understand why, we probed the hidden states of these models as they proceed through the sentence. Immediately upon seeing the verb, GPT2's hidden layers already contains a surprising amount of information about human ratings, and this improved after seeing the 1st arg https://t.co/XBIan8K0JB"", ""as a side-note, we also analyzed how these representations are organized as a function of layer depth in GPT2 -- at the verb, human ratings are more decodable at lower layers (corresponding to lexical info?), while later in the sentence, it's only decodable at intermediate layers https://t.co/AEAAOsYqVW"", 'After a thoughtful suggestion from a reviewer, we also compared these models on an older natural speech corpus (assembled by the inimitable Joan Bresnan) and found similar rankings (vs. the roughly 92% previously achieved from hand-annotated features) https://t.co/TMQUg74IMl https://t.co/oWRFPlgxag', 'We speculate that transformers may be able to learn a better layer-wise pipeline for integrating lexically specific information with representations of higher-level syntactic constructions like the dative & double-object, but further work is needed!', 'We hope our dataset will be useful for future work in both psycholinguistics and NLP, as we push our models to account for subtler phenomena and understand how they deviate from human expectations.']",20,10,2281
191,32,1509084870999486464,3236251346,Mikel Sanz,"New paper "Quantum Genetic Algorithm with Individuals in Multiple Registers" with @raist272 and @Gatgian we propose a subroutine-based fully quantum genetic algorithm distributable among quantum processors @OpenSuperQ @QUANTEK2122 @Ikerbasque @BCAMBilbao The use multiple registers allows us to introduce all the natural elements characterizing genetic algorithms: population-based search with selection of many individuals, crossover and mutation. Surprisingly, the mutation subroutine, has small impact on the average performance 2/3 Finally, and I like this a lot đ, we introduce a quantum channel analysis to prove the exponential convergence of our algorithm and even predict its convergence-ratio. @NquireC @ryc_upvehu @upvehu ",http://arxiv.org/abs/2203.15039,"Genetic algorithms are heuristic optimization techniques inspired by Darwinian evolution, which are characterized by successfully finding robust solutions for optimization problems. Here, we propose a subroutine-based quantum genetic algorithm with individuals codified in independent registers. This distinctive codification allows our proposal to depict all the fundamental elements characterizing genetic algorithms, i.e. population-based search with selection of many individuals, crossover, and mutation. Our subroutine-based construction permits us to consider several variants of the algorithm. For instance, we firstly analyze the performance of two different quantum cloning machines, a key component of the crossover subroutine. Indeed, we study two paradigmatic examples, namely, the biomimetic cloning of quantum observables and the Bu\v zek-Hillery universal quantum cloning machine, observing a faster average convergence of the former, but better final populations of the latter. Additionally, we analyzed the effect of introducing a mutation subroutine, concluding a minor impact on the average performance. Furthermore, we introduce a quantum channel analysis to prove the exponential convergence of our algorithm and even predict its convergence-ratio. This tool could be extended to formally prove results on the convergence of general non-unitary iteration-based algorithms. ",Quantum Genetic Algorithm with Individuals in Multiple Registers,3,"['New paper "Quantum Genetic Algorithm with Individuals in Multiple Registers" with @raist272 and @Gatgian we propose a subroutine-based fully quantum genetic algorithm distributable among quantum processors @OpenSuperQ @QUANTEK2122 @Ikerbasque @BCAMBilbao ', 'The use multiple registers allows us to introduce all the natural elements characterizing genetic algorithms: population-based search with selection of many individuals, crossover and mutation. Surprisingly, the mutation subroutine, has small impact on the average performance 2/3 https://t.co/l02rWINXWi', 'Finally, and I like this a lot đ, we introduce a quantum channel analysis to prove the exponential convergence of our algorithm and even predict its convergence-ratio. @NquireC @ryc_upvehu @upvehu https://t.co/11SXwuDgIt']",22,03,760
192,77,1320501654471532545,2375680693,John P Dickerson,"Counterfactual explanations are core to AI explainability; announcing a new survey paper & intro to that space. @itsArthurAI Research Fellow @Sahil1V, @keeghin, and I welcome feedback & will update our survey as this burgeoning field matures. Paper: ",https://arxiv.org/abs/2010.10596,"Machine learning plays a role in many deployed decision systems, often in ways that are difficult or impossible to understand by human stakeholders. Explaining, in a human-understandable way, the relationship between the input and output of machine learning models is essential to the development of trustworthy machine-learning-based systems. A burgeoning body of research seeks to define the goals and methods of explainability in machine learning. In this paper, we seek to review and categorize research on counterfactual explanations, a specific class of explanation that provides a link between what could have happened had input to a model been changed in a particular way. Modern approaches to counterfactual explainability in machine learning draw connections to the established legal doctrine in many countries, making them appealing to fielded systems in high-impact areas such as finance and healthcare. Thus, we design a rubric with desirable properties of counterfactual explanation algorithms and comprehensively evaluate all currently-proposed algorithms against that rubric. Our rubric provides easy comparison and comprehension of the advantages and disadvantages of different approaches and serves as an introduction to major research themes in this field. We also identify gaps and discuss promising research directions in the space of counterfactual explainability. ",Counterfactual Explanations for Machine Learning: A Review,1,"['Counterfactual explanations are core to AI explainability; announcing a new survey paper & intro to that space. @itsArthurAI Research Fellow @Sahil1V, @keeghin, and I welcome feedback & will update our survey as this burgeoning field matures.\n\nPaper: ']",20,10,263
193,95,1146677258851106816,3368978577,James Kuszlewicz,"Our new paper is out on arXiv! We show how to infer the asteroseismic inclination angle of a star using a hierarchical Bayesian model! . Check out and for the code! A huge thank you also to all my coauthors who made this work possible, including @Thomas_S_North @astrokeat @farrwill @CitationWarrior! @simon4nine Cheers mate! How are you doing in Oz?",https://arxiv.org/abs/1907.01565,"The stellar inclination angle-the angle between the rotation axis of a star and our line of sight-provides valuable information in many different areas, from the characterisation of the geometry of exoplanetary and eclipsing binary systems, to the formation and evolution of those systems. We propose a method based on asteroseismology and a Bayesian hierarchical scheme for extracting the inclination angle of a single star. This hierarchical method therefore provides a means to both accurately and robustly extract inclination angles from red giant stars. We successfully apply this technique to an artificial dataset with an underlying isotropic inclination angle distribution to verify the method. We also apply this technique to 123 red giant stars observed with $\textit{Kepler}$. We also show the need for a selection function to account for possible population-level biases, that are not present in individual star-by-star cases, in order to extend the hierarchical method towards inferring underlying population inclination angle distributions. ",Bayesian hierarchical inference of asteroseismic inclination angles,3,"['Our new paper is out on arXiv! We show how to infer the asteroseismic inclination angle of a star using a hierarchical Bayesian model! . Check out and for the code!', 'A huge thank you also to all my coauthors who made this work possible, including @Thomas_S_North @astrokeat @farrwill @CitationWarrior!', '@simon4nine Cheers mate! How are you doing in Oz?']",19,07,370
194,28,895272749438054400,65010804,Hugh Osborn,"Tau Ceti is one of the closest sunlike stars. A new paper today suggests four super-Earth #exoplanets may orbit it: Of course, a paper in 2012 announced 5 planets... most of which turned out not to exist. Two (e & f) have survived this new analyses though. The team remove lots of noise to reveal the planetary signals which would, if confirmed, be the lowest-amplitude detections ever with RVs. They also find the planets on orbits far more eccentric than should be possible/expected in a stable solar system. All in all: super interesting work! It's suggestive that Tau Ceti has planets, especially the long-period ones far from stellar rotation. BUT I think more data & analyses is needed before we can call them bona fide, confirmed planets (as the spurious 2012 detections showed).",https://arxiv.org/abs/1708.02051,"The removal of noise typically correlated in time and wavelength is one of the main challenges for using the radial velocity method to detect Earth analogues. We analyze radial velocity data of tau Ceti and find robust evidence for wavelength dependent noise. We find this noise can be modeled by a combination of moving average models and ""differential radial velocities"". We apply this noise model to various radial velocity data sets for tau Ceti, and find four periodic signals at 20.0, 49.3, 160 and 642 d which we interpret as planets. We identify two new signals with orbital periods of 20.0 and 49.3 d while the other two previously suspected signals around 160 and 600 d are quantified to a higher precision. The 20.0 d candidate is independently detected in KECK data. All planets detected in this work have minimum masses less than 4$M_\oplus$ with the two long period ones located around the inner and outer edges of the habitable zone, respectively. We find that the instrumental noise gives rise to a precision limit of the HARPS around 0.2 m/s. We also find correlation between the HARPS data and the central moments of the spectral line profile at around 0.5 m/s level, although these central moments may contain both noise and signals. The signals detected in this work have semi-amplitudes as low as 0.3 m/s, demonstrating the ability of the radial velocity technique to detect relatively weak signals. ","Color difference makes a difference: four planet candidates around tau
Ceti",6,"['Tau Ceti is one of the closest sunlike stars. A new paper today suggests four super-Earth #exoplanets may orbit it: ', 'Of course, a paper in 2012 announced 5 planets... most of which turned out not to exist. Two (e & f) have survived this new analyses though.', 'The team remove lots of noise to reveal the planetary signals which would, if confirmed, be the lowest-amplitude detections ever with RVs. https://t.co/2oECrOTMm0', 'They also find the planets on orbits far more eccentric than should be possible/expected in a stable solar system.', ""All in all: super interesting work! It's suggestive that Tau Ceti has planets, especially the long-period ones far from stellar rotation."", 'BUT I think more data & analyses is needed before we can call them bona fide, confirmed planets (as the spurious 2012 detections showed).']",17,08,807
195,56,1395500630614106114,2818695390,Sasho Nikolov,"New paper up, with amazing U of T student Deepanshu Kush and Haohua Tang (who was an undergrad while working on this): Near Neighbor Search via Efficient Average Distortion Embeddings . [1/9] The goal in approximate near neighbor search (ANN): preprocess n points P (in some metric space) into a small data structure s.t. queries can be answered quickly: given a new point q, if P has some x* that's close to q, return a point x in P that's approximately close to q. [2/9] We want to answer a query in less time than it takes to scan P, while keeping space and preprocessing time low. Approximation is necessary for this, but how much? It depends on how we measure distance. With Euclidean or Manhattan distance, constant approx is possible. [3/9] What about other distances? You can approximate your favorite metric by a subset of Euclidean/Manhattan space, i.e., find a low distortion embedding. You can get lots of non-trivial results like this, but you are stuck with bad approximation poly(d) even for ell_p^d spaces [4/9] In papers with Andoni, Naor, @ilyaraz2, and Waingarten we showed that you can get very good approximation poly(log d) for *any* distance given by a norm, much better than embeddings (@QuantaMagazine article: ) [5/9] Big caveat: the data structures require exponential preprocessing. This is what the new results improve: for ell_p and other spaces, we give data structures with the same approx as ANNRW but efficient preprocessing, via a new general method that could work for all norms. [6/9] The idea is to use an embedding into Euclidean space with a much weaker guarantee: low *average* distortion, rather than low distortion. I.e. the embedding should not expand distances, but it can contract them, as long as the average distance between pts in P stays large. [7/9] Turns out that such embeddings are enough to get good random hash functions, and from them good data structures. We also construct low avg distortion embeddings for ell_p and other spaces. Finally, a fascinating open problem: [8/9] Find an efficiently computable embedding of (the 1/2-snowflake) of any d-dimensional norm into Euclidean space with avg distortion sqrt(log d). Naor proved () these exist, but did not give an explicit embedding, let alone an algorithm to compute it. [9/9]",https://arxiv.org/abs/2105.04712,"A recent series of papers by Andoni, Naor, Nikolov, Razenshteyn, and Waingarten (STOC 2018, FOCS 2018) has given approximate near neighbour search (NNS) data structures for a wide class of distance metrics, including all norms. In particular, these data structures achieve approximation on the order of $p$ for $\ell_p^d$ norms with space complexity nearly linear in the dataset size $n$ and polynomial in the dimension $d$, and query time sub-linear in $n$ and polynomial in $d$. The main shortcoming is the exponential in $d$ pre-processing time required for their construction. In this paper, we describe a more direct framework for constructing NNS data structures for general norms. More specifically, we show via an algorithmic reduction that an efficient NNS data structure for a given metric is implied by an efficient average distortion embedding of it into $\ell_1$ or into Euclidean space. In particular, the resulting data structures require only polynomial pre-processing time, as long as the embedding can be computed in polynomial time. 
As a concrete instantiation of this framework, we give an NNS data structure for $\ell_p$ with efficient pre-processing that matches the approximation factor, space and query complexity of the aforementioned data structure of Andoni et al. On the way, we resolve a question of Naor (Analysis and Geometry in Metric Spaces, 2014) and provide an explicit, efficiently computable embedding of $\ell_p$, for $p \ge 2$, into $\ell_2$ with (quadratic) average distortion on the order of $p$. We expect our approach to pave the way for constructing efficient NNS data structures for all norms. ",Near Neighbor Search via Efficient Average Distortion Embeddings,9,"['New paper up, with amazing U of T student Deepanshu Kush and Haohua Tang (who was an undergrad while working on this): Near Neighbor Search via Efficient Average Distortion Embeddings . [1/9]', 'The goal in approximate near neighbor search (ANN): preprocess n points P (in some metric space) into a small data structure s.t. queries can be answered quickly: given a new point q, if P has some x* that's close to q, return a point x in P that's approximately close to q. [2/9]', 'We want to answer a query in less time than it takes to scan P, while keeping space and preprocessing time low. Approximation is necessary for this, but how much? It depends on how we measure distance. With Euclidean or Manhattan distance, constant approx is possible. [3/9]', 'What about other distances? You can approximate your favorite metric by a subset of Euclidean/Manhattan space, i.e., find a low distortion embedding. You can get lots of non-trivial results like this, but you are stuck with bad approximation poly(d) even for ell_p^d spaces [4/9]', 'In papers with Andoni, Naor, @ilyaraz2, and Waingarten we showed that you can get very good approximation poly(log d) for *any* distance given by a norm, much better than embeddings (@QuantaMagazine article: https://t.co/KYqJqt3sjP) [5/9]', 'Big caveat: the data structures require exponential preprocessing. This is what the new results improve: for ell_p and other spaces, we give data structures with the same approx as ANNRW but efficient preprocessing, via a new general method that could work for all norms. [6/9]', 'The idea is to use an embedding into Euclidean space with a much weaker guarantee: low *average* distortion, rather than low distortion. I.e. the embedding should not expand distances, but it can contract them, as long as the average distance between pts in P stays large. [7/9]', 'Turns out that such embeddings are enough to get good random hash functions, and from them good data structures. We also construct low avg distortion embeddings for ell_p and other spaces. Finally, a fascinating open problem: [8/9]', 'Find an efficiently computable embedding of (the 1/2-snowflake) of any d-dimensional norm into Euclidean space with avg distortion sqrt(log d). Naor proved (https://t.co/p63oPk5NV9) these exist, but did not give an explicit embedding, let alone an algorithm to compute it. [9/9]']",21,05,2308
196,18,1034363142757908480,809336237194706944,Rico Sennrich,"Why Self-Attention? New #EMNLP2018 paper doing targeted evaluation of different NMT architectures. Analysis shows that word sense disambiguation is a big strength of Transformer, but not long-distance dependencies, as speculated by Vaswani et al. (2017). ",https://arxiv.org/abs/1808.08946,"Recently, non-recurrent architectures (convolutional, self-attentional) have outperformed RNNs in neural machine translation. CNNs and self-attentional networks can connect distant words via shorter network paths than RNNs, and it has been speculated that this improves their ability to model long-range dependencies. However, this theoretical argument has not been tested empirically, nor have alternative explanations for their strong performance been explored in-depth. We hypothesize that the strong performance of CNNs and self-attentional networks could also be due to their ability to extract semantic features from the source text, and we evaluate RNNs, CNNs and self-attention networks on two tasks: subject-verb agreement (where capturing long-range dependencies is required) and word sense disambiguation (where semantic feature extraction is required). Our experimental results show that: 1) self-attentional networks and CNNs do not outperform RNNs in modeling subject-verb agreement over long distances; 2) self-attentional networks perform distinctly better than RNNs and CNNs on word sense disambiguation. ","Why Self-Attention? A Targeted Evaluation of Neural Machine Translation
Architectures",1,"['Why Self-Attention? New #EMNLP2018 paper doing targeted evaluation of different NMT architectures. Analysis shows that word sense disambiguation is a big strength of Transformer, but not long-distance dependencies, as speculated by Vaswani et al. (2017). ']",18,08,261
197,160,1228408225567039490,2463105726,Eugene Bagdasaryan,"Turns out, users don't always benefit from participating in Federated Learning. Sounds discouraging for many proposed scenarios. The problem gets worse for robust and privacy-preserving FL. We propose: "Salvaging Federated Learning by Local Adaptation" @jvmncs Thanks! We report accuracy on local data of that participant which is what makes the difference for them. We tried to avoid talking about global accuracy on some holdout dataset. @AgoryachAlex will do",https://arxiv.org/abs/2002.04758,"Federated learning (FL) is a heavily promoted approach for training ML models on sensitive data, e.g., text typed by users on their smartphones. FL is expressly designed for training on data that are unbalanced and non-iid across the participants. To ensure privacy and integrity of the fedeated model, latest FL approaches use differential privacy or robust aggregation. We look at FL from the \emph{local} viewpoint of an individual participant and ask: (1) do participants have an incentive to participate in FL? (2) how can participants \emph{individually} improve the quality of their local models, without re-designing the FL framework and/or involving other participants? First, we show that on standard tasks such as next-word prediction, many participants gain no benefit from FL because the federated model is less accurate on their data than the models they can train locally on their own. Second, we show that differential privacy and robust aggregation make this problem worse by further destroying the accuracy of the federated model for many participants. Then, we evaluate three techniques for local adaptation of federated models: fine-tuning, multi-task learning, and knowledge distillation. We analyze where each is applicable and demonstrate that all participants benefit from local adaptation. Participants whose local models are poor obtain big accuracy improvements over conventional FL. Participants whose local models are better than the federated model\textemdash and who have no incentive to participate in FL today\textemdash improve less, but sufficiently to make the adapted federated model better than their local models. ",Salvaging Federated Learning by Local Adaptation,3,"['Turns out, users don't always benefit from participating in Federated Learning. Sounds discouraging for many proposed scenarios. The problem gets worse for robust and privacy-preserving FL. We propose: "Salvaging Federated Learning by Local Adaptation" ', '@jvmncs Thanks! We report accuracy on local data of that participant which is what makes the difference for them. We tried to avoid talking about global accuracy on some holdout dataset.', '@AgoryachAlex will do']",20,02,470
198,99,1425057742583181314,7773042,Yasser Souri,"Happy to announce our new paper: ""FIFA: Fast Inference Approximation for Action Segmentation"". Paper: Animation: We propose an approximate approach to perform inference for action segmentation using gradient-descent instead of dynamic programming. Using gradient based optimization results in a fast and ""any-time"" algorithm where one can stop the optimization after any number of steps. FIFA can be used in combination with any action segmentation approach that estimates framewise probabilities. We use FIFA in combination with MuCon, CDFL, MS-TCN and MS-TCN++ in both fully supervised and weakly supervised settings. We report state-of-the-art or comparable results on the Breakfast and Hollywood extended dataset. FIFA achieves this accuracy while being 5-12 times faster than dynamic programming based exact inference algorithms.",https://arxiv.org/abs/2108.03894,"We introduce FIFA, a fast approximate inference method for action segmentation and alignment. Unlike previous approaches, FIFA does not rely on expensive dynamic programming for inference. Instead, it uses an approximate differentiable energy function that can be minimized using gradient-descent. FIFA is a general approach that can replace exact inference improving its speed by more than 5 times while maintaining its performance. FIFA is an anytime inference algorithm that provides a better speed vs. accuracy trade-off compared to exact inference. We apply FIFA on top of state-of-the-art approaches for weakly supervised action segmentation and alignment as well as fully supervised action segmentation. FIFA achieves state-of-the-art results on most metrics on two action segmentation datasets. ",FIFA: Fast Inference Approximation for Action Segmentation,4,"['Happy to announce our new paper: ""FIFA: Fast Inference Approximation for Action Segmentation"".\n\nPaper: \nAnimation: ', 'We propose an approximate approach to perform inference for action segmentation using gradient-descent instead of dynamic programming.\nUsing gradient based optimization results in a fast and ""any-time"" algorithm where one can stop the optimization after any number of steps. https://t.co/VZev9DZmh3', 'FIFA can be used in combination with any action segmentation approach that estimates framewise probabilities.\nWe use FIFA in combination with MuCon, CDFL, MS-TCN and MS-TCN++ in both fully supervised and weakly supervised settings.', 'We report state-of-the-art or comparable results on the Breakfast and Hollywood extended dataset.\nFIFA achieves this accuracy while being 5-12 times faster than dynamic programming based exact inference algorithms.']",21,08,855
199,107,1273346069372567564,270544249,Abdul Saleh,"Chatbots often say pretty weird things
Ever wondered if there was a way to probe them for specific conversational skills? Check out our new paper! Joint work with @boknilev and @pmphlt! Paper: Code: also joint work with @_tovly and @stephenLcasper!",https://arxiv.org/abs/2006.08331,"The predominant approach to open-domain dialog generation relies on end-to-end training of neural models on chat datasets. However, this approach provides little insight as to what these models learn (or do not learn) about engaging in dialog. In this study, we analyze the internal representations learned by neural open-domain dialog systems and evaluate the quality of these representations for learning basic conversational skills. Our results suggest that standard open-domain dialog systems struggle with answering questions, inferring contradiction, and determining the topic of conversation, among other tasks. We also find that the dyadic, turn-taking nature of dialog is not fully leveraged by these models. By exploring these limitations, we highlight the need for additional research into architectures and training methods that can better capture high-level information about dialog. ",Probing Neural Dialog Models for Conversational Understanding,2,"['Chatbots often say pretty weird things
Ever wondered if there was a way to probe them for specific conversational skills? Check out our new paper! Joint work with @boknilev and @pmphlt!\n\nPaper: \nCode: ', 'also joint work with @_tovly and @stephenLcasper!']",20,06,270
200,111,1217132405552644096,4111874585,Manuel Rigger,"We propose Pivoted Query Synthesis (PQS), a new approach for finding logic bugs in DBMS (see ). Using PQS, we found ~100 previously unknown (and many critical) bugs in widely-used DBMS (e.g., SQLite3, MySQL, and PostgreSQL). Work with @zhendongsu. PQS effectively tackles both test query and oracle generation. Its core idea is to choose a randomly-selected ""pivot row"" and generate a query whose result set must contain the pivot row. If the result set returned by the DBMS fails to fetch the pivot row, a bug is found. To our knowledge, this is the largest and most successful testing campaign against such production DBMS ( lists all PQS bugs). Stay tuned for more work on testing DBMS. For example, our ongoing work has found an additional 100+ bugs via another new approach I also want to highlight the great work done by the DBMS developers. The SQLite developers in particular fixed bugs very quickly, which is why we focused on testing SQLite.",https://arxiv.org/abs/2001.04174,"Relational databases are used ubiquitously. They are managed by database management systems (DBMS), which allow inserting, modifying, and querying data using a domain-specific language called Structured Query Language (SQL). Popular DBMS have been extensively tested by fuzzers, which have been successful in finding crash bugs. However, approaches to finding logic bugs, such as when a DBMS computes an incorrect result set, have remained mostly untackled. Differential testing is an effective technique to test systems that support a common language by comparing the outputs of these systems. However, this technique is ineffective for DBMS, because each DBMS typically supports its own SQL dialect. To this end, we devised a novel and general approach that we have termed Pivoted Query Synthesis. The core idea of this approach is to automatically generate queries for which we ensure that they fetch a specific, randomly selected row, called the pivot row. If the DBMS fails to fetch the pivot row, the likely cause is a bug in the DBMS. We tested our approach on three widely-used and mature DBMS, namely SQLite, MySQL, and PostgreSQL. In total, we reported 123 bugs in these DBMS, 99 of which have been fixed or verified, demonstrating that the approach is highly effective and general. We expect that the wide applicability and simplicity of our approach will enable the improvement of robustness of many DBMS. ",Testing Database Engines via Pivoted Query Synthesis,4,"['We propose Pivoted Query Synthesis (PQS), a new approach for finding logic bugs in DBMS (see ). Using PQS, we found ~100 previously unknown (and many critical) bugs in widely-used DBMS (e.g., SQLite3, MySQL, and PostgreSQL). Work with @zhendongsu. ', 'PQS effectively tackles both test query and oracle generation. Its core idea is to choose a randomly-selected ""pivot row"" and generate a query whose result set must contain the pivot row. If the result set returned by the DBMS fails to fetch the pivot row, a bug is found.', 'To our knowledge, this is the largest and most successful testing campaign against such production DBMS (https://t.co/aB0tUYgEv0 lists all PQS bugs). Stay tuned for more work on testing DBMS. For example, our ongoing work has found an additional 100+ bugs via another new approach', 'I also want to highlight the great work done by the DBMS developers. The SQLite developers in particular fixed bugs very quickly, which is why we focused on testing SQLite.']",20,01,970
201,119,1249919234127400961,1176867972163559424,MartinHuber,In our new paper (joint with D Imhof and H Wallimann) we suggest a #machinelearning approach for detecting partial #collusion and incomplete #cartels using data from the Swiss road construction sector: #EconTwitter #IndustrialOrganization #DataScience,https://arxiv.org/abs/2004.05629,"We propose a new method for flagging bid rigging, which is particularly useful for detecting incomplete bid-rigging cartels. Our approach combines screens, i.e. statistics derived from the distribution of bids in a tender, with machine learning to predict the probability of collusion. As a methodological innovation, we calculate such screens for all possible subgroups of three or four bids within a tender and use summary statistics like the mean, median, maximum, and minimum of each screen as predictors in the machine learning algorithm. This approach tackles the issue that competitive bids in incomplete cartels distort the statistical signals produced by bid rigging. We demonstrate that our algorithm outperforms previously suggested methods in applications to incomplete cartels based on empirical data from Switzerland. ",A Machine Learning Approach for Flagging Incomplete Bid-rigging Cartels,1,['In our new paper (joint with D Imhof and H Wallimann) we suggest a #machinelearning approach for detecting partial #collusion and incomplete #cartels using data from the Swiss road construction sector:\n\n#EconTwitter #IndustrialOrganization #DataScience'],20,04,258
202,89,1480920576285892612,79918104,Chris Amato,"I'm excited about our new #AAAI2022 paper that seeks to better understand actor-critic methods for multi-agent RL. In particular, while state-based critics are popular, we show they have theoretical and empirical drawbacks that need to be considered. ",https://arxiv.org/abs/2201.01221,"Centralized Training for Decentralized Execution, where training is done in a centralized offline fashion, has become a popular solution paradigm in Multi-Agent Reinforcement Learning. Many such methods take the form of actor-critic with state-based critics, since centralized training allows access to the true system state, which can be useful during training despite not being available at execution time. State-based critics have become a common empirical choice, albeit one which has had limited theoretical justification or analysis. In this paper, we show that state-based critics can introduce bias in the policy gradient estimates, potentially undermining the asymptotic guarantees of the algorithm. We also show that, even if the state-based critics do not introduce any bias, they can still result in a larger gradient variance, contrary to the common intuition. Finally, we show the effects of the theories in practice by comparing different forms of centralized critics on a wide range of common benchmarks, and detail how various environmental properties are related to the effectiveness of different types of critics. ","A Deeper Understanding of State-Based Critics in Multi-Agent
Reinforcement Learning",1,"[""I'm excited about our new #AAAI2022 paper that seeks to better understand actor-critic methods for multi-agent RL. In particular, while state-based critics are popular, we show they have theoretical and empirical drawbacks that need to be considered. \n\n""]",22,01,258
203,61,1239352070802718720,1191056593476915200,Ray Bai,"New preprint for a paper I co-authored! ""VC-BART: Bayesian trees meet varying coefficients."" We introduce a new approach for varying coefficient models with Bayesian additive regression tree priors. @skdeshpande91 We aim to build flexible but interpretable regression models, where the additive effect of each covariate on the outcome *varies* as a function of effect modifiers (e.g. the predictors could be functions of time and space). BART is ideal for this purpose.",https://arxiv.org/abs/2003.06416,"Many studies have reported associations between later-life cognition and socioeconomic position in childhood, young adulthood, and mid-life. However, the vast majority of these studies are unable to quantify how these associations vary over time and with respect to several demographic factors. Varying coefficient (VC) models, which treat the covariate effects in a linear model as nonparametric functions of additional effect modifiers, offer an appealing way to overcome these limitations. Unfortunately, state-of-the-art VC modeling methods require computationally prohibitive parameter tuning or make restrictive assumptions about the functional form of the covariate effects. In response, we propose VCBART, which estimates the covariate effects in a VC model using Bayesian Additive Regression Trees. With simple default hyperparameter settings, VCBART outperforms existing methods in terms of covariate effect estimation and prediction. Using VCBART, we predict the cognitive trajectories of 4,167 subjects from the Health and Retirement Study using multiple measures of socioeconomic position and physical health. We find that socioeconomic position in childhood and young adulthood have small effects that do not vary with age. In contrast, the effects of measures of mid-life physical health tend to vary with respect to age, race, and marital status. An R package implementing VCBART is available at this https URL ",VCBART: Bayesian trees for varying coefficients,2,"['New preprint for a paper I co-authored! ""VC-BART: Bayesian trees meet varying coefficients."" We introduce a new approach for varying coefficient models with Bayesian additive regression tree priors. @skdeshpande91', 'We aim to build flexible but interpretable regression models, where the additive effect of each covariate on the outcome *varies* as a function of effect modifiers (e.g. the predictors could be functions of time and space). BART is ideal for this purpose.']",20,03,476
204,146,1499432071902674949,969035645938294785,Erik Thiede (he /him),"All you free energy aficionados looking for your morning dose of math, here it is! New paper w. Sherry Li on understanding the error in MBAR dropped on arXiv at . Some takeaways in the comments... - You really can't ignore the effects of the sampling dynamics in the error. Dynamical effects can contribute just as much -- if not more -- to the error than static properties. - Your intuition that you should be discarding those high-free energy umbrella sampling windows is probably right: you can actually *decrease* the variance in other parts of your PMF by throwing out hard-to-sample windows. - In fact, sometimes surprisingly few states often contribute to most of the MBAR error. We might be able to tune our free energy calculations much better than we have been, now that we have comprehensive tools for understanding the error.",https://arxiv.org/abs/2203.01227,"Multiple sampling strategies commonly used in molecular dynamics, such as umbrella sampling and alchemical free energy methods, involve sampling from multiple thermodynamic states. Commonly, the data are then recombined to construct estimates of free energies and ensemble averages using the Multistate Bennett Acceptance Ratio (MBAR) formalism. However, the error of the MBAR estimator is not well-understood: previous error analysis of MBAR assumed independent samples and did not permit attributing contributions to the total error to individual thermodynamic states. In this work, we derive a novel central limit theorem for MBAR estimates. This central limit theorem yields an error estimator which can be decomposed into contributions from the individual Markov chains used to sample the states. We demonstrate the error estimator for an umbrella sampling calculation of the alanine dipeptide in two dimensions and an alchemical calculation of the hydration free energy of methane. In both cases, the states' individual contributions to the error provide insight into the sources of error of the simulations. Our numerical results demonstrate that the time required for the Markov chain to decorrelate in individual thermodynamic states contributes considerably to the total MBAR error. Moreover, they indicate that it may be possible to use the contributions to tune the sampling and improve the accuracy of MBAR calculations. ",Understanding the Sources of Error in MBAR through Asymptotic Analysis,4,"['All you free energy aficionados looking for your morning dose of math, here it is! New paper w. Sherry Li on understanding the error in MBAR dropped on arXiv at . Some takeaways in the comments...', ""- You really can't ignore the effects of the sampling dynamics in the error. Dynamical effects can contribute just as much -- if not more -- to the error than static properties."", '- Your intuition that you should be discarding those high-free energy umbrella sampling windows is probably right: you can actually *decrease* the variance in other parts of your PMF by throwing out hard-to-sample windows.', '- In fact, sometimes surprisingly few states often contribute to most of the MBAR error. We might be able to tune our free energy calculations much better than we have been, now that we have comprehensive tools for understanding the error.']",22,03,843
205,93,1469242837195796480,724609851884724225,Ana Belen Sainz,"New paper out! Neither incompatibility among measurements nor the assumption of freedom of choice is necessary for witnessing failures of generalized noncontextuality. Also, such failures can be witnessed using arbitrarily inefficient detectors. @ictqt",https://arxiv.org/abs/2112.04521,"The formalism of generalized probabilistic theories (GPTs) was originally developed as a way to characterize the landscape of conceivable physical theories. Thus, the GPT describing a given physical theory necessarily includes all physically possible processes. We here consider the question of how to provide a GPT-like characterization of a particular experimental setup within a given physical theory. We show that the resulting characterization is not generally a GPT in and of itself-rather, it is described by a more general mathematical object that we introduce and term an accessible GPT fragment. We then introduce an equivalence relation, termed cone equivalence, between accessible GPT fragments (and, as a special case, between standard GPTs). We give a number of examples of experimental scenarios that are best described using accessible GPT fragments, and where moreover cone-equivalence arises naturally. We then prove that an accessible GPT fragment admits of a classical explanation if and only if every other fragment that is cone-equivalent to it also admits of a classical explanation. Finally, we leverage this result to prove several fundamental results regarding the experimental requirements for witnessing the failure of generalized noncontextuality. In particular, we prove that neither incompatibility among measurements nor the assumption of freedom of choice is necessary for witnessing failures of generalized noncontextuality, and, moreover, that such failures can be witnessed even using arbitrarily inefficient detectors. ","Accessible fragments of generalized probabilistic theories, cone
equivalence, and applications to witnessing nonclassicality",1,"['New paper out!\n\nNeither incompatibility among measurements nor the assumption of freedom of choice is necessary for witnessing failures of generalized noncontextuality. Also, such failures can be witnessed using arbitrarily inefficient detectors.\n\n\n\n@ictqt']",21,12,259
206,191,1359084489557880833,1314104323,The Anh Han,"New pre-print: we study how to interfere in a spatial Ultimatum Game to promote fairness at a minimal cost @Cedric_Perret13 We show that, to minimize the cost of incentive/intervention, it is important to distinguish the role (i.e. provider vs receiver): Promoting Fair Proposers, Fair Responders or Both? Cost-Efficient Interference in the Spatial Ultimatum Game",https://arxiv.org/abs/2102.03461,"Institutions and investors face the constant challenge of making accurate decisions and predictions regarding how best they should distribute their endowments. The problem of achieving an optimal outcome at minimal cost has been extensively studied and resolved using several heuristics. However, these works usually fail to address how an external party can target different types of fair behaviour or do not take into account how limited information can shape this complex interplay. Here, we consider the well-known Ultimatum game in a spatial setting and propose a hierarchy of interference mechanisms based on the amount of information available to an external decision-maker and desired standards of fairness. Our analysis reveals that monitoring the population at a macroscopic level requires more strict information gathering in order to obtain an optimal outcome and that local observations can mediate this requirement. Moreover, we identify the conditions which must be met for an individual to be eligible for investment in order to avoid unnecessary spending. We further explore the effects of varying mutation or behavioural exploration rates on the choice of investment strategy and total accumulated costs to the investor. Overall, our analysis provides new insights about efficient heuristics for cost-efficient promotion of fairness in societies. Finally, we discuss the differences between our findings and previous work done on the PD and present our suggestions for promoting fairness as an external decision-maker. ","Promoting Fair Proposers, Fair Responders or Both? Cost-Efficient
Interference in the Spatial Ultimatum Game",2,"['New pre-print: we study how to interfere in a spatial Ultimatum Game to promote fairness at a minimal cost @Cedric_Perret13', 'We show that, to minimize the cost of incentive/intervention, it is important to distinguish the role (i.e. provider vs receiver): Promoting Fair Proposers, Fair Responders or Both? Cost-Efficient Interference in the Spatial Ultimatum Game']",21,02,371
207,211,1448248330371354624,1352621567906361354,RicardoPuebla,"Excited to see our results on critical quantum metrology out in arxiv where we study the different scaling regimes that one can get in critical fully-connected models Big thanks to Louis Garbe @AbahObinna @QuantoSimone for the hard work! check out also the very interesting work by K. Gietka, L. Ruks and @thomasbusch ",https://arxiv.org/abs/2110.04144,"Phase transitions represent a compelling tool for classical and quantum sensing applications. It has been demonstrated that quantum sensors can in principle saturate the Heisenberg scaling, the ultimate precision bound allowed by quantum mechanics, in the limit of large probe number and long measurement time. Due to the critical slowing down, the protocol duration time is of utmost relevance in critical quantum metrology. However, how the long-time limit is reached remains in general an open question. So far, only two dichotomic approaches have been considered, based on either static or dynamical properties of critical quantum systems. Here, we provide a comprehensive analysis of the scaling of the quantum Fisher information for different families of protocols that create a continuous connection between static and dynamical approaches. In particular, we consider fully-connected models, a broad class of quantum critical systems of high experimental relevance. Our analysis unveils the existence of universal precision-scaling regimes. These regimes remain valid even for finite-time protocols and finite-size systems. We also frame these results in a general theoretical perspective, by deriving a precision bound for arbitrary time-dependent quadratic Hamiltonians. ","Critical Quantum Metrology with Fully-Connected Models: From Heisenberg
to Kibble-Zurek Scaling",2,"['Excited to see our results on critical quantum metrology out in arxiv where we study the different scaling regimes that one can get in critical fully-connected models \nBig thanks to Louis Garbe @AbahObinna @QuantoSimone for the hard work!', 'check out also the very interesting work by K. Gietka, L. Ruks and @thomasbusch https://t.co/0z8ZkEebCU']",21,10,331
208,108,1062716984805244933,378228706,Hannah Wakeford,"Our #paper ""Disentangling the planet from the star in late type M dwarfs: A case study of TRAPPIST-1g"" is out We use the star to work out various possible contrast effects, and the geometry of the transit to rule them out. Then let the planet decide! In this study we find that #TRAPPIST1 is best described as a 0.08Msun, 0.117Rsun, M8V star with a photospheric Teff=2400K, with ~35% @ 3000K & <3% @ ~5800K Bottom panel - Left: data black, model green. Right: full res models used broken into components The reconstructed stellar flux results in 11 possible scenarios for the area occulted by the planet as it transits the star. We are able to rule out 8/11 based on the geometry and the planet #TRAPPIST1 #paper We are further able to use planetary models to rule out the 2Tc+m scenario. Because, while models fit in the wavelengths we measured with @NASAHubble if you include @NASAspitzer this one breaks down #TRAPPIST1 This leaves the final scenarios 2T and 3T which result in the same transmission spectrum for the planet, where no stellar flux is contaminating it, as the remaining most probable result where 3T is the favored scenario for the star #TRAPPIST1 The series of steps we take in the case study on #TRAPPIST1g can be applied to any star planet combination, but is most effective for late type M stars where stellar molecular features will mimic the planet signals. In each of the steps we detail the important procedures to use to make sure that units are correctly calculated and that each assumption is explored in relation to the data available. and finally I will say this work would not have been possible without my coauthors especially @NikoleKLewis, @Onoastrmer @Of_FallingStars, @NatashaBatalha, Giovanni Bruno, Jules Fowler, and Jeff Valenti who joined me in the #TRAPPISTbunker It also makes for a very pretty title slide #TRAPPIST1g #scicomm ",https://arxiv.org/abs/1811.04877,"The atmospheres of late M stars represent a significant challenge in the characterization of any transiting exoplanets due to the presence of strong molecular features in the stellar atmosphere. TRAPPIST-1 is an ultra-cool dwarf, host to seven transiting planets, and contains its own molecular signatures which can potentially be imprinted on planetary transit lightcurves due to inhomogeneities in the occulted stellar photosphere. We present a case study on TRAPPIST-1g, the largest planet in the system, using a new observation together with previous data, to disentangle the atmospheric transmission of the planet from that of the star. We use the out-of-transit stellar spectra to reconstruct the stellar flux based on one-, two-, and three-temperature components. We find that TRAPPIST-1 is a 0.08 M$_*$, 0.117 R$_*$, M8V star with a photospheric effective temperature of 2400 K, with ~35% 3000 K spot coverage and a very small fraction, <3%, of ~5800 K hot spot. We calculate a planetary radius for TRAPPIST-1g to be Rp = 1.124 R$_\oplus$ with a planetary density of $\rho_p$ = 0.8214 $\rho_\oplus$. Based on the stellar reconstruction there are eleven plausible scenarios for the combined stellar photosphere and planet transit geometry; in our analysis we are able to rule out 8 of the 11 scenarios. Using planetary models we evaluate the remaining scenarios with respect to the transmission spectrum of TRAPPIST-1g. 
We conclude that the planetary transmission spectrum is likely not contaminated by any stellar spectral features, and are able to rule out a clear solar H2/He-dominated atmosphere at greater than 3-sigma. ","Disentangling the planet from the star in late type M dwarfs: A case
study of TRAPPIST-1g",9,"['Our #paper ""Disentangling the planet from the star in late type M dwarfs: A case study of TRAPPIST-1g"" is out We use the star to work out various possible contrast effects, and the geometry of the transit to rule them out. Then let the planet decide! ', 'In this study we find that #TRAPPIST1 is best described as a 0.08Msun, 0.117Rsun, M8V star with a photospheric Teff=2400K, with ~35% @ 3000K & <3% @ ~5800K\nBottom panel - Left: data black, model green. Right: full res models used broken into components https://t.co/8NX7wkqsW7', 'The reconstructed stellar flux results in 11 possible scenarios for the area occulted by the planet as it transits the star. We are able to rule out 8/11 based on the geometry and the planet #TRAPPIST1 #paper https://t.co/nm3ZhPF07F', 'We are further able to use planetary models to rule out the 2Tc+m scenario. Because, while models fit in the wavelengths we measured with @NASAHubble if you include @NASAspitzer this one breaks down #TRAPPIST1 https://t.co/Qv4tNbXB2j', 'This leaves the final scenarios 2T and 3T which result in the same transmission spectrum for the planet, where no stellar flux is contaminating it, as the remaining most probable result where 3T is the favored scenario for the star #TRAPPIST1 https://t.co/USiN6eob5R', 'The series of steps we take in the case study on #TRAPPIST1g can be applied to any star planet combination, but is most effective for late type M stars where stellar molecular features will mimic the planet signals.', 'In each of the steps we detail the important procedures to use to make sure that units are correctly calculated and that each assumption is explored in relation to the data available.', 'and finally I will say this work would not have been possible without my coauthors especially @NikoleKLewis, @Onoastrmer @Of_FallingStars, @NatashaBatalha, Giovanni Bruno, Jules Fowler, and Jeff Valenti who joined me in the #TRAPPISTbunker', 'It also makes for a very pretty title slide #TRAPPIST1g #scicomm https://t.co/q6n2TNAFTI']",18,11,1922
209,74,1040574292449210368,806058672619212800,Guillaume Lample,"XNLI: Evaluating Cross-lingual Sentence Representations - @alex_conneau's #emnlp2018 new paper. Extends the NLI dataset to 15 languages, including low-resource ones such as Swahili and Urdu. If you had to classify Urdu sentences given English labeled data, what would you do between 1) translate the labeled data and train a Urdu classifier 2) translate Urdu sentences and feed them to the English classifier 3) use cross-lingual representations and a single classifier? For now, 2) works best, as cross-lingual representations are not perfect yet, and XNLI is a great way to evaluate this :)",https://arxiv.org/abs/1809.05053,"State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in cross-lingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines. ",XNLI: Evaluating Cross-lingual Sentence Representations,3,"[""XNLI: Evaluating Cross-lingual Sentence Representations - @alex_conneau's #emnlp2018 new paper. Extends the NLI dataset to 15 languages, including low-resource ones such as Swahili and Urdu. "", 'If you had to classify Urdu sentences given English labeled data, what would you do between 1) translate the labeled data and train a Urdu classifier 2) translate Urdu sentences and feed them to the English classifier 3) use cross-lingual representations and a single classifier?', 'For now, 2) works best, as cross-lingual representations are not perfect yet, and XNLI is a great way to evaluate this :)']",18,09,606
210,107,1197976468506071040,709593994095935488,Diego Aldarondo,"Happy to share this work! We built a model of a rodent, trained it to solve four tasks, and used methods common in neuroscience to study how the rodent controls its body. @Jessedmarshall, Josh Merel, Yuval Tassa, Greg Wayne and @BOlveczky @jessedmarshall @BOlveczky We think the virtual rodent will be a useful tool for modeling embodied motor control. Hopefully, refining the mechanics and neural architecture toward increasing biological realism will improve our understanding of both artificial and biological control. @jessedmarshall @BOlveczky Here's a short clip of the rodent solving a modified version of the two-tap task! @jessedmarshall @BOlveczky And another clip demonstrating how dynamics within the network's activity reflect the production of behaviors at several timescales. ",https://arxiv.org/abs/1911.09451,"Parallel developments in neuroscience and deep learning have led to mutually productive exchanges, pushing our understanding of real and artificial neural networks in sensory and cognitive systems. However, this interaction between fields is less developed in the study of motor control. In this work, we develop a virtual rodent as a platform for the grounded study of motor activity in artificial models of embodied control. We then use this platform to study motor activity across contexts by training a model to solve four complex tasks. Using methods familiar to neuroscientists, we describe the behavioral representations and algorithms employed by different layers of the network using a neuroethological approach to characterize motor activity relative to the rodent's behavior and goals. We find that the model uses two classes of representations which respectively encode the task-specific behavioral strategies and task-invariant behavioral kinematics. These representations are reflected in the sequential activity and population dynamics of neural subpopulations. Overall, the virtual rodent facilitates grounded collaborations between deep reinforcement learning and motor neuroscience. ",Deep neuroethology of a virtual rodent,4,"['Happy to share this work! We built a model of a rodent, trained it to solve four tasks, and used methods common in neuroscience to study how the rodent controls its body. @Jessedmarshall, Josh Merel, Yuval Tassa, Greg Wayne and @BOlveczky', '@jessedmarshall @BOlveczky We think the virtual rodent will be a useful tool for modeling embodied motor control. Hopefully, refining the mechanics and neural architecture toward increasing biological realism will improve our understanding of both artificial and biological control.', ""@jessedmarshall @BOlveczky Here's a short clip of the rodent solving a modified version of the two-tap task! https://t.co/JAtk7mZg6v"", ""@jessedmarshall @BOlveczky And another clip demonstrating how dynamics within the network's activity reflect the production of behaviors at several timescales. https://t.co/hwR8q3bzKd""]",19,11,811
211,20,854964993028227072,303343906,Sébastien Carassou,"Astrofolks, our new paper is out! We unveil a new way to infer robust constraints on models of galaxy evolution. Feedback is more than welcome! Especially from the stats community. We just started exploring Approximate Bayesian Computation techniques :)",https://arxiv.org/abs/1704.05559,"Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non trivial ways, and that can lead to contradictory predictions if not taken into account carefully. To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Monte Carlo Markov Chain methods. Using synthetic data matching most of the properties of a CFHTLS Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolutions for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach.Our pipeline infers efficiently the luminosity and size distribution and evolution parameters with a very limited number of observables (3 photometric bands). When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from systematic biases. ","Inferring the photometric and size evolution of galaxies from image
simulations",2,"['Astrofolks, our new paper is out! We unveil a new way to infer robust constraints on models of galaxy evolution. ', 'Feedback is more than welcome! Especially from the stats community. We just started exploring Approximate Bayesian Computation techniques :)']",17,04,267
212,133,1425216411505364994,843929270,Dr. Charan Ranganath,New preprint & my first paper w/Randy O'Reilly. We discuss the computational benefits of differentiating b/w content and structure in a variety of domains. We propose that event structure may be represented by the PM Network via predictive learning ,https://arxiv.org/abs/2108.03387,"A hallmark of human intelligence is the ability to adapt to new situations, by applying learned rules to new content (systematicity) and thereby enabling an open-ended number of inferences and actions (generativity). Here, we propose that the human brain accomplishes these feats through pathways in the parietal cortex that encode the abstract structure of space, events, and tasks, and pathways in the temporal cortex that encode information about specific people, places, and things (content). Recent neural network models show how the separation of structure and content might emerge through a combination of architectural biases and learning, and these networks show dramatic improvements in the ability to capture systematic, generative behavior. We close by considering how the hippocampal formation may form integrative memories that enable rapid learning of new structure and content representations. ",The Structure of Systematicity in the Brain,1,"[""New preprint & my first paper w/Randy O'Reilly. We discuss the computational benefits of differentiating b/w content and structure in a variety of domains. We propose that event structure may be represented by the PM Network via predictive learning ""]",21,08,255
213,159,1265121469157130240,2915749124,Dhiraj Hazra,"We find several correlations between reionization and other cosmological parameters are substantially reduced - with the Planck 2018 - early onsets of reionization are strongly disfavoured.. ... always a pleasure working with Daniela, Fabio and @georgesmoot ",https://arxiv.org/abs/2005.12222,"We provide an update on the constraints on extended reionization histories with the Planck 2018 cosmic microwave background anisotropy data. The Planck 2018 data on large angular scales improve the measurement of the $E$-mode polarization reionization bump at low multipoles providing the possibility to improve our previous results. Using a minor modification to the original Poly-reion model for the reionization history, we find that the Planck 2018 data significantly improve all our previous results: we find as optical depth of $\tau=0.0572_{-0.0075}^{+0.0064}$ at 68% CL, that early onsets of reionization are strongly disfavoured, i.e. redshift when the reionization begins, $z_{xe=0}=18.18_{-10.89}^{+1.61}$ at 68% CL,and that reionization duration (defined between 10% and 99% reionization) is significantly reduced, i.e. $\Delta_z^{Reion}=4.59_{-2.45}^{+1.67}$ at 68% CL. We explore possible correlations between reionization histories and cosmological parameters, including important extensions beyond $\Lambda$CDM. We find that the degeneracy between reionization and scalar spectral index,neutrino mass sum, spatial curvature, dark matter annihilation and other non-standard models are significantly reduced.The reduction of the error bars and the degeneracies, together with the shift towards lower values of the optical depth that we observe in the Poly-reion model are mainly driven by the new low-$\ell$ polarization likelihood of Planck 2018 baseline based on the HFI data. This is confirmed also by the results derived without this likelihood and the ones with different alternatives to the baseline that are presented for a subset of models. ","Extended reionization in models beyond $\Lambda$CDM with Planck 2018
data",2,"['We find several correlations between reionization and other cosmological parameters are substantially reduced - with the Planck 2018 - early onsets of reionization are strongly disfavoured.. ... always a pleasure working with Daniela, Fabio and @georgesmoot', 'https://t.co/1LvVZIa9Ar']",20,05,270
214,57,1140559642663239680,259050097,Namhoon Lee,"1/4 A Signal Propagation Perspective for Pruning Neural Networks at Initialization: our new work on pruning neural networks at initialization is available now. paper: with @tha_ajanthan, Stephen Gould, Philip Torr. 2/4 In SNIP (ICLR19), we showed that pruning can be done on a randomly initialized network without pretraining: ""pruning at initialization"". However, it's unclear exactly why pruning untrained networks is effective, how it should be understood, whether it can be extended further. 3/4 This work provides a signal propagation perspective based on dynamical isometry and mean field theory to pruning at initialization. We improve performance by orthogonal initialization, present unsupervised pruning, and bring forth the notion of neural architecture sculpting. 4/4 We are really excited about what this work could possibly bring us even more, so please stay tuned for our future endeavours! @_onionesque Thanks :))) Yeah I realized he had SNIPER paper a little while after I submitted SNIP; and of course I had chat with him about it :) @_onionesque Maybe I should name my future work SNIPER, aiming at the lottery ticket lol @_onionesque preempted here thanks to your help lol",https://arxiv.org/abs/1906.06307,"Network pruning is a promising avenue for compressing deep neural networks. A typical approach to pruning starts by training a model and then removing redundant parameters while minimizing the impact on what is learned. Alternatively, a recent approach shows that pruning can be done at initialization prior to training, based on a saliency criterion called connection sensitivity. However, it remains unclear exactly why pruning an untrained, randomly initialized neural network is effective. In this work, by noting connection sensitivity as a form of gradient, we formally characterize initialization conditions to ensure reliable connection sensitivity measurements, which in turn yields effective pruning results. Moreover, we analyze the signal propagation properties of the resulting pruned networks and introduce a simple, data-free method to improve their trainability. Our modifications to the existing pruning at initialization method lead to improved results on all tested network models for image classification tasks. Furthermore, we empirically study the effect of supervision for pruning and demonstrate that our signal propagation perspective, combined with unsupervised pruning, can be useful in various scenarios where pruning is applied to non-standard arbitrarily-designed architectures. ","A Signal Propagation Perspective for Pruning Neural Networks at
Initialization",7,"['1/4 A Signal Propagation Perspective for Pruning Neural Networks at Initialization: our new work on pruning neural networks at initialization is available now.\npaper: \nwith @tha_ajanthan, Stephen Gould, Philip Torr.', '2/4 In SNIP (ICLR19), we showed that pruning can be done on a randomly initialized network without pretraining: ""pruning at initialization"". However, it\'s unclear exactly why pruning untrained networks is effective, how it should be understood, whether it can be extended further.', '3/4 This work provides a signal propagation perspective based on dynamical isometry and mean field theory to pruning at initialization. We improve performance by orthogonal initialization, present unsupervised pruning, and bring forth the notion of neural architecture sculpting.', '4/4 We are really excited about what this work could possibly bring us even more, so please stay tuned for our future endeavours!', '@_onionesque Thanks :))) Yeah I realized he had SNIPER paper a little while after I submitted SNIP; and of course I had chat with him about it :)', '@_onionesque Maybe I should name my future work SNIPER, aiming at the lottery ticket lol', '@_onionesque preempted here thanks to your help lol']",19,06,1199
215,169,1184379591047163904,89709541,Anke Arentsen,"I am proud to present: the first Pristine Inner Galaxy Survey (PIGS) paper! In it, we study the kinematics of metal-poor stars in the Galactic bulge region. A summary follows: [1/7] We used the anti-metal detector powers of CaHK photometry obtained at @CFHTelescope to find rare metal-poor stars in the metal-rich needle stack that is the Galactic bulge. [2/7] The kinematics of the metal-poor ([Fe/H] < -1) component of the bulge hasn't really been studied in detail yet. We use a sample of thousands of stars observed with AAT/AAOmega+2dF to look at how this old component behaves! [3/7] We find that the rotation of stars in the inner Galaxy decreases with decreasing metallicity, until it completely disappears for the most metal-poor stars ([Fe/H] < -2), see this figure (the line is a bar model): [4/7] We also found a strikingly strong and continuous relation between the velocity dispersion (how hot the population is) and the metallicity: [5/7] We propose a few interpretations: a density transition of components of different kinematics & metallicities, a different mapping of stars onto the boxy/peanut bulge and/or the influence of the bar on a pressure-supported component. Probably all of them play a role! [6/7] So this is clearly not the last PIGS paper, since a lot is still unknown about this exciting population of stars..! :) [7/7] (with Twitter people @nfmartin1980, @Nico_Longeard, @kmalhan07, @Astro_Sestitof, @Thomas_gft, @Cosmic_Horizons and @FadAstra) @nfmartin1980 @CFHTelescope I guess I shouldn't have put that - there... but yes, this paper is secretly about how we detected anti-matter in metal-poor stars! Should have submitted to Nature... @arm2armtweet Do you recognise the giraffe plot?!",http://arxiv.org/abs/1910.06337,"Our Galaxy is known to contain a central boxy/peanut-shaped bulge, yet the importance of a classical, pressure-supported component within the central part of the Milky Way is still being debated. It should be most visible at low metallicity, a regime that has not yet been studied in detail. Using metallicity-sensitive narrow-band photometry, the Pristine Inner Galaxy Survey (PIGS) has collected a large sample of metal-poor ([Fe/H] < -1.0) stars in the inner Galaxy to address this open question. We use PIGS to trace the metal-poor inner Galaxy kinematics as function of metallicity for the first time. We find that the rotational signal decreases with decreasing [Fe/H], until it becomes negligible for the most metal-poor stars. Additionally, the velocity dispersion increases with decreasing metallicity for -3.0 < [Fe/H] < -0.5, with a gradient of -44 $\pm$ 4 km$\,$s$^{-1}\,$dex$^{-1}$. These observations may signal a transition between Galactic components of different metallicities and kinematics, a different mapping onto the boxy/peanut-shaped bulge for former disk stars of different metallicities and/or the secular dynamical and gravitational influence of the bar on the pressure-supported component. Our results provide strong constraints on models that attempt to explain the properties of the inner Galaxy. ","The Pristine Inner Galaxy Survey (PIGS) I: Tracing the kinematics of
metal-poor stars in the Galactic bulge",9,"['I am proud to present: the first Pristine Inner Galaxy Survey (PIGS) paper! In it, we study the kinematics of metal-poor stars in the Galactic bulge region. A summary follows: [1/7] ', 'We used the anti-metal detector powers of CaHK photometry obtained at @CFHTelescope to find rare metal-poor stars in the metal-rich needle stack that is the Galactic bulge. [2/7]', ""The kinematics of the metal-poor ([Fe/H] < -1) component of the bulge hasn't really been studied in detail yet. We use a sample of thousands of stars observed with AAT/AAOmega+2dF to look at how this old component behaves! [3/7]"", 'We find that the rotation of stars in the inner Galaxy decreases with decreasing metallicity, until it completely disappears for the most metal-poor stars ([Fe/H] < -2), see this figure (the line is a bar model): [4/7] https://t.co/ucWKllIU0F', 'We also found a strikingly strong and continuous relation between the velocity dispersion (how hot the population is) and the metallicity: [5/7] https://t.co/0jN7F4NANR', 'We propose a few interpretations: a density transition of components of different kinematics & metallicities, a different mapping of stars onto the boxy/peanut bulge and/or the influence of the bar on a pressure-supported component. Probably all of them play a role! [6/7]', 'So this is clearly not the last PIGS paper, since a lot is still unknown about this exciting population of stars..! :) [7/7] \n\n(with Twitter people @nfmartin1980, @Nico_Longeard, @kmalhan07, @Astro_Sestitof, @Thomas_gft, @Cosmic_Horizons and @FadAstra)', '@nfmartin1980 @CFHTelescope I guess I shouldn't have put that - there... but yes, this paper is secretly about how we detected anti-matter in metal-poor stars! Should have submitted to Nature...', '@arm2armtweet Do you recognise the giraffe plot?!']",19,10,1759
216,194,1357364064154968066,1062061528931819523,Kale-ab Tessera,"Excited to present my first preprint - ""Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization"" , with @sarahookr and @BenjaminRosman . We use Gradient Flow (GF) to study sparse network optimization. That is to say, we use GF to study how sparse networks are affected by different optimizers, activation functions, architectures, learning rates and regularizers. This follows on promising GF work - (by @utkuevci) and . 1. Firstly, to study sparse networks, we propose a simple, empirical framework - Same Capacity Sparse vs Dense Comparison (SC-SDC). The key idea is to compare sparse networks to their equivalent dense counterparts (same number of connections and same initial weight init). 2. Secondly, we also propose a new, normalized, layerwise measure of gradient flow, Effective Gradient Flow (EGF). EGF is normalized by the number of active weights and distributed evenly across all the layers. We show EGF correlates better, than other GF measures, to ... performance in sparse networks and hence it is a good formulation for studying the training dynamics of sparse networks. 3.1 Using EGF and SC-SDC, we show that BatchNorm is more important for sparse networks than it is for dense networks (the result is statistically significant), which suggests that gradient instability is a key obstacle to starting sparse. 3.2 We show that optimizers that use an exponentially weighted moving average (EWMA) to obtain an estimate of the variance of the gradient, such as Adam and RMSProp, are sensitive to higher gradient flow. This could explain why these methods are more sensitive to L2 and data aug. 3.3 Finally, we show that Swish and PReLU (when using SGD) are promising activation functions, especially for sparse networks. For the Swish result, we suggest this could be due to Swish's non-monotonic formulation, that allows for negative gradient flow, which helps with 3.2. We also extend some of these results from MLPs -> CNNs and from random, fixed sparse networks -> magnitude pruned networks. In conclusion, our work agrees with and contributes to the literature that emphasizes that initialization is only one piece of the puzzle and taking a wider view of tailoring optimization to sparse networks yields promising results. Thanks for reading. Also, please let us know if we missed any related work/results and if you have any feedback :)",https://arxiv.org/abs/2102.01670,"Training sparse networks to converge to the same performance as dense neural architectures has proven to be elusive. Recent work suggests that initialization is the key. However, while this direction of research has had some success, focusing on initialization alone appears to be inadequate. In this paper, we take a broader view of training sparse networks and consider the role of regularization, optimization, and architecture choices on sparse models. We propose a simple experimental framework, Same Capacity Sparse vs Dense Comparison (SC-SDC), that allows for a fair comparison of sparse and dense networks. Furthermore, we propose a new measure of gradient flow, Effective Gradient Flow (EGF), that better correlates to performance in sparse networks. Using top-line metrics, SC-SDC and EGF, we show that default choices of optimizers, activation functions and regularizers used for dense networks can disadvantage sparse networks. 
Based upon these findings, we show that gradient flow in sparse networks can be improved by reconsidering aspects of the architecture design and the training regime. Our work suggests that initialization is only one piece of the puzzle and taking a wider view of tailoring optimization to sparse networks yields promising results. ","Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network
Optimization",11,"['Excited to present my first preprint - ""Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization"" , with @sarahookr and @BenjaminRosman .\n\nWe use Gradient Flow (GF) to study sparse network optimization. ', 'That is to say, we use GF to study how sparse networks are affected by different optimizers, activation functions, architectures, learning rates and regularizers. \n\nThis follows on promising GF work - https://t.co/OI4skNZc20 (by @utkuevci) and https://t.co/9BqCmwendV .', '1. Firstly, to study sparse networks, we propose a simple, empirical framework - Same Capacity Sparse vs Dense Comparison (SC-SDC). The key idea is to compare sparse networks to their equivalent dense counterparts (same number of connections and same initial weight init). https://t.co/mahGNc0gxs', '2. Secondly, we also propose a new, normalized, layerwise measure of gradient flow, Effective Gradient Flow (EGF). EGF is normalized by the number of active weights and distributed evenly across all the layers. We show EGF correlates better, than other GF measures, to ... https://t.co/pWBxLTVDhs', 'performance in sparse networks and hence it is a good formulation for studying the training dynamics of sparse networks. https://t.co/VY7cEESZ8V', '3.1 Using EGF and SC-SDC, we show that BatchNorm is more important for sparse networks than it is for dense networks (the result is statistically significant), which suggests that gradient instability is a key obstacle to starting sparse.', '3.2 We show that optimizers that use an exponentially weighted moving average (EWMA) to obtain an estimate of the variance of the gradient, such as Adam and RMSProp, are sensitive to higher gradient flow. This could explain why these methods are more sensitive to L2 and data aug.', '3.3 Finally, we show that Swish and PReLU (when using SGD) are promising activation functions, especially for sparse networks. For the Swish result, we suggest this could be due to Swish's non-monotonic formulation, that allows for negative gradient flow, which helps with 3.2. https://t.co/rUj6onLF4B', 'We also extend some of these results from MLPs -> CNNs and from random, fixed sparse networks -> magnitude pruned networks. https://t.co/13t4TTct5Q', 'In conclusion, our work agrees with and contributes to the literature that emphasizes that initialization is only one piece of the puzzle and taking a wider view of tailoring optimization to sparse networks yields promising results.', 'Thanks for reading. Also, please let us know if we missed any related work/results and if you have any feedback :)']",21,02,2464
217,96,1083877101395214337,264501255,Eric Horvitz,"While there are many theories, #humor and its link to human #cognition remains a mystery. What might we learn about #satire--and what people find funny--by transforming humor back to serious? @cervisiarius @EPFL_en @MSFTResearch @RealAAAI @TheOfficialACM",https://arxiv.org/abs/1901.03253,"Humor is an essential human trait. Efforts to understand humor have called out links between humor and the foundations of cognition, as well as the importance of humor in social engagement. As such, it is a promising and important subject of study, with relevance for artificial intelligence and human-computer interaction. Previous computational work on humor has mostly operated at a coarse level of granularity, e.g., predicting whether an entire sentence, paragraph, document, etc., is humorous. As a step toward deep understanding of humor, we seek fine-grained models of attributes that make a given text humorous. Starting from the observation that satirical news headlines tend to resemble serious news headlines, we build and analyze a corpus of satirical headlines paired with nearly identical but serious headlines. The corpus is constructed via Unfun.me, an online game that incentivizes players to make minimal edits to satirical headlines with the goal of making other players believe the results are serious headlines. The edit operations used to successfully remove humor pinpoint the words and concepts that play a key role in making the original, satirical headline funny. Our analysis reveals that the humor tends to reside toward the end of headlines, and primarily in noun phrases, and that most satirical headlines follow a certain logical pattern, which we term false analogy. Overall, this paper deepens our understanding of the syntactic and semantic structure of satirical news headlines and provides insights for building humor-producing systems. ","Reverse-Engineering Satire, or ""Paper on Computational Humor Accepted
Despite Making Serious Advances""",1,"['While there are many theories, #humor and its link to human #cognition remains a mystery. What might we learn about #satire--and what people find funny--by transforming humor back to serious? @cervisiarius @EPFL_en @MSFTResearch @RealAAAI @TheOfficialACM']",19,01,261
218,57,1374923106905698309,838066124,Jeff Filippini,"New paper: Constraint on primordial gravitational waves from the first flight of SPIDER, a balloon-borne CMB telescope: SPIDER is an ambitious instrument: six millimeter-wave telescopes containing >2000 superconducting detectors, housed in a 1300 liter cryostat, hanging from a string at 35 km altitude. Ballooning lets us observe in space-like conditions, largely free from the fluctuating glow of the atmosphere that plagues even the best Earth-bound observing sites. This pristine vantage point brings challenges: strict mass and power constraints, and a horrifying inability to fiddle with your instrument after launch. It has to basically run itself, with minimal communication, its first time out. Ballooning is also a key way we flight-qualify technologies for future space missions. And incredible training for new crops of space scientists and engineers. This has been a long road, and I've been privileged to work with a fantastic team. Here's to further adventures with new instruments, on the ground and in the air! The view from the stratosphere over Antarctica: looking down with an optical camera on the gondola, and looking up with SPIDER itself. While I'm here, I should advertise: we have postdoc positions open, in cosmology from balloons and ground! ",https://arxiv.org/abs/2103.13334,"We present the first linear polarization measurements from the 2015 long-duration balloon flight of SPIDER, an experiment designed to map the polarization of the cosmic microwave background (CMB) on degree angular scales. Results from these measurements include maps and angular power spectra from observations of 4.8% of the sky at 95 and 150 GHz, along with the results of internal consistency tests on these data. While the polarized CMB anisotropy from primordial density perturbations is the dominant signal in this region of sky, Galactic dust emission is also detected with high significance; Galactic synchrotron emission is found to be negligible in the SPIDER bands. We employ two independent foreground-removal techniques in order to explore the sensitivity of the cosmological result to the assumptions made by each. The primary method uses a dust template derived from Planck data to subtract the Galactic dust signal. A second approach, employing a joint analysis of SPIDER and Planck data in the harmonic domain, assumes a modified-blackbody model for the spectral energy distribution of the dust with no constraint on its spatial morphology. Using a likelihood that jointly samples the template amplitude and $r$ parameter space, we derive 95% upper limits on the primordial tensor-to-scalar ratio from Feldman-Cousins and Bayesian constructions, finding $r<0.11$ and $r<0.19$, respectively. Roughly half the uncertainty in $r$ derives from noise associated with the template subtraction. New data at 280 GHz from SPIDER's second flight will complement the Planck polarization maps, providing powerful measurements of the polarized Galactic dust emission. ","A Constraint on Primordial $B$-Modes from the First Flight of the SPIDER
Balloon-Borne Telescope",8,"['New paper: Constraint on primordial gravitational waves from the first flight of SPIDER, a balloon-borne CMB telescope: ', 'SPIDER is an ambitious instrument: six millimeter-wave telescopes containing >2000 superconducting detectors, housed in a 1300 liter cryostat, hanging from a string at 35 km altitude.', 'Ballooning lets us observe in space-like conditions, largely free from the fluctuating glow of the atmosphere that plagues even the best Earth-bound observing sites.', 'This pristine vantage point brings challenges: strict mass and power constraints, and a horrifying inability to fiddle with your instrument after launch. It has to basically run itself, with minimal communication, its first time out.', 'Ballooning is also a key way we flight-qualify technologies for future space missions. And incredible training for new crops of space scientists and engineers.', ""This has been a long road, and I've been privileged to work with a fantastic team. Here's to further adventures with new instruments, on the ground and in the air!"", 'The view from the stratosphere over Antarctica: looking down with an optical camera on the gondola, and looking up with SPIDER itself. https://t.co/luxg73CBWM', ""While I'm here, I should advertise: we have postdoc positions open, in cosmology from balloons and ground! https://t.co/h64hof8zOv""]",21,03,1300
219,56,1194062996961517568,11778512,Mason Porter,"My new paper: ""Nonlinearity + Networks: A 2020 Vision"": ""I highlight a few methods and ideas, including several of personal interest, that I anticipate to be especially important during the next several years."" Can you find all of the easter eggs in it?",https://arxiv.org/abs/1911.03805,"I briefly survey several fascinating topics in networks and nonlinearity. I highlight a few methods and ideas, including several of personal interest, that I anticipate to be especially important during the next several years. These topics include temporal networks (in which the entities and/or their interactions change in time), stochastic and deterministic dynamical processes on networks, adaptive networks (in which a dynamical process on a network is coupled to dynamics of network structure), and network structure and dynamics that include ""higher-order"" interactions (which involve three or more entities in a network). I draw examples from a variety of scenarios, including contagion dynamics, opinion models, waves, and coupled oscillators. ",Nonlinearity + Networks: A 2020 Vision,1,"['My new paper: ""Nonlinearity + Networks: A 2020 Vision"": \n\n""I highlight a few methods and ideas, including several of personal interest, that I anticipate to be especially important during the next several years.""\n\nCan you find all of the easter eggs in it?']",19,11,260
220,104,1371771163504951296,2813168019,EGO & the Virgo Collaboration,"A paper published today on @arxiv by the Virgo, @LIGO and @KAGRA_PR Collaborations, based on data from the first three observing runs of the Virgo and LIGO detectors, sets new constraints on anisotropies of stochastic gravitational wave backgrounds. Different GW backgrounds are generated from the combination of all the too faint merger signals, that our detectors are not able to individually resolve, or from early Universe phenomena, such as phase transitions and primordial black hole mergers. Hence observing... direction-dependent features in the GW backgrounds could give us insights into history of the early universe and matter distribution of the nearby one. This research didn't find any significant evidence for a gravitational-wave background, but it sets more stringent upper limits.",https://arxiv.org/abs/2103.08520,"We report results from searches for anisotropic stochastic gravitational-wave backgrounds using data from the first three observing runs of the Advanced LIGO and Advanced Virgo detectors. For the first time, we include Virgo data in our analysis and run our search with a new efficient pipeline called {\tt PyStoch} on data folded over one sidereal day. We use gravitational-wave radiometry (broadband and narrow band) to produce sky maps of stochastic gravitational-wave backgrounds and to search for gravitational waves from point sources. A spherical harmonic decomposition method is employed to look for gravitational-wave emission from spatially-extended sources. Neither technique found evidence of gravitational-wave signals. Hence we derive 95\% confidence-level upper limit sky maps on the gravitational-wave energy flux from broadband point sources, ranging from $F_{\alpha, \Theta} < {\rm (0.013 - 7.6)} \times 10^{-8} {\rm erg \, cm^{-2} \, s^{-1} \, Hz^{-1}},$ and on the (normalized) gravitational-wave energy density spectrum from extended sources, ranging from $\Omega_{\alpha, \Theta} < {\rm (0.57 - 9.3)} \times 10^{-9} \, {\rm sr^{-1}}$, depending on direction ($\Theta$) and spectral index ($\alpha$). These limits improve upon previous limits by factors of $2.9 - 3.5$. We also set 95\% confidence level upper limits on the frequency-dependent strain amplitudes of quasimonochromatic gravitational waves coming from three interesting targets, Scorpius X-1, SN 1987A and the Galactic Center, with best upper limits range from $h_0 < {\rm (1.7-2.1)} \times 10^{-25},$ a factor of $\geq 2.0$ improvement compared to previous stochastic radiometer searches. ","Search for anisotropic gravitational-wave backgrounds using data from
Advanced LIGO and Advanced Virgo's first three observing runs",3,"['A paper published today on @arxiv by the Virgo, @LIGO and @KAGRA_PR Collaborations, based on data from the first three observing runs of the Virgo and LIGO detectors, sets new constraints on anisotropies of stochastic gravitational wave backgrounds. ', 'Different GW backgrounds are generated from the combination of all the too faint merger signals, that our detectors are not able to individually resolve, or from early Universe phenomena, such as phase transitions and primordial black hole mergers. Hence observing...', ""direction-dependent features in the GW backgrounds could give us insights into history of the early universe and matter distribution of the nearby one. This research didn't find any significant evidence for a gravitational-wave background, but it sets more stringent upper limits.""]",21,03,812
221,133,1402883773805518851,3881712928,Valentina,"New paper! In we show that future neutrino experiments DUNE (@DUNEScience) and THEIA will be able to set competitive constraints on primordial black hole dark matter. Great collaboration with @pablommirave and @MariamTortola, @AHEPGroup @IFICorpuscular ",https://arxiv.org/abs/2106.05013,"Primordial black holes (PBHs) are a potential dark matter candidate whose masses can span over many orders of magnitude. If they have masses in the $10^{15}-10^{17}$ g range, they can emit sizeable fluxes of MeV neutrinos through evaporation via Hawking radiation. We explore the possibility of detecting light (non-)rotating PBHs with future neutrino experiments. We focus on two next generation facilities: the Deep Underground Neutrino Experiment (DUNE) and THEIA. We simulate the expected event spectra at both experiments assuming different PBH mass distributions and spins, and we extract the expected 95% C.L. sensitivities to these scenarios. Our analysis shows that future neutrino experiments like DUNE and THEIA will be able to set competitive constraints on PBH dark matter, thus providing complementary probes in a part of the PBH parameter space currently constrained mainly by photon data. ",Signatures of primordial black hole dark matter at DUNE and THEIA,1,"['New paper! In we show that future neutrino experiments DUNE (@DUNEScience) and THEIA will be able to set competitive constraints on primordial black hole dark matter. Great collaboration with @pablommirave and @MariamTortola, @AHEPGroup @IFICorpuscular ']",21,06,268
222,253,1318592850964205570,1020920111476236288,Haggai Maron,"New paper! "How to Stop Epidemics: Controlling Graph Dynamics with RL and GNNs". An epidemic is a partially-observed dynamic process that spreads over a temporal contact graph. In this setting, how should we prioritize COVID-19 tests? We formulate this problem as a sequential decision problem over a graph. In face of an exponential state space, combinatorial action space, and partial observability, we design RLGN, a tractable RL scheme to prioritize which nodes should be tested, using GNNs to rank the nodes. We evaluate this approach in three types of social networks: community-structured, preferential attachment, and based on statistics from real cellular tracking. RLGN consistently outperforms all baselines in our experiments. We show that prioritizing tests using RLGN on temporal graphs can increase the number of healthy people by 25% and contain the epidemic 30% more often than supervised approaches and 2.5× more often than non-learned baselines using the same resources. Dynamics matter: Building the contacts map is crucial. The information in the temporal dynamics matters and can help detecting fast moving pockets of the epidemic. Joint work with @GalChechik and Shie Mannor Led by Eli Meirom @NVIDIAAI",https://arxiv.org/abs/2010.05313,"We consider the problem of controlling a partially-observed dynamic process on a graph by a limited number of interventions. This problem naturally arises in contexts such as scheduling virus tests to curb an epidemic; targeted marketing in order to promote a product; and manually inspecting posts to detect fake news spreading on social networks. We formulate this setup as a sequential decision problem over a temporal graph process. In face of an exponential state space, combinatorial action space and partial observability, we design a novel tractable scheme to control dynamical processes on temporal graphs. We successfully apply our approach to two popular problems that fall into our framework: prioritizing which nodes should be tested in order to curb the spread of an epidemic, and influence maximization on a graph. ","Controlling Graph Dynamics with Reinforcement Learning and Graph Neural
Networks",5,"['New paper! "How to Stop Epidemics: Controlling Graph Dynamics with RL and GNNs". An epidemic is a partially-observed dynamic process that spreads over a temporal contact graph. In this setting, how should we prioritize COVID-19 tests? ', 'We formulate this problem as a sequential decision problem over a graph. In face of an exponential state space, combinatorial action space, and partial observability, we design RLGN, a tractable RL scheme to prioritize which nodes should be tested, using GNNs to rank the nodes.', 'We evaluate this approach in three types of social networks: community-structured, preferential attachment, and based on statistics from real cellular tracking. RLGN consistently outperforms all baselines in our experiments.', 'We show that prioritizing tests using RLGN\non temporal graphs can increase the number of healthy people by 25% and contain the epidemic 30% more often than supervised approaches and 2.5× more often than non-learned baselines using the same resources.', 'Dynamics matter: Building the contacts map is crucial. The information in the temporal dynamics matters and can help detecting fast moving pockets of the epidemic.\n\nJoint work with @GalChechik and Shie Mannor\nLed by Eli Meirom \n@NVIDIAAI']",20,10,1239
223,70,1450246388550414337,1437185924354478082,Barry McKernan,"New paper out! Starfall! There are stellar mass black holes in AGN disks, sure, but there are also stars. Stars that orbit in the same sense as the gas in the disk can become massive O(100Msun) & undying! E.g. Here we look at stars that move backwards through the AGN gas disk. They're ~ half the stars in the disk when it forms. These stars feel intense drag & fall inwards fast (<0.1Myr) through the disk towards the central supermassive black hole (SMBH). As the population of backwards stars grows at small disk radii, due to starfall, interactions can occur, binaries can form & scatterings can happen. Chaotic scatterings, particularly between binaries & singles, can send stars rocketing towards the central shred zone. What happens next depends on the mass of the SMBH & whether a star is flung to one side or the other of the SMBH.... If the star is sent too close to the SMBH (<=100million Msun), the tidal forces across it cause it to rupture, spilling its guts at high speed around the SMBH & into the inner disk, creating a tidal disruption event (TDE) If the star is sent around the SMBH against the flow of gas, e.g. then the angular momentum of the unbound ejecta flung outwards cancels the angular momentum of the inner disk and causes much of the inner disk to slump onto the SMBH, clearing out a small cavity. The result is an over-luminous TDE (tidal disruption event), followed by a low AGN state afterwards as the disk recovers on the viscous timescale. Cartoon Lightcurve is: If the star goes the other way around the SMBH, the unbound ejecta adds angular momentum to the inner disk & pushes it outwards temporarily. The result is a more 'normal' TDE, followed by a higher AGN state as the ejecta + inner disk accrete on the viscous timescale. Lightcurve: We expect roughly equal numbers of these 'prograde' and 'retrograde' TDEs in AGN. Particularly early on as starfall allows this population to build up & scatter in the inner disk. So, AGN TDEs are a nice probe of 'turning on' AGN. Now we need to find AGN flares that look like this! Stay tuned. With: @saavikford, @kantyellow, @AdamSJermyn, @doccosmos, Dan Stern, Nate Leigh, Taeho Ryu",https://arxiv.org/abs/2110.03741,"As active galactic nuclei (AGN) `turn on', some stars end up embedded in accretion disks around supermassive black holes (SMBHs) on retrograde orbits. Such stars experience strong headwinds, aerodynamic drag, ablation and orbital evolution on short timescales. Loss of orbital angular momentum in the first $\sim 0.1$~Myr of an AGN leads to a heavy rain of stars (`starfall') into the inner disk and onto the SMBH. A large AGN loss cone ($\theta_{\rm AGN,lc}$) can result from binary scatterings in the inner disk and yield tidal disruption events (TDEs). Signatures of starfall include optical/UV flares that rise in luminosity over time, particularly in the inner disk. If the SMBH mass is $M_{\rm SMBH} \ge 10^{8}M_{\odot}$, flares truncate abruptly and the star is swallowed. If $M_{\rm SMBH}<10^{8}M_{\odot}$, and if the infalling orbit lies within $\theta_{\rm AGN,lc}$, the flare is followed by a TDE which can be prograde or retrograde relative to the AGN inner disk. Retrograde AGN TDEs are over-luminous and short-lived as in-plane ejecta collide with the inner disk and a lower AGN state follows. Prograde AGN TDEs add angular momentum to inner disk gas and so start off looking like regular TDEs but are followed by an AGN high state. 
Searches for such flare signatures test models of AGN `turn on', SMBH mass, as well as disk properties and the embedded population. ",Starfall: A heavy rain of stars in 'turning on' AGN,13,"['New paper out! Starfall!\n', 'There are stellar mass black holes in AGN disks, sure, but there are also stars. Stars that orbit in the same sense as the gas in the disk can become massive O(100Msun) & undying! E.g.\nhttps://t.co/gC8kS1cRY7', ""Here we look at stars that move backwards through the AGN gas disk. They're ~ half the stars in the disk when it forms. These stars feel intense drag & fall inwards fast (<0.1Myr) through the disk towards the central supermassive black hole (SMBH). https://t.co/9YV45zRNQj"", 'As the population of backwards stars grows at small disk radii, due to starfall, interactions can occur, binaries can form & scatterings can happen. Chaotic scatterings, particularly between binaries & singles, can send stars rocketing towards the central shred zone. https://t.co/S59bpyUdIu', 'What happens next depends on the mass of the SMBH & whether a star is flung to one side or the other of the SMBH....', 'If the star is sent too close to the SMBH (<=100million Msun), the tidal forces across it cause it to rupture, spilling its guts at high speed around the SMBH & into the inner disk, creating a tidal disruption event (TDE)', 'If the star is sent around the SMBH against the flow of gas, e.g. https://t.co/aO4BCzTsSJ', 'then the angular momentum of the unbound ejecta flung outwards cancels the angular momentum of the inner disk and causes much of the inner disk to slump onto the SMBH, clearing out a small cavity. https://t.co/4TtobVnGxn', 'The result is an over-luminous TDE (tidal disruption event), followed by a low AGN state afterwards as the disk recovers on the viscous timescale. Cartoon Lightcurve is: https://t.co/eXJDTISFN8', ""If the star goes the other way around the SMBH, the unbound ejecta adds angular momentum to the inner disk & pushes it outwards temporarily. The result is a more 'normal' TDE, followed by a higher AGN state as the ejecta + inner disk accrete on the viscous timescale. Lightcurve: https://t.co/RYRsVctPn9"", ""We expect roughly equal numbers of these 'prograde' and 'retrograde' TDEs in AGN. Particularly early on as starfall allows this population to build up & scatter in the inner disk. So, AGN TDEs are a nice probe of 'turning on' AGN."", 'Now we need to find AGN flares that look like this! Stay tuned.', 'With: @saavikford, @kantyellow, @AdamSJermyn, @doccosmos, Dan Stern, Nate Leigh, Taeho Ryu']",21,10,2226
224,163,1297957447294947329,850415526602059777,Vikram Dwarkadas,"No mid-life crisis! Contrary to Dittmann et al.(2014, ApJ, 788, 38), we () do not find 3x increase in X-ray flux from SN 1970G at 40+ years - no newly forming PWN. Thanks to the excellent efforts of my Master's student V. Ramakrishnan, now PhD at Purdue. ",https://arxiv.org/abs/2008.09137,"Core-collapse supernovae (SNe) expand into a medium created by winds from the pre-SN progenitor. The SN explosion and resulting shock wave(s) heat up the surrounding plasma, giving rise to thermal X-ray emission, which depends on the density of the emitting material. Tracking the variation of the X-ray luminosity over long periods of time thus allows for investigation of the kinematics of the SN shock waves, the structure of the surrounding medium, and the nature of the progenitor star. In this paper X-ray observations of five of the oldest known X-ray supernovae - SN 1970G, SN 1968D, SN 1959D, SN 1957D and SN 1941C - are analyzed, with the aim of reconstructing their light curves over several decades. For those supernovae for which we can extract multi-epoch data, the X-ray luminosity appears to decline with time, although with large error bars. No increase in the X-ray emission from SN 1970G is found at later epochs, contrary to previous reports. All five SNe show X-ray luminosities that are of comparable magnitude. We compare the late-time X-ray luminosities of these SNe to those of supernova remnants (SNRs) in the Galaxy which are a few hundred years old, and find that when the tentative decline is taken into account, the luminosity of the old SNe studied herein could fall below the luminosity of some of the younger SNRs within a few hundred years. However, the X-ray luminosity should begin to increase as the SNe expand in the Sedov phase, thus reaching that of the observed SNRs. ","From Supernova to Remnant: Tracking the Evolution of the Oldest Known
X-ray Supernovae",1,"[""No mid-life crisis! Contrary to Dittmann et al.(2014, ApJ, 788, 38), we () do not find 3x increase in X-ray flux from SN 1970G at 40+ years - no newly forming PWN. Thanks to the excellent efforts of my Master's student V. Ramakrishnan, now PhD at Purdue. ""]",20,08,267
225,27,709174610277715969,169081481,Hanno Rein,"My new paper with @astrodantamayo is now online! Title: ""Second-order variational equations for N-body simulations"" Code is on @github. Better yet: reproduce all figures interactively in the browser. #NoInstallation! @ProjectJupyter Effort seems to pay off: Just a few minutes after the paper appeared on the arXiv, 30 people are running the code! ",http://arxiv.org/abs/1603.03424,"First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing. ",Second-order variational equations for N-body simulations,3,"['My new paper with @astrodantamayo is now online! Title: ""Second-order variational equations for N-body simulations"" ', 'Code is on @github. Better yet: reproduce all figures interactively in the browser. #NoInstallation! @ProjectJupyter https://t.co/SJQMCI85rx', 'Effort seems to pay off: Just a few minutes after the paper appeared on the arXiv, 30 people are running the code! https://t.co/l2qHiLscbD']",16,03,368
226,7,1466491878916956161,2737324542,Philip Muirhead,"New paper from the RIT-BU-UToronto PCEB collab. Analyzed K2 and HST data on V471 Tau, an eclipsing dK+WD in Hyades. Recovered the transit of the WD with lensing(!). We find the dK isn't inflated, but the WD is still weird (too hot/massive for Hyades): ",https://arxiv.org/abs/2111.06905,"V471 Tau is a post-common-envelope binary consisting of an eclipsing DA white dwarf and a K-type main-sequence star in the Hyades star cluster. We analyzed publicly available photometry and spectroscopy of V471 Tau to revise the stellar and orbital parameters of the system. We used archival K2 photometry, archival Hubble Space Telescope spectroscopy, and published radial-velocity measurements of the K-type star. Employing Gaussian processes to fit for rotational modulation of the system flux by the main-sequence star, we recovered the transits of the white dwarf in front of the main-sequence star for the first time. The transits are shallower than would be expected from purely geometric occultations owing to gravitational microlensing during transit, which places an additional constraint on the white-dwarf mass. Our revised mass and radius for the main-sequence star is consistent with single-star evolutionary models given the age and metallicity of the Hyades. However, as noted previously in the literature, the white dwarf is too massive and too hot to be the result of single-star evolution given the age of the Hyades, and may be the product of a merger scenario. We independently estimate the conditions of the system at the time of common envelope that would result in the measured orbital parameters today. ","Revised Stellar Parameters for V471 Tau, A Post-common Envelope Binary
in the Hyades",1,"[""New paper from the RIT-BU-UToronto PCEB collab. Analyzed K2 and HST data on V471 Tau, an eclipsing dK+WD in Hyades. Recovered the transit of the WD with lensing(!). We find the dK isn't inflated, but the WD is still weird (too hot/massive for Hyades): ""]",21,11,258
227,98,1136535682573320192,1104703454,Francesco Silvestri,"New paper! "Fair Near Neighbor Search: Independent Range Sampling in High Dimensions" by M. Aumüller @RasmusPagh1 and myself: . Given a query q, we show how to evenly sample near neighbors of q in a high dimensional space. (1/2) It is ""fair"" in the sense that all points in the neighborhood have equal opportunity. Just using LSH is not enough: the closest point is more likely to be returned! We show how to make any LSH fair and also give a nearly-linear space for inner product. (2/2)",https://arxiv.org/abs/1906.01859,"Similarity search is a fundamental algorithmic primitive, widely used in many computer science disciplines. There are several variants of the similarity search problem, and one of the most relevant is the $r$-near neighbor ($r$-NN) problem: given a radius $r>0$ and a set of points $S$, construct a data structure that, for any given query point $q$, returns a point $p$ within distance at most $r$ from $q$. In this paper, we study the $r$-NN problem in the light of fairness. We consider fairness in the sense of equal opportunity: all points that are within distance $r$ from the query should have the same probability to be returned. In the low-dimensional case, this problem was first studied by Hu, Qiao, and Tao (PODS 2014). Locality sensitive hashing (LSH), the theoretically strongest approach to similarity search in high dimensions, does not provide such a fairness guarantee. To address this, we propose efficient data structures for $r$-NN where all points in $S$ that are near $q$ have the same probability to be selected and returned by the query. Specifically, we first propose a black-box approach that, given any LSH scheme, constructs a data structure for uniformly sampling points in the neighborhood of a query. Then, we develop a data structure for fair similarity search under inner product that requires nearly-linear space and exploits locality sensitive filters. The paper concludes with an experimental evaluation that highlights (un)fairness in a recommendation setting on real-world datasets and discusses the inherent unfairness introduced by solving other variants of the problem. ",Fair Near Neighbor Search: Independent Range Sampling in High Dimensions,2,"['New paper! "Fair Near Neighbor Search: Independent Range Sampling in High Dimensions" by M. Aumüller @RasmusPagh1 and myself: . Given a query q, we show how to evenly sample near neighbors of q in a high dimensional space. (1/2)', 'It is ""fair"" in the sense that all points in the neighborhood have equal opportunity. Just using LSH is not enough: the closest point is more likely to be returned! We show how to make any LSH fair and also give a nearly-linear space for inner product. (2/2)']",19,06,493
228,229,1255657923797225472,1060569630857674752,Alysson Bessani,"For me, it is clear that we'll have a world with many blockchains, instead of a single blockchain ruling the world. Our recent paper (Smart Contracts on the Move, to appear on DSN'20) propose a mechanism to move assets and contracts between blockchains: ",https://arxiv.org/abs/2004.05933,"Blockchain systems have received much attention and promise to revolutionize many services. Yet, despite their popularity, current blockchain systems exist in isolation, that is, they cannot share information. While interoperability is crucial for blockchain to reach widespread adoption, it is difficult to achieve due to differences among existing blockchain technologies. This paper presents a technique to allow blockchain interoperability. The core idea is to provide a primitive operation to developers so that contracts and objects can switch from one blockchain to another, without breaking consistency and violating key blockchain properties. To validate our ideas, we implemented our protocol in two popular blockchain clients that use the Ethereum virtual machine. We discuss how to build applications using the proposed protocol and show examples of applications based on real use cases that can move across blockchains. To analyze the system performance we use a real trace from one of the most popular Ethereum applications and replay it in a multi-blockchain environment. ",Smart Contracts on the Move,1,"[""For me, it is clear that we'll have a world with many blockchains, instead of a single blockchain ruling the world. Our recent paper (Smart Contracts on the Move, to appear on DSN'20) propose a mechanism to move assets and contracts between blockchains: ""]",20,04,260
229,136,1301792837214797837,561899047,Aki Vehtari,"Akash Dhaka, @AleexCatalina, @Michael_riis, @MansMeg, @jhhhuggins, and I have a new paper ""Robust, Accurate Stochastic Optimization for Variational Inference"" tl;dr We combine Polyak–Ruppert averaging with MCMC convergence diagnostics to make stochastic optimization in variational inference more robust or get a warning when it performs badly. These help to make automated use of VI safer in probabilistic programming frameworks. Many VI methods use stochastic optimization either due to using random mini-batches of data or Monte Carlo to estimate expectations of the divergences. For example. For example, autodiff VI (ADVI) in Stan has stochastic target and gradients due to the latter. To fulfill Robbins-Monroe condition of reaching eventually the optimum, the step size is usually gradually decreased. Although this guarantees asymptotic convergence, it may take unfeasible amount of time and the last iteration in finite time can be far from the optimum. Under certain conditions, stochastic optimization with a fixed step size converges to a finite variance stationary process around the optimum. Average of the iterations converges towards the optimum faster. This is an old but under-used idea, aka Polyak–Ruppert averaging. Recently iterate averaging has been used also with name stochastic weight averaging (SWA) in context of deep learning. What we add is 1) a diagnostic for detecting when we have reached stationarity and can start averaging, and 2) a standard error estimate to decide when we can stop averaging or give up if the standard error is not decreasing (indicating violation of conditions). The diagnostics are familiar from MCMC convergence literature (e.g. Rhat, MCSE, and autocorrelation) and VI diagnostics literature (e.g. Pareto k). @lauretig Oops, forgot that. The repo will be public next week.",https://arxiv.org/abs/2009.00666,"We consider the problem of fitting variational posterior approximations using stochastic optimization methods. The performance of these approximations depends on (1) how well the variational family matches the true posterior distribution,(2) the choice of divergence, and (3) the optimization of the variational objective. We show that even in the best-case scenario when the exact posterior belongs to the assumed variational family, common stochastic optimization methods lead to poor variational approximations if the problem dimension is moderately large. We also demonstrate that these methods are not robust across diverse model types. Motivated by these findings, we develop a more robust and accurate stochastic optimization framework by viewing the underlying optimization algorithm as producing a Markov chain. Our approach is theoretically motivated and includes a diagnostic for convergence and a novel stopping rule, both of which are robust to noisy evaluations of the objective function. We show empirically that the proposed framework works well on a diverse set of models: it can automatically detect stochastic optimization failure or inaccurate variational approximation ","Robust, Accurate Stochastic Optimization for Variational Inference",9,"['Akash Dhaka, @AleexCatalina, @Michael_riis, @MansMeg, @jhhhuggins, and I have a new paper ""Robust, Accurate Stochastic Optimization for Variational Inference"" ', 'tl;dr We combine Polyak–Ruppert averaging with MCMC convergence diagnostics to make stochastic optimization in variational inference more robust or get a warning when it performs badly. 
These help to make automated use of VI safer in probabilistic programming frameworks. https://t.co/3RUKPPdzfs', 'Many VI methods use stochastic optimization either due to using random mini-batches of data or Monte Carlo to estimate expectations of the divergences. For example. For example, autodiff VI (ADVI) in Stan has stochastic target and gradients due to the latter.', 'To fulfill Robbins-Monroe condition of reaching eventually the optimum, the step size is usually gradually decreased. Although this guarantees asymptotic convergence, it may take unfeasible amount of time and the last iteration in finite time can be far from the optimum.', 'Under certain conditions, stochastic optimization with a fixed step size converges to a finite variance stationary process around the optimum. Average of the iterations converges towards the optimum faster. This is an old but under-used idea, aka Polyak–Ruppert averaging.', 'Recently iterate averaging has been used also with name stochastic weight averaging (SWA) in context of deep learning.', 'What we add is 1) a diagnostic for detecting when we have reached stationarity and can start averaging, and 2) a standard error estimate to decide when we can stop averaging or give up if the standard error is not decreasing (indicating violation of conditions).', 'The diagnostics are familiar from MCMC convergence literature (e.g. Rhat, MCSE, and autocorrelation) and VI diagnostics literature (e.g. Pareto k).', '@lauretig Oops, forgot that. The repo will be public next week.']",20,09,1850
230,62,971421923816157185,3301643341,Roger Grosse,"Generalization to longer horizons is the Achilles' heel of gradient-based meta-optimization. Short horizon meta-optimizers decay the learning rate really quickly and stop making progress. New paper w/ @Yuhu_ai_, @mengyer, and Renjie Liao. ",https://arxiv.org/abs/1803.02021,"Careful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. But because the training procedure must be unrolled thousands of times, the meta-objective must be defined with an orders-of-magnitude shorter time horizon than is typical for neural net training. We show that such short-horizon meta-objectives cause a serious bias towards small step sizes, an effect we term short-horizon bias. We introduce a toy problem, a noisy quadratic cost function, on which we analyze short-horizon bias by deriving and comparing the optimal schedules for short and long time horizons. We then run meta-optimization experiments (both offline and online) on standard benchmark datasets, showing that meta-optimization chooses too small a learning rate by multiple orders of magnitude, even when run with a moderately long time horizon (100 steps) typical of work in the area. We believe short-horizon bias is a fundamental problem that needs to be addressed if meta-optimization is to scale to practical neural net training regimes. ",Understanding Short-Horizon Bias in Stochastic Meta-Optimization,1,"[""Generalization to longer horizons is the Achilles' heel of gradient-based meta-optimization. Short horizon meta-optimizers decay the learning rate really quickly and stop making progress. New paper w/ @Yuhu_ai_, @mengyer, and Renjie Liao.\n\n ""]",18,03,252
231,226,1445794870354870278,3119778197,Hanie Sedghi,"“Exploring the limits of large scale pre-training” We systematically study the effect of scaling up data, model size and training time in image recognition on a wide range of downstream tasks, pinpoint the limits, the reasons & provide guidelines. We establish that scaling doesn't lead to a one-model-fits-all solution. As US acc. improves, DS acc. saturates to values << 100%. This gap depends on the relationship between US & DS tasks.+in set of models with similar US accuracy, the best model for different DS tasks varies. We investigate more than 4800 experiments on Vision Transformers, MLP-Mixers & ResNets with No. of parameters ranging from ten million to ten billion, trained on the largest scale of available image data (JFT, ImageNet21K) & evaluated on > 20 downstream image recognition tasks We propose a model for downstream performance that reflects the saturation phenomena & captures the nonlinear relationship in upstream and downstream performance. The model is fitted to the upper hull of trained networks & is robust to sample size variations & sampling biases. We study how scaling up the model size, data size, and compute affects DS performance and show that these parameters impact DS performance mainly through the US performance. 5/10 Delving deeper to understand the reasons that give rise to these phenomena, we show that the saturation behavior we observe is closely related to the way that representations evolve through the layers of the Models. 6/10 We showcase an even more extreme scenario where performance on upstream and downstream are at odds with each other: in order to have a better DS performance, we need to hurt US accuracy.+the optimal hyper-parameters for the head used in pre-training are different for US and DS. The reason behind this discrepancy: by changing head hyper-parameters such as WD & LR, we push the information compressed in the head down to lower layers which leads to performance degradation on US and performance improvement on DS tasks that are related to the US task. 8/10 Our observations are robust to several choices: US data size, no. of shots, transfer vs few-shot setting & architecture. We assert that we should make design choices that improve performance on a breadth of DS tasks +when investing on scaling look at an extra axis: data diversity with my awesome collaborators @bneyshabur @samiraabnar @m__dehghani 10/10",https://arxiv.org/abs/2110.02095,"Recent developments in large-scale machine learning suggest that by scaling up data, model size and training time properly, one might observe that improvements in pre-training would transfer favorably to most downstream tasks. In this work, we systematically study this phenomena and establish that, as we increase the upstream accuracy, the performance of downstream tasks saturates. In particular, we investigate more than 4800 experiments on Vision Transformers, MLP-Mixers and ResNets with number of parameters ranging from ten million to ten billion, trained on the largest scale of available image data (JFT, ImageNet21K) and evaluated on more than 20 downstream image recognition tasks. We propose a model for downstream performance that reflects the saturation phenomena and captures the nonlinear relationship in performance of upstream and downstream tasks. 
Delving deeper to understand the reasons that give rise to these phenomena, we show that the saturation behavior we observe is closely related to the way that representations evolve through the layers of the models. We showcase an even more extreme scenario where performance on upstream and downstream are at odds with each other. That is, to have a better downstream performance, we need to hurt upstream accuracy. ",Exploring the Limits of Large Scale Pre-training,10,"['“Exploring the limits of large scale pre-training”\n\n\n\nWe systematically study the effect of scaling\nup data, model size and training time in image recognition on a wide range of downstream tasks, pinpoint the limits, the reasons & provide guidelines.\n', 'We establish that scaling doesn't lead to a one-model-fits-all solution. As US acc. improves, DS acc. saturates to values << 100%. This gap depends on the relationship between US & DS tasks.+in set of models with similar US accuracy, the best model for different DS tasks varies.', 'We investigate more than 4800 experiments on Vision Transformers,\nMLP-Mixers & ResNets with No. of parameters ranging from ten million to\nten billion, trained on the largest scale of available image data (JFT,\nImageNet21K) & evaluated on > 20 downstream image recognition tasks https://t.co/137zv2J22B', 'We propose a model for downstream performance that reflects the saturation\nphenomena & captures the nonlinear relationship in upstream and downstream performance. The model is fitted to the upper hull of trained networks & is robust to sample size variations & sampling biases.', 'We study how scaling up the model size, data size, and compute affects DS performance and show that these parameters impact DS performance mainly through the US performance. 5/10 https://t.co/UrTy1jqHJv', 'Delving deeper to understand the reasons that give rise to\nthese phenomena, we show that the saturation behavior we observe is closely\nrelated to the way that representations evolve through the layers of the\nModels. 6/10 https://t.co/iJ6FA4Y0nT', 'We showcase an even more extreme scenario where performance on upstream and\ndownstream are at odds with each other: in order to have a better\nDS performance, we need to hurt US accuracy.+the optimal hyper-parameters for the head used in pre-training are different for US and DS. https://t.co/puKEwVkFso', 'The reason behind this discrepancy: by changing head hyper-parameters such as WD & LR, we push the information compressed in the head down to lower layers which leads to performance degradation on US and performance improvement on DS tasks that are related to the US task. 8/10', 'Our observations are robust to several choices: US data size, no. of shots, transfer vs few-shot setting & architecture. We assert that we should make design choices that improve performance on a breadth of DS tasks +when investing on scaling look at an extra axis: data diversity', 'with my awesome collaborators @bneyshabur @samiraabnar @m__dehghani 10/10']",21,10,2450
232,89,1306233895323656192,1215310334,Timo Schick,"New paper! We show that language models are few-shot learners even if they have far less than 175B parameters. Our method performs similarly to @OpenAI's GPT-3 on SuperGLUE after training on 32 examples with just 0.1% of its parameter count: #NLProc This is achieved by combining PET () with pretrained ALBERT. Key factors for strong performance include concurrently using multiple ""task descriptions"" and using labeled data to perform actual parameter updates.",https://arxiv.org/abs/2009.07118,"When scaled to hundreds of billions of parameters, pretrained language models such as GPT-3 (Brown et al., 2020) achieve remarkable few-shot performance. However, enormous amounts of compute are required for training and applying such big models, resulting in a large carbon footprint and making it difficult for researchers and practitioners to use them. We show that performance similar to GPT-3 can be obtained with language models that are much ""greener"" in that their parameter count is several orders of magnitude smaller. This is achieved by converting textual inputs into cloze questions that contain a task description, combined with gradient-based optimization; exploiting unlabeled data gives further improvements. We identify key factors required for successful natural language understanding with small language models. ","It's Not Just Size That Matters: Small Language Models Are Also Few-Shot
Learners",2,"[""New paper! We show that language models are few-shot learners even if they have far less than 175B parameters. Our method performs similarly to @OpenAI's GPT-3 on SuperGLUE after training on 32 examples with just 0.1% of its parameter count: #NLProc "", 'This is achieved by combining PET (https://t.co/YCrBnggof7) with pretrained ALBERT. Key factors for strong performance include concurrently using multiple ""task descriptions"" and using labeled data to perform actual parameter updates.']",20,09,482
233,75,1426259065202692103,1425872270627639303,or castel,"*New #NLProc paper alert!* We examine greedy decoding for extractive QA, by comparing it with our optimal algorithm: exact-extract. We found out that given just a few examples - greedy decoding does a great job! Paper: 1/N Greedy decoding is used with great success for extractive QA. But it is not necessarily extractive (=generates spans from the passage) nor exact (=produces the max prob span). Can we do better? With @ori__ram @AviaEfrat and @omerlevy_ Summary Thread: 2/N We present a novel decoding algorithm: *exact-extract*, a dynamic programming algorithm that efficiently calculates the probability of all possible spans from the input passage, enabling us to find the most probable one. 3/N We first examine T5's performance in a few-shot setting. Given only 16 examples, the model reaches ~82% F1 on SQuAD. 1024 examples are enough to reach human performance (91% F1); all this while using the naive greedy decoding, matching our optimal algorithm quite quickly. 4/N On some datasets the gap is more substantial - but it shrinks down at 1024 examples too. 5/N This is not the case for the zero-shot setting, where exact-extract obtains 60% F1 on SQuAD, without any fine-tuning (10 points over greedy decoding). 6/N We have plenty more analyses, results, and even a lightweight pretraining method for improving greedy decoding in zero-shot. 7/N Overall, our results show that the naive greedy decoding is nearly as good as the optimal strategy even when only a handful of labeled examples are available; and that pretrained language models can be easily adapted to extractive QA and with great results. 8/8 and the paper, since Twitter is hiding the first tweet: and the paper, since Twitter is hiding the first tweet: ",https://arxiv.org/abs/2108.05857,"Fine-tuned language models use greedy decoding to answer reading comprehension questions with relative success. However, this approach does not ensure that the answer is a span in the given passage, nor does it guarantee that it is the most probable one. Does greedy decoding actually perform worse than an algorithm that does adhere to these properties? To study the performance and optimality of greedy decoding, we present exact-extract, a decoding algorithm that efficiently finds the most probable answer span in the context. We compare the performance of T5 with both decoding algorithms on zero-shot and few-shot extractive question answering. When no training examples are available, exact-extract significantly outperforms greedy decoding. However, greedy decoding quickly converges towards the performance of exact-extract with the introduction of a few training examples, becoming more extractive and increasingly likelier to generate the most probable span as the training set grows. We also show that self-supervised training can bias the model towards extractive behavior, increasing performance in the zero-shot setting without resorting to annotated examples. Overall, our results suggest that pretrained language models are so good at adapting to extractive question answering, that it is often enough to fine-tune on a small training set for the greedy algorithm to emulate the optimal decoding strategy. ",How Optimal is Greedy Decoding for Extractive Question Answering?,10,"['*New #NLProc paper alert!*\n \nWe examine greedy decoding for extractive QA, by comparing it with our optimal algorithm: exact-extract. 
We found out that given just a few examples - greedy decoding does a great job!\nPaper: \n \n1/N', 'Greedy decoding is used with great success for extractive QA. But it is not necessarily extractive (=generates spans from the passage) nor exact (=produces the max prob span). Can we do better?\n\nWith @ori__ram @AviaEfrat and @omerlevy_ \n\nSummary Thread:\n\n2/N', 'We present a novel decoding algorithm: *exact-extract*, a dynamic programming algorithm that efficiently calculates the probability of all possible spans from the input passage, enabling us to find the most probable one.\n\n3/N', 'We first examine T5's performance in a few-shot setting. Given only 16 examples, the model reaches ~82% F1 on SQuAD. 1024 examples are enough to reach human performance (91% F1); all this while using the naive greedy decoding, matching our optimal algorithm quite quickly. \n\n4/N https://t.co/0msfiP5UCk', 'On some datasets the gap is more substantial - but it shrinks down at 1024 examples too.\n\n5/N https://t.co/EBQP9pj8vn', 'This is not the case for the zero-shot setting, where exact-extract obtains 60% F1 on SQuAD, without any fine-tuning (10 points over greedy decoding).\n\n6/N', 'We have plenty more analyses, results, and even a lightweight pretraining method for improving greedy decoding in zero-shot.\n\n7/N', 'Overall, our results show that the naive greedy decoding is nearly as good as the optimal strategy even when only a handful of labeled examples are available; and that pretrained language models can be easily adapted to extractive QA and with great results. \n\n8/8', 'and the paper, since Twitter is hiding the first tweet: https://t.co/Rfo8eIZs65', 'and the paper, since Twitter is hiding the first tweet: https://t.co/Rfo8eIZs65']",21,08,1775
234,17,1234293645274075136,2781150596,Innes Bigaran,"New paper on the arXiv today: ""Getting chirality right"": Exploring the chiral scalar LQ solutions to g-2 (e/mu) in a top-philic framework. @RVolkas #AcademicChatter #phdchat @will_pietrak @RVolkas Not entirely sure yet, waiting excitedly for @Fermilab :)",https://arxiv.org/abs/2002.12544,"We identify the two scalar leptoquarks capable of generating sign-dependent contributions to leptonic magnetic moments, $R_2\sim (\mathbf{3}, \mathbf{2}, 7/6)$ and $S_1\sim (\mathbf{3}, \mathbf{1}, -1/3)$, as favoured by current measurements. We consider the case in which the electron and muon sectors are decoupled, and real-valued Yukawa couplings are specified using an up-type quark mass-diagonal basis. Contributions to $\Delta a_e$ arise from charm-containing loops and $\Delta a_\mu$ from top-containing loops -- hence avoiding dangerous LFV constraints, particularly from $\mu \to e \gamma$. The strongest constraints on these models arise from contributions to the Z leptonic decay widths, high-$p_T$ leptonic tails at the LHC, and from (semi)leptonic kaon decays. To be a comprehensive solution to the $(g-2)_{e/\mu}$ puzzle we find that the mass of either leptoquark must be $\lesssim 65$ TeV. This analysis can be embedded within broader flavour anomaly studies, including those of hierarchical leptoquark coupling structures. It can also be straightforwardly adapted to accommodate future measurements of leptonic magnetic moments, such as those expected from the Muon $g-2$ collaboration in the near future. ","Getting chirality right: single scalar leptoquark solutions to the
$(g-2)_{e,\mu}$ puzzle",2,"['New paper on the arXiv today: ""Getting chirality right"": \nExploring the chiral scalar LQ solutions to g-2 (e/mu) in a top-philic framework. @RVolkas #AcademicChatter #phdchat', '@will_pietrak @RVolkas Not entirely sure yet, waiting excitedly for \n@Fermilab :)']",20,02,269
235,69,1427446085631451160,1093387119148462081,Daniel Green,"New paper: This is the outcome of (a) learning a lot of neutrino cosmology for the CMB-S4 Science Book and (b) needing an excuse to collaborate with some old friends Key idea: CMB lensing already constrains sub-percent of DM interacting with neutrinos ",https://arxiv.org/abs/2108.06928,"The cosmic neutrino background is both a dramatic prediction of the hot Big Bang and a compelling target for current and future observations. The impact of relativistic neutrinos in the early universe has been observed at high significance in a number of cosmological probes. In addition, the non-zero mass of neutrinos alters the growth of structure at late times, and this signature is a target for a number of upcoming surveys. These measurements are sensitive to the physics of the neutrino and could be used to probe physics beyond the standard model in the neutrino sector. We explore an intriguing possibility where light right-handed neutrinos are coupled to all, or a fraction of, the dark matter through a mediator. In a wide range of parameter space, this interaction only becomes important at late times and is uniquely probed by late-time cosmological observables. Due to this coupling, the dark matter and neutrinos behave as a single fluid with a non-trivial sound speed, leading to a suppression of power on small scales. In current and near-term cosmological surveys, this signature is equivalent to an increase in the sum of the neutrino masses. Given current limits, we show that at most 0.5% of the dark matter could be coupled to neutrinos in this way. ",Neutrino Interactions in the Late Universe,1,['New paper: \n\nThis is the outcome of (a) learning a lot of neutrino cosmology for the CMB-S4 Science Book and (b) needing an excuse to collaborate with some old friends\n\nKey idea: CMB lensing already constrains sub-percent of DM interacting with neutrinos '],21,08,266
236,164,1270093422691364864,54599008,Chase,"New paper drop! Anomaly Detection with TensorNetworks. The basic idea is really simple. We map our input into a vector space. If the length of our vector is close to 0, we flag it as an anomaly. We achieve this by taking the inner product of our input/MPO model with itself. Again, very simple. One of the awesome advantages of using a tensor network is that our model is entirely linear. This allows us to include a GLOBAL penalty that tries to bring all possible inputs close to 0. This penalty is just the partition function of our MPO. These two terms bring us to a really simple and beautiful loss function. The first term tries to bring all normal inputs to the edge of a unit sphere, the second tries to bring everything to 0. And our results kick ass. I want to point out the wine dataset specifically. This one is hard because its so small and so sparse. The second best method got 60% AUC, our method got 97%. This was a really awesome project, and Jensen knocked it out of the park with how quickly we were able to publish this. Special thanks to @sleichen and @Theteamatx for making this work possible! Congrats again Jensen on the phenomenal paper! Defund the police. ",https://arxiv.org/abs/2006.02516,"Originating from condensed matter physics, tensor networks are compact representations of high-dimensional tensors. In this paper, the prowess of tensor networks is demonstrated on the particular task of one-class anomaly detection. We exploit the memory and computational efficiency of tensor networks to learn a linear transformation over a space with dimension exponential in the number of original features. The linearity of our model enables us to ensure a tight fit around training instances by penalizing the model's global tendency to a predict normality via its Frobenius norm---a task that is infeasible for most deep learning models. Our method outperforms deep and classical algorithms on tabular datasets and produces competitive results on image datasets, despite not exploiting the locality of images. ",Anomaly Detection with Tensor Networks,8,"['New paper drop!\n\nAnomaly Detection with TensorNetworks.\n', 'The basic idea is really simple. We map our input into a vector space. If the length of our vector is close to 0, we flag it as an anomaly. https://t.co/Ksbk4psTLG', 'We achieve this by taking the inner product of our input/MPO model with itself. Again, very simple. https://t.co/WTHkeKGlTQ', 'One of the awesome advantages of using a tensor network is that our model is entirely linear. This allows us to include a GLOBAL penalty that tries to bring all possible inputs close to 0.\n\nThis penalty is just the partition function of our MPO. https://t.co/2akTwaG8zl', 'These two terms bring us to a really simple and beautiful loss function. The first term tries to bring all normal inputs to the edge of a unit sphere, the second tries to bring everything to 0. https://t.co/CQPYMaGVTO', 'And our results kick ass. I want to point out the wine dataset specifically. This one is hard because its so small and so sparse. The second best method got 60% AUC, our method got 97%. https://t.co/qluZ0Wv2Hr', 'This was a really awesome project, and Jensen knocked it out of the park with how quickly we were able to publish this.\n\nSpecial thanks to @sleichen and @Theteamatx for making this work possible!\n\nCongrats again Jensen on the phenomenal paper!', 'Defund the police.\n\nhttps://t.co/UEVk2XPhzk']",20,06,1229
237,123,1369688157558431749,1253758756304809984,Igor Mordatch,"What are the limits to the generalization of large pretrained transformer models? We find minimal fine-tuning (~0.1% of params) performs as well as training from scratch on a completely new modality! with @_kevinlu, @adityagrover_, @pabbeel paper: 1/8 We take pretrained GPT-2 and freeze the attention & FF layers to obtain core of Frozen Pretrained Transformer (FPT). To adapt to new modality & task, we init *linear* input and output layers. Despite only training .1% of params, FPT matches performance of full transformer! 2/8 We visualize the attention maps of the frozen transformer. Despite not finetuning the self-attention layers on the new modality, FPT is able to learn to attend to the relevant bits to compute an elementwise XOR with perfect accuracy for sequences of length up to 256. 3/8 Compared to a randomly initialized frozen transformer, pretraining with language (FPT) yields large compute benefits, showing that, much like common practice for in-modality finetuning, we can save computation by starting from a pretrained model: 4/8 What enables this transfer? We find that simply using a randomly initialized frozen transformer already greatly outperforms a randomly initialized frozen LSTM: 5/8 Additionally, by incorporating various sources of pretraining supervision, even a little pretraining, for example learning to memorize bits, can help transfer: 6/8 As we move from small specialist models to large generalist models, we're excited by the potential for pretraining regimes that could train a universal computation engine. Simply adding more parameters and using a larger model already improves performance: 7/8 For more details, see our paper on arXiv or play our demo/code on Github! arXiv: Github: 8/8 @rndmcnlly @_kevinlu @adityagrover_ @pabbeel Indeed Adam! That was our starting hypothesis. One thing we're still not quite clear on is whether this FPT interpreter largely performs generic input summarization or more extensive computation.",http://arxiv.org/abs/2103.05247,"We investigate the capability of a transformer pretrained on natural language to generalize to other modalities with minimal finetuning -- in particular, without finetuning of the self-attention and feedforward layers of the residual blocks. We consider such a model, which we call a Frozen Pretrained Transformer (FPT), and study finetuning it on a variety of sequence classification tasks spanning numerical computation, vision, and protein fold prediction. In contrast to prior works which investigate finetuning on the same modality as the pretraining dataset, we show that pretraining on natural language can improve performance and compute efficiency on non-language downstream tasks. Additionally, we perform an analysis of the architecture, comparing the performance of a random initialized transformer to a random LSTM. Combining the two insights, we find language-pretrained transformers can obtain strong performance on a variety of non-language tasks. 
",Pretrained Transformers as Universal Computation Engines,9,"['What are the limits to the generalization of large pretrained transformer models?\n\nWe find minimal fine-tuning (~0.1% of params) performs as well as training from scratch on a completely new modality!\n\nwith @_kevinlu, @adityagrover_, @pabbeel\npaper: \n\n1/8', 'We take pretrained GPT-2 and freeze the attention & FF layers to obtain core of Frozen Pretrained Transformer (FPT).\n\nTo adapt to new modality & task, we init *linear* input and output layers.\n\nDespite only training .1% of params, FPT matches performance of full transformer!\n\n2/8 https://t.co/1zjfCKm9rK', 'We visualize the attention maps of the frozen transformer.\n\nDespite not finetuning the self-attention layers on the new modality, FPT is able to learn to attend to the relevant bits to compute an elementwise XOR with perfect accuracy for sequences of length up to 256.\n\n3/8 https://t.co/tC1USSV4rS', 'Compared to a randomly initialized frozen transformer, pretraining with language (FPT) yields large compute benefits, showing that, much like common practice for in-modality finetuning, we can save computation by starting from a pretrained model:\n\n4/8 https://t.co/vmJlrBsfhg', 'What enables this transfer? We find that simply using a randomly initialized frozen transformer already greatly outperforms a randomly initialized frozen LSTM:\n\n5/8 https://t.co/17uMUZ1YO5', 'Additionally, by incorporating various sources of pretraining supervision, even a little pretraining, for example learning to memorize bits, can help transfer:\n\n6/8 https://t.co/KqOxlBgTY9', 'As we move from small specialist models to large generalist models, we're excited by the potential for pretraining regimes that could train a universal computation engine.\n\nSimply adding more parameters and using a larger model already improves performance:\n\n7/8 https://t.co/sxtgfCUrGw', 'For more details, see our paper on arXiv or play our demo/code on Github!\n\narXiv: https://t.co/DtWGJ0Afh7\n\nGithub: https://t.co/9ts1FlqHFw\n\n8/8', ""@rndmcnlly @_kevinlu @adityagrover_ @pabbeel Indeed Adam! That was our starting hypothesis. One thing we're still not quite clear on is whether this FPT interpreter largely performs generic input summarization or more extensive computation.""]",21,03,2038
238,206,1252078787144986625,1710697381,Diego F. Torres,"Today we release 'Introducing the HD+B model for pulsar wind nebulae: a hybrid hydrodynamics/radiative approach', MNRAS @RoyalAstroSoc, . This is a novel method to study pulsar wind nebulae. A hopefully intuitive conceptual summary is shown in the figure. ",https://arxiv.org/abs/2004.08171,"Identification and characterization of a rapidly increasing number of pulsar wind nebulae is, and will continue to be, a challenge of high-energy gamma-ray astrophysics. Given that such systems constitute -- by far -- the most numerous expected population in the TeV regime, such characterization is important not only to learn about the sources per se from an individual and population perspective, but also to be able to connect them with observations at other frequencies, especially in radio and X-rays. Also, we need to remove the emission from nebulae in highly confused regions of the sky for revealing other underlying emitters. In this paper we present a new approach for theoretical modelling of pulsar wind nebulae: a hybrid hydrodynamic-radiative model able to reproduce morphological features and spectra of the sources, with relatively limited numerical cost. ","Introducing the HD+B model for pulsar wind nebulae: a hybrid
hydrodynamics/radiative approach",1,"[""Today we release 'Introducing the HD+B model for pulsar wind nebulae: a hybrid hydrodynamics/radiative approach', MNRAS @RoyalAstroSoc, . This is a novel method to study pulsar wind nebulae. A hopefully intuitive conceptual summary is shown in the figure. ""]",20,04,268
239,60,1308357094136053760,1288452767376445440,Patrick Kidger,"New paper, with @RickyTQChen! ""Hey, that's not an ODE"": Faster ODE Adjoints with 12 Lines of Code We roughly double the training speed of neural ODEs. 1/ We show how to make the backward (adjoint) pass much cheaper. We see a median of 40% fewer function evaluations (NFEs) across experiments on multiple domains: time series problems, generative modelling, and physical control. On some problems we see as much as 62% fewer NFEs. 2/ The best bit is, we do this with just one easy change - the ""12 lines of code"" from the title. (A number that includes visual whitespace, incidentally :D ) This makes it an easy thing to add to any existing project. 3/ The idea is that when solving the adjoint equations, typical adaptive-step differential equation solvers will be overzealous about rejecting steps. By exploiting the particular structure of the adjoint equations, things get much things cheaper/faster. 4/ Questions? Comments? Let me or @RickyTQChen know! 5/5 @MichaelPoli6 @RickyTQChen Haha :D Yup, it's available in torchdiffeq. We even include a short code example in the paper you can just copy-paste.",https://arxiv.org/abs/2009.09457,"Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver. A proposed step is accepted if its error, \emph{relative to some norm}, is sufficiently small; else it is rejected, the step is shrunk, and the process is repeated. Here, we demonstrate that the particular structure of the adjoint equations makes the usual choices of norm (such as $L^2$) unnecessarily stringent. By replacing it with a more appropriate (semi)norm, fewer steps are unnecessarily rejected and the backpropagation is made faster. This requires only minor code modifications. Experiments on a wide range of tasks -- including time series, generative modeling, and physical control -- demonstrate a median improvement of 40% fewer function evaluations. On some problems we see as much as 62% fewer function evaluations, so that the overall training time is roughly halved. ","""Hey, that's not an ODE"": Faster ODE Adjoints via Seminorms",6,"['New paper, with @RickyTQChen!\n\n""Hey, that\'s not an ODE"": Faster ODE Adjoints with 12 Lines of Code\n\n\n\nWe roughly double the training speed of neural ODEs.\n\n1/ ', 'We show how to make the backward (adjoint) pass much cheaper.\n\nWe see a median of 40% fewer function evaluations (NFEs) across experiments on multiple domains: time series problems, generative modelling, and physical control. On some problems we see as much as 62% fewer NFEs.\n\n2/', 'The best bit is, we do this with just one easy change - the ""12 lines of code"" from the title.\n(A number that includes visual whitespace, incidentally :D )\n\nThis makes it an easy thing to add to any existing project.\n\n3/', 'The idea is that when solving the adjoint equations, typical adaptive-step differential equation solvers will be overzealous about rejecting steps. By exploiting the particular structure of the adjoint equations, things get much things cheaper/faster.\n\n4/', 'Questions? Comments? Let me or @RickyTQChen know!\n\n5/5', ""@MichaelPoli6 @RickyTQChen Haha :D Yup, it's available in torchdiffeq. We even include a short code example in the paper you can just copy-paste.""]",20,09,1127
240,71,960872925560823810,20309837,Michael Veale,"We talk lots about theoretical algorithmic fairness+accountability, but can we learn from those grappling w/ these issues on the ground? 27 folks in police, justice, child welfare ML shared their experiences in our new #chi2018 paper #fatml @emax @RDBinns ",https://arxiv.org/abs/1802.01029,"Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions---like taxation, justice, and child protection---are now commonplace. How might designers support such human values? We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and imbuing public values into their work. The results suggest a disconnect between organisational and institutional realities, constraints and needs, and those addressed by current research into usable, transparent and 'discrimination-aware' machine learning---absences likely to undermine practical initiatives unless addressed. We see design opportunities in this disconnect, such as in supporting the tracking of concept drift in secondary data sources, and in building usable transparency tools to identify risks and incorporate domain knowledge, aimed both at managers and at the 'street-level bureaucrats' on the frontlines of public service. We conclude by outlining ethical challenges and future directions for collaboration in these high-stakes applications. ","Fairness and Accountability Design Needs for Algorithmic Support in
High-Stakes Public Sector Decision-Making",1,"['We talk lots about theoretical algorithmic fairness+accountability, but can we learn from those grappling w/ these issues on the ground? 27 folks in police, justice, child welfare ML shared their experiences in our new #chi2018 paper #fatml @emax @RDBinns ']",18,02,269
241,5,723001334908100608,56872711,Tejas Kulkarni,"check our new paper on - Hierarchical Deep RL: Integrating temporal abstraction and intrinsic motivation"" @iandanforth agreed re saliency. Ext rewards have always bothered me. But they seem appropriate to represent unalterable rewards w.r.t agent @Miles_Brundage if you chop top layer, model becomes DQN. Need flexible memory and qnet to learn objecty things to exploit top layer",https://arxiv.org/abs/1604.06057,"Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. The primary difficulty arises due to insufficient exploration, resulting in an agent being unable to learn robust value functions. Intrinsically motivated agents can explore new behavior for its own sake rather than to directly solve problems. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning. A top-level value function learns a policy over intrinsic goals, and a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse, delayed feedback: (1) a complex discrete stochastic decision process, and (2) the classic ATARI game `Montezuma's Revenge'. ","Hierarchical Deep Reinforcement Learning: Integrating Temporal
Abstraction and Intrinsic Motivation",3,"['check our new paper on - Hierarchical Deep RL: Integrating temporal abstraction and intrinsic motivation"" ', '@iandanforth agreed re saliency. Ext rewards have always bothered me. But they seem appropriate to represent unalterable rewards w.r.t agent', '@Miles_Brundage if you chop top layer, model becomes DQN. Need flexible memory and qnet to learn objecty things to exploit top layer']",16,04,386
242,176,1458172614019633169,726232195707318273,âš Maicol | A | Ochoa â©,"We study the emergence of the molecule-plasmon strong-coupling regime with semiclassical models, deriving forms for the coupling strength, damping rates, and conditions for observing the Rabi splitting in terms of geometric parameters. #physics #Chemistry ",https://arxiv.org/abs/2111.03730,"The interaction between excited states of a molecule and excited states of metal nanostructure (e.g. plasmons) leads to hybrid states with modified optical properties. When plasmon resonance is swept through molecular transition frequency an avoided crossing may be observed, which is often regarded as a signature of strong coupling between plasmons and molecules. Such strong coupling is expected to be realized when $2|U|/{\hbar\Gamma}>1$, where $U$ and ${\Gamma}$ are the molecule-plasmon coupling and the spectral width of the optical transition respectively. Because both $U$ and ${\Gamma}$ strongly increase with decreasing distance between a molecule and a plasmonic structure it is not obvious that this condition can be satisfied for any molecule-metal surface distance. In this work we investigate the behavior of $U$ and ${\Gamma}$ for several geometries. Surprisingly, we find that if the only contributions to ${\Gamma}$ are lifetime broadenings associated with the radiative and nonradiative relaxation of a single molecular vibronic transition, including effects on molecular radiative and nonradiative lifetimes induced by the metal, the criterion $2|U|/{\hbar\Gamma}>1$ is easily satisfied by many configurations irrespective of the metal-molecule distance. This implies that the Rabi splitting can be observed in such structures if other sources of broadening are suppressed. Additionally, when the molecule-metal surface distance is varied keeping all other molecular and metal parameters constant, this behavior is mitigated due to the spectral shift associated with the same molecule-plasmon interaction, making the observation of Rabi splitting more challenging. ","Coupling, lifetimes and ""strong coupling"" maps for single molecules at
plasmonic interfaces",1,"['We study the emergence of the molecule-plasmon strong-coupling regime with semiclassical models, deriving forms for the coupling strength, damping rates, and conditions for observing the Rabi splitting in terms of geometric parameters. #physics #Chemistry\n']",21,11,262
243,22,1409404108776390658,70445227,Ike Kunze,"Our new paper on four spin-bit cousins that focus on loss detection is out and will be presented at ANRW '21 (co-located with IETF 111 @inretafo/@ietf). Check out the paper () and try our experiments yourself (). [1/6] In a Mininet testbed, we study the loss detection capabilities of the L-, Q-, R- and T-Bits, which are currently proposed in the @ietf IPPM WG and, similar to the spin-bit, shape measurable signals onto a (transport) connection. () [2/6] Overall, all approaches provide reasonable loss estimates close to the ground truth when subject to random loss. However, longer algorithmic intervals of Q, R, T cause fluctuations, while the L follows the ground truth with a slight delay. [3/6] These intervals of Q, R, T also impact their accuracy in times of bursty loss as whole phases might be wiped out. The L-Bit is quite robust to this as it reports the loss detected by the sender-side loss detection on a packet-by-packet level. [4/6] The previous results base on long-running experiments with roughly 1 Mio sent packets each. When looking at shorter connections, it can be seen that the approaches require certain amounts of sent data to produce readings. Results only stabilize for higher flow volumes. [5/6] From a pure measurement accuracy perspective, the L-Bit and combinations of Q&L / Q&R, look the most promising. However, the approaches allow for different path segmentation granularities. While this might impact real deployments, it is not the focus of our paper. [6/6] ",http://arxiv.org/abs/2106.13710,"Network operators utilize traffic monitoring to locate and fix faults or performance bottlenecks. This often relies on intrinsic protocol semantics, e.g., sequence numbers, that many protocols share implicitly through their packet headers. The arrival of (almost) fully encrypted transport protocols, such as QUIC, significantly complicates this monitoring as header data is no longer visible to passive observers. Recognizing this challenge, QUIC offers explicit measurement semantics by exposing the spin bit to measure a flow's RTT. Ongoing efforts in the IETF IPPM working group argue to expose further information and enable the passive quantification of packet loss. This work implements and evaluates four currently proposed measurement techniques (L-, Q-, R-, and T-bit). We find that all techniques generally provide accurate loss estimations, but that longer algorithmic intervals for Q and R, yet foremost for T, complicate detecting very small loss rates or loss on short connections. Deployment combinations of Q & R as well as Q & L, thus, have the best potential for accurately grasping the loss in networks. ","L, Q, R, and T -- Which Spin Bit Cousin Is Here to Stay?",6,"[""Our new paper on four spin-bit cousins that focus on loss detection is out and will be presented at ANRW '21 (co-located with IETF 111 @inretafo/@ietf). Check out the paper () and try our experiments yourself (). [1/6] "", 'In a Mininet testbed, we study the loss detection capabilities of the L-, Q-, R- and T-Bits, which are currently proposed in the @ietf IPPM WG and, similar to the spin-bit, shape measurable signals onto a (transport) connection. (https://t.co/JwTHhKtQJI) [2/6]', 'Overall, all approaches provide reasonable loss estimates close to the ground truth when subject to random loss. However, longer algorithmic intervals of Q, R, T cause fluctuations, while the L follows the ground truth with a slight delay. 
[3/6] https://t.co/zRwGZ92y53', 'These intervals of Q, R, T also impact their accuracy in times of bursty loss as whole phases might be wiped out. The L-Bit is quite robust to this as it reports the loss detected by the sender-side loss detection on a packet-by-packet level. [4/6] https://t.co/rXhqfVTAhQ', 'The previous results base on long-running experiments with roughly 1 Mio sent packets each. When looking at shorter connections, it can be seen that the approaches require certain amounts of sent data to produce readings. Results only stabilize for higher flow volumes. [5/6] https://t.co/o58R9CHXJn', 'From a pure measurement accuracy perspective, the L-Bit and combinations of Q&L / Q&R, look the most promising. However, the approaches allow for different path segmentation granularities. While this might impact real deployments, it is not the focus of our paper. [6/6] https://t.co/qSzAi4OIPO']",21,06,1551
244,162,1316458822135767040,1210312444221935616,Cyrus Rashtchian,"[1/4] Challenge: recover an unknown string, but you only see noisy copies, bits randomly deleted. How many samples do you need? New survey paper ""Trace Reconstruction Problems in Computational Biology"" w/ Vinnu Bhardwaj, Pavel A. Pevzner @yana_safonova_ [2/4] We've been working on this for about a year. It has two main goals: 1) Describe many new models motivated by computational immunology, so that #MachineLearning / #Statistics / #compbio ppl work on them! 2) Provide a coherent survey of the lots of recent work in the area [3/4] The above pic shows some models, where instead of indels, there is random trimming/errors in prefix or suffix. This is motivated by VDJ recombination, which is a crucial and very cool way that your body fights off bad pathogens (leading to the 1987 #NobelPrize !). [4/4] Beyond immunology, we also discuss #DNA data storage and why the trace reconstruction problems show up naturally when you are recovering the data. We also describe the main theoretical advances, with even more open questions! ",https://arxiv.org/abs/2010.06083,"The problem of reconstructing a string from its error-prone copies, the trace reconstruction problem, was introduced by Vladimir Levenshtein two decades ago. While there has been considerable theoretical work on trace reconstruction, practical solutions have only recently started to emerge in the context of two rapidly developing research areas: immunogenomics and DNA data storage. In immunogenomics, traces correspond to mutated copies of genes, with mutations generated naturally by the adaptive immune system. In DNA data storage, traces correspond to noisy copies of DNA molecules that encode digital data, with errors being artifacts of the data retrieval process. In this paper, we introduce several new trace generation models and open questions relevant to trace reconstruction for immunogenomics and DNA data storage, survey theoretical results on trace reconstruction, and highlight their connections to computational biology. Throughout, we discuss the applicability and shortcomings of known solutions and suggest future research directions. ",Trace Reconstruction Problems in Computational Biology,4,"['[1/4] Challenge: recover an unknown string, but you only see noisy copies, bits randomly deleted. How many samples do you need?\n\nNew survey paper ""Trace Reconstruction Problems in Computational Biology"" w/ Vinnu Bhardwaj, Pavel A. Pevzner @yana_safonova_ \n\n', ""[2/4] We've been working on this for about a year. It has two main goals: \n1) Describe many new models motivated by computational immunology, so that #MachineLearning / #Statistics / #compbio ppl work on them!\n2) Provide a coherent survey of the lots of recent work in the area https://t.co/fYHDTNlH9g"", '[3/4] The above pic shows some models, where instead of indels, there is random trimming/errors in prefix or suffix. This is motivated by VDJ recombination, which is a crucial and very cool way that your body fights off bad pathogens (leading to the 1987 #NobelPrize !). https://t.co/2fprDA6WCj', '[4/4] Beyond immunology, we also discuss #DNA data storage and why the trace reconstruction problems show up naturally when you are recovering the data. We also describe the main theoretical advances, with even more open questions! https://t.co/ookRHrLB0O']",20,10,1062
245,57,1131832606968897537,948044683673923584,Yiping Lu,Our YOPO has a new arxiv version! 5 times faster than the original adversarial training and only needs 2/3 of the GPU time compared with a concurrent paper! **Neural ODE**'s idea is the core! struggling for arxiv version of other neurips submissions... ,https://arxiv.org/abs/1905.00877,"Deep learning achieves state-of-the-art results in many tasks in computer vision and natural language processing. However, recent works have shown that deep networks can be vulnerable to adversarial perturbations, which raised a serious robustness issue of deep networks. Adversarial training, typically formulated as a robust optimization problem, is an effective way of improving the robustness of deep networks. A major drawback of existing adversarial training algorithms is the computational overhead of the generation of adversarial examples, typically far greater than that of the network training. This leads to the unbearable overall computational cost of adversarial training. In this paper, we show that adversarial training can be cast as a discrete time differential game. Through analyzing the Pontryagin's Maximal Principle (PMP) of the problem, we observe that the adversary update is only coupled with the parameters of the first layer of the network. This inspires us to restrict most of the forward and back propagation within the first layer of the network during adversary updates. This effectively reduces the total number of full forward and backward propagation to only one for each group of adversary updates. Therefore, we refer to this algorithm YOPO (You Only Propagate Once). Numerical experiments demonstrate that YOPO can achieve comparable defense accuracy with approximately 1/5 ~ 1/4 GPU time of the projected gradient descent (PGD) algorithm. Our codes are available at https://this https URL ","You Only Propagate Once: Accelerating Adversarial Training via Maximal
Principle",1,"[""Our YOPO has a new arxiv version! 5 times faster than the original adversarial training and only needs 2/3 of the GPU time compared with a concurrent paper! **Neural ODE**'s idea is the core!\nstruggling for arxiv version of other neurips submissions... ""]",19,05,253
246,126,1334389947265126401,802543221943439360,Andrea Caputo," Paper out, pretty excited with it! With Jose Luis Bernal and Marc Kamionkowski we propose a new way to detect dark matter decay using Line Intensity Mapping! We essentially suggest to treat the dark matter as a line interloper and look for it ",https://arxiv.org/abs/2012.00771,"The nature of dark matter is a longstanding mystery in cosmology, which can be studied with laboratory or collider experiments, as well as astrophysical and cosmological observations. In this work, we propose realistic and efficient strategies to detect radiative products from dark-matter decays with line-intensity mapping (LIM) experiments. This radiation will behave as a line interloper for the atomic and molecular spectral lines targeted by LIM surveys. The most distinctive signatures of the contribution from dark-matter radiative decays are an extra anisotropy on the LIM power spectrum due to projection effects, as well as a narrowing and a shift towards higher intensities of the voxel intensity distribution. We forecast the minimum rate of decays into two photons that LIM surveys will be sensitive to as function of the dark-matter mass in the range $\sim 10^{-6}-10$ eV, and discuss how to reinterpret such results for dark matter that decays into a photon and another particle. We find that both the power spectrum and the voxel intensity distribution are expected to be very sensitive to the dark-matter contribution, with the voxel intensity distribution being more promising for most experiments considered. Interpreting our results in terms of the axion, we show that LIM surveys will be extremely competitive to detect its decay products, improving several orders of magnitudes (depending on the mass) the sensitivity of laboratory and astrophysical searches, especially in the mass range $\sim 1-10$ eV. ",Strategies to Detect Dark-Matter Decays with Line-Intensity Mapping,1,"['\n\nPaper out, pretty excited with it!\nWith Jose Luis Bernal and Marc Kamionkowski we propose a new way to detect dark matter decay using Line Intensity Mapping! We essentially suggest to treat the dark matter as a line interloper and look for it ']",20,12,262
247,147,1484451949386817536,1238834561892659207,Jessica Di Cocco,"We were informed of an error in our dataset used for the article ""How Populist are Parties?"" (doi:10.1017/pan.2021.29). You can find the updated dataset here and an addendum/corrigendum where we explain the error here @BernardoMonechi We thank @michaelj505 and @Robert_A_Huber for finding the error!",https://arxiv.org/abs/2201.07972,"This paper is a corrigendum and addendum to the previously published article: 'How Populist are Parties? Measuring Degrees of Populism in Party Manifestos Using Supervised Machine Learning' (Political Analysis, 1-17. doi:10.1017/pan.2021.29). These corrigendum and addendum were prepared to correct errors in data labelling and show some extra insights not included in the previously published paper. Here, we report these corrections and point to some additional conclusions by focusing on the effects of the label reshuffling per parties and years and presenting new figures wherever appropriate. We show that although the simplified labelling method proposed in the previously-published article can induce biases in the correlations with expert scores, random labelling reduces correlations significantly. We show that this is also true for correlations based on a manually-coded data set. These modifications are based on other evidence and results reported in detail in a future publication. ","Corrigendum and addendum to: How Populist are Parties? Measuring Degrees
of Populism in Party Manifestos Using Supervised Machine Learning",2,"['We were informed of an error in our dataset used for the article ""How Populist are Parties?"" (doi:10.1017/pan.2021.29). You can find the updated dataset here and an addendum/corrigendum where we explain the error here \n@BernardoMonechi', 'We thank @michaelj505 and @Robert_A_Huber for finding the error!']",22,01,309
248,25,1032977805649371136,185910194,Graham Neubig,"New #EMNLP2018 paper proposing 1) a mathematical framework for describing data augmentation methods for text 2) a super-simple augmentation method for NMT: replace random source/target words. Nice results on several tasks vs. strong baselines! Great work by authors @cindyxinyiwang, @hieupham789, and Zihang Dai!",https://arxiv.org/abs/1808.07512,"In this work, we examine methods for data augmentation for text-based tasks such as neural machine translation (NMT). We formulate the design of a data augmentation policy with desirable properties as an optimization problem, and derive a generic analytic solution. This solution not only subsumes some existing augmentation schemes, but also leads to an extremely simple data augmentation strategy for NMT: randomly replacing words in both the source sentence and the target sentence with other random words from their corresponding vocabularies. We name this method SwitchOut. Experiments on three translation datasets of different scales show that SwitchOut yields consistent improvements of about 0.5 BLEU, achieving better or comparable performances to strong alternatives such as word dropout (Sennrich et al., 2016a). Code to implement this method is included in the appendix. ","SwitchOut: an Efficient Data Augmentation Algorithm for Neural Machine
Translation",2,"['New #EMNLP2018 paper proposing 1) a mathematical framework for describing data augmentation methods for text 2) a super-simple augmentation method for NMT: replace random source/target words. Nice results on several tasks vs. strong baselines! ', 'Great work by authors @cindyxinyiwang, @hieupham789, and Zihang Dai!']",18,08,326
249,3,1301475345636487168,28734416,Sebastian Risi,"Can we use machine learning methods such as GANs to assess creativity? In a recent collaboration led by @jrafner we try to find out by letting players ""blend"" existing images into new images under varying constraints. Paper: Our study indicates that the system provides a playful experience, affords players a sense of control over the interface, and elicits different types of player behavior, supporting further study of the tool for use in a scalable, playful, creativity assessment. w/ @jacobsherson, @sparvell, @Learnonomy, @jacksohne, @ACRold, @asmaalfadala, @dominicregester",https://arxiv.org/abs/2008.05914,"We present a pilot study on crea.blender, a novel co-creative game designed for large-scale, systematic assessment of distinct constructs of human creativity. Co-creative systems are systems in which humans and computers (often with Machine Learning) collaborate on a creative task. This human-computer collaboration raises questions about the relevance and level of human creativity and involvement in the process. We expand on, and explore aspects of these questions in this pilot study. We observe participants play through three different play modes in crea.blender, each aligned with established creativity assessment methods. In these modes, players ""blend"" existing images into new images under varying constraints. Our study indicates that crea.blender provides a playful experience, affords players a sense of control over the interface, and elicits different types of player behavior, supporting further study of the tool for use in a scalable, playful, creativity assessment. ","crea.blender: A Neural Network-Based Image Generation Game to Assess
Creativity",3,"['Can we use machine learning methods such as GANs to assess creativity? In a recent collaboration led by @jrafner we try to find out by letting players ""blend"" existing images into new images under varying constraints. Paper: ', 'Our study indicates that the system provides a playful experience, affords players a sense of control over the interface, and elicits different types of player behavior, supporting further study of the tool for use in a scalable, playful, creativity assessment. https://t.co/PvIyNv9pBA', 'w/ @jacobsherson, @sparvell, @Learnonomy, @jacksohne, @ACRold, @asmaalfadala, @dominicregester']",20,08,603
250,102,1448364157498060802,1331816112502104130,DeWeese Lab,"We're thrilled to announce our new paper, in which we derive and test a first-principles theory of generalization in deep learning! We affectionately call it the Theory of Eigenlearning. Recent breakthroughs have shown that ∞-width nets are actually very simple: their evolution's described by a "neural tangent kernel" (NTK), and their final learned functions (when trained on MSE loss) are given by kernel regression with that NTK. (2/n) We show that wide net learning is easily understood in the eigenbasis of the NTK: the network more easily learns high-eigenvalue eigenfunctions, which typically correspond to low spatial freq, quantifying the well-known but vague notion of "spectral bias" to low freqs. (3/n) We derive simple expressions for the generalization of NTK regression, then show that they accurately predict generalization *even in finite nets!* We focus on a simple new measure of generalization we call ""learnability""; in Fig, exp (dots) agrees v well w/theory (curves) (4/n) We also prove a fundamental conservation law governing the inductive bias of wide nets: all NTKs have the same fixed budget of "learnability" that they must divvy up among their eigenmodes. Fig shows how the sum of these learnabilities is always the trainset size. (5/n) This work was done by lab members Jamie Simon and Maddie Dickens. It builds on this pioneering work from @blake__bordelon, @canatar_a, and @CPehlevan, which you should also check out! If you have any questions, drop us a tweet or an email! (7/7)",https://arxiv.org/abs/2110.03922,"Kernel regression is an important nonparametric learning algorithm with an equivalence to neural networks in the infinite-width limit. Understanding its generalization behavior is thus an important task for machine learning theory. In this work, we provide a theory of the inductive bias and generalization of kernel regression using a new measure characterizing the ""learnability"" of a given target function. We prove that a kernel's inductive bias can be characterized as a fixed budget of learnability, allocated to its eigenmodes, that can only be increased with the addition of more training data. We then use this rule to derive expressions for the mean and covariance of the predicted function and gain insight into the overfitting and adversarial robustness of kernel regression and the hardness of the classic parity problem. We show agreement between our theoretical results and both kernel regression and wide finite networks on real and synthetic learning tasks. ","A Theory of the Inductive Bias and Generalization of Kernel Regression
and Wide Neural Networks",7,"['We're thrilled to announce our new paper, in which we derive and test a first-principles theory of generalization in deep learning! We affectionately call it the Theory of Eigenlearning.\n\n', 'Recent breakthroughs have shown that ∞-width nets are actually very simple: their evolution's described by a "neural tangent kernel" (NTK), and their final learned functions (when trained on MSE loss) are given by kernel regression with that NTK. (2/n)', 'We show that wide net learning is easily understood in the eigenbasis of the NTK: the network more easily learns high-eigenvalue eigenfunctions, which typically correspond to low spatial freq, quantifying the well-known but vague notion of "spectral bias" to low freqs. (3/n) https://t.co/Imki19Nzll', 'We derive simple expressions for the generalization of NTK regression, then show that they accurately predict generalization *even in finite nets!* We focus on a simple new measure of generalization we call ""learnability""; in Fig, exp (dots) agrees v well w/theory (curves) (4/n) https://t.co/zPP9omyrEb', 'We also prove a fundamental conservation law governing the inductive bias of wide nets: all NTKs have the same fixed budget of "learnability" that they must divvy up among their eigenmodes. Fig shows how the sum of these learnabilities is always the trainset size. (5/n) https://t.co/5U6T9Snpw6', 'This work was done by lab members Jamie Simon and Maddie Dickens. It builds on this pioneering work from @blake__bordelon, @canatar_a, and @CPehlevan, which you should also check out! https://t.co/QeDKdT6U7E', 'If you have any questions, drop us a tweet or an email! (7/7)']",21,10,1552
251,160,1433804864828608524,328326288,Mikhail Kats,"Our new preprint, led by the @RonningGroup: ""Fast recovery of ion-irradiation-induced defects in Ge2Sb2Te5 (GST) thin films at room temperature"" This paper studies ion-induced defects in GST, a phase-change material used in photonics to enable tunability We explore how GST transitions phases upon ion irradiation, and the differences between defect creation and annealing between hexagonal and rock-salt GST, via optical, electrical, and x-ray experiments The @KatsGroup work was supported by @USNavyResearch and in part by NG Next ",https://arxiv.org/abs/2109.00716,"Phase-change materials serve a broad field of applications ranging from non-volatile electronic memory to optical data storage by providing reversible, repeatable, and rapid switching between amorphous and crystalline states accompanied by large changes in the electrical and optical properties. Here, we demonstrate how ion irradiation can be used to tailor disorder in initially crystalline Ge2Sb2Te5 (GST) thin films via the intentional creation of lattice defects. We found that continuous Ar ion irradiation at room temperature of GST films causes complete amorphization of GST when exceeding 0.6 (for rock-salt GST) and 3 (for hexagonal GST) displacements per atom (n_dpa). While the transition from rock-salt to amorphous GST is caused by progressive amorphization via the accumulation of lattice defects, several transitions occur in hexagonal GST upon ion irradiation. In hexagonal GST, the creation of point defects and small defect clusters leads to disordering of intrinsic vacancy layers (van der Waals gaps) that drives the electronic metal-insulator transition. Increasing disorder then induces a structural transition from hexagonal to rock-salt and then leads to amorphization. Furthermore, we observed different annealing behavior of defects for rock-salt and hexagonal GST. The higher amorphization threshold in hexagonal GST compared to rock-salt GST is caused by an increased defect-annealing rate, i.e., a higher resistance against ion-beam-induced disorder. Moreover, we observed that the recovery of defects in GST is on the time scale of seconds or less at room temperature. ","Fast recovery of ion-irradiation-induced defects in Ge2Sb2Te5 thin films
at room temperature",2,"['Our new preprint, led by the @RonningGroup: ""Fast recovery of ion-irradiation-induced defects in Ge2Sb2Te5 (GST) thin films at room temperature"" \n\nThis paper studies ion-induced defects in GST, a phase-change material used in photonics to enable tunability', 'We explore how GST transitions phases upon ion irradiation, and the differences between defect creation and annealing between hexagonal and rock-salt GST, via optical, electrical, and x-ray experiments\n\nThe @KatsGroup work was supported by @USNavyResearch and in part by NG Next https://t.co/cztqhErNY9']",21,09,546
252,226,1374716954276151296,14659319,Arash Badie-Modiri,"Check out our paper on contact tracing apps (i.e. #COVID19 exposure notification) where we study effects of degree distribution, homophily/heterophily of app users and failure (or people just not paying attention!) (w/ @abbas_k_rizi @Faqeeh_ali_ @bolozna) For example, while heterophily is generally bad (since app users happen to interact with mostly non-app users, hence tracing apps are not used at all) too much homophily is also detrimental. There's a sweet spot for mixing of app users and non-app users. Also studied is the effect of targeting/convincing high degree nodes (socially highly active people) to use the app and to what extend this pushed the epidemic threshold back. Check it out on ArXiv: ",https://arxiv.org/abs/2103.12634,"Contact tracing via digital tracking applications installed on mobile phones is an important tool for controlling epidemic spreading. Its effectivity can be quantified by modifying the standard methodology for analyzing percolation and connectivity of contact networks. We apply this framework to networks with varying degree distributions, numbers of application users, and probabilities of quarantine failures. Further, we study structured populations with homophily and heterophily and the possibility of degree-targeted application distribution. Our results are based on a combination of explicit simulations and mean-field analysis. They indicate that there can be major differences in the epidemic size and epidemic probabilities which are equivalent in the normal SIR processes. Further, degree heterogeneity is seen to be especially important for the epidemic threshold but not as much for the epidemic size. The probability that tracing leads to quarantines is not as important as the application adoption rate. Finally, both strong homophily and especially heterophily with regard to application adoption can be detrimental. Overall, epidemic dynamics are very sensitive to all of the parameter values we tested out, which makes the problem of estimating the effect of digital contact tracing an inherently multidimensional problem. ","Epidemic Spreading and Digital Contact Tracing: Effects of Heterogeneous
Mixing and Quarantine Failures",4,"['Check out our paper on contact tracing apps (i.e. #COVID19 exposure notification) where we study effects of degree distribution, homophily/heterophily of app users and failure (or people just not paying attention!)\n\n\n(w/ @abbas_k_rizi @Faqeeh_ali_ @bolozna)', ""For example, while heterophily is generally bad (since app users happen to interact with mostly non-app users, hence tracing apps are not used at all) too much homophily is also detrimental. There's a sweet spot for mixing of app users and non-app users."", 'Also studied is the effect of targeting/convincing high degree nodes (socially highly active people) to use the app and to what extend this pushed the epidemic threshold back.', 'Check it out on ArXiv:\nhttps://t.co/h0X6fV7NeW']",21,03,723
253,30,1164665133890592768,1137569442404036609,Daniel Liu,"New paper đ°: Adversarial point perturbations on 3D objects Blog post: *FOUR* novel #adversarial attacks against #3D #pointcloud #DeepLearning! #security #MachineLearning 1/10 Thanks to Ronald Yu and Hao Su from @UCSanDiego for minimally mentoring me and providing GPUs even though I am a high school student! This paper would not be possible without previous work by @goodfellow_ian @aleks_madry @NicolasPapernot @alexey2004 and many others! 2/10 Idea đĄ: shape-aware attacks with *just* unordered point sets. Two categories: - Distributional: imperceptible; measures perturbation through Hausdorff metric. - Shape: global changes to shape; realistic and robust against point removal defenses. 3/10 Distributional attack: estimate shape of point set and use projected gradient descent to keep perturbations on the shape. Shape estimation uses triangulation algorithms, and projection is sped up with a metric tree. 4/10 For reference, directly using gradient descent results in many outlier points floating around in mid-air: 5/10 Perturbation resampling attack: gradient descent + resampling points onto shape inferred through triangulation to maintain an even sampling of points on the shape. 6/10 Adversarial sticks: perturb a few points and connect them to triangulated shape by resampling points, instead of the harder task of orienting sticks in 3D. This is easier to construct IRL. 7/10 Adversarial sinks: instead of perturbing single points, move ""black holes"" that attract points based on a radial basis function to change the shape. This is fully differentiable. 8/10 Shape attacks perform much better than naive gradient descent (iter. gradient L_2) even when we remove up to half of the points (using previously proposed algorithms) as a defense. 9/10 Please read the blog post () for a brief history of ideas in the field of point set adversarial attacks. 10/10 @UCSanDiego @goodfellow_ian @aleks_madry @NicolasPapernot @alexey2004 Also @ChrSzegedy for the first paper on adversarial examples!",https://arxiv.org/abs/1908.06062,"The importance of training robust neural network grows as 3D data is increasingly utilized in deep learning for vision tasks in robotics, drone control, and autonomous driving. One commonly used 3D data type is 3D point clouds, which describe shape information. We examine the problem of creating robust models from the perspective of the attacker, which is necessary in understanding how 3D neural networks can be exploited. We explore two categories of attacks: distributional attacks that involve imperceptible perturbations to the distribution of points, and shape attacks that involve deforming the shape represented by a point cloud. We explore three possible shape attacks for attacking 3D point cloud classification and show that some of them are able to be effective even against preprocessing steps, like the previously proposed point-removal defenses. 
",Adversarial shape perturbations on 3D point clouds,11,"['New paper đ°:\nAdversarial point perturbations on 3D objects\n\n\nBlog post: \n\n*FOUR* novel #adversarial attacks against #3D #pointcloud #DeepLearning!\n\n#security #MachineLearning\n\n1/10 ', 'Thanks to Ronald Yu and Hao Su from @UCSanDiego for minimally mentoring me and providing GPUs even though I am a high school student!\n\nThis paper would not be possible without previous work by @goodfellow_ian @aleks_madry @NicolasPapernot @alexey2004 and many others!\n\n2/10', 'Idea đĄ: shape-aware attacks with *just* unordered point sets.\n\nTwo categories:\n- Distributional: imperceptible; measures perturbation through Hausdorff metric.\n- Shape: global changes to shape; realistic and robust against point removal defenses.\n\n3/10 https://t.co/EKOSqIN1qO', 'Distributional attack: estimate shape of point set and use projected gradient descent to keep perturbations on the shape.\n\nShape estimation uses triangulation algorithms, and projection is sped up with a metric tree.\n\n4/10 https://t.co/nf3YqF5mkD', 'For reference, directly using gradient descent results in many outlier points floating around in mid-air:\n\n5/10 https://t.co/tcMVqQYSTx', 'Perturbation resampling attack: gradient descent + resampling points onto shape inferred through triangulation to maintain an even sampling of points on the shape.\n\n6/10 https://t.co/V73fsEiUWk', 'Adversarial sticks: perturb a few points and connect them to triangulated shape by resampling points, instead of the harder task of orienting sticks in 3D. This is easier to construct IRL.\n\n7/10 https://t.co/87ZqfUFGnc', 'Adversarial sinks: instead of perturbing single points, move ""black holes"" that attract points based on a radial basis function to change the shape. This is fully differentiable.\n\n8/10 https://t.co/hYzl6eOoou', 'Shape attacks perform much better than naive gradient descent (iter. gradient L_2) even when we remove up to half of the points (using previously proposed algorithms) as a defense.\n\n9/10 https://t.co/Yj5aItwIYV', 'Please read the blog post (https://t.co/FPbzLsT2Op) for a brief history of ideas in the field of point set adversarial attacks.\n\n10/10', '@UCSanDiego @goodfellow_ian @aleks_madry @NicolasPapernot @alexey2004 Also @ChrSzegedy for the first paper on adversarial examples!']",19,08,2081
254,55,1230644029311901703,494134136,Krzysztof Geras,"We just released a new paper on deep learning for screening mammography! "An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization". 1/7 Despite learning only with image-level labels, the model achieves an AUC of 0.93, outperforming ResNet-34 and Faster R-CNN. Compared to ResNet-34, it is 4.1x faster while using 78.4% less memory. In a reader study, it surpasses radiologist-level AUC by a margin of 0.11. 2/7 Our model works in 3 stages. (1) Looking at the entire image with a network of a relatively low capacity to identify the most informative patches. (2) Looking at these patches with a network of a higher capacity. (3) Integrating information obtained in stages (1) and (2). 3/7 We conducted extensive experiments on the importance of various design choices in the model, the type of pooling aggregating saliency maps, the number of patches extracted to examine further, ... 4/7 We are expecting that our model will be especially useful when applied to data for which it is difficult or impossible to collect pixel-level annotations. We also hypothesize that this or similar models could be useful to discover new biomarkers. 5/7 We made the code and the model public at . We are hoping to enable others to experiment with and build upon our model! Please let us know if you find it useful. 6/7 This paper is another product of a collaboration between @cai2r and @NYUDataScience. It was led by @ArtieShen, supported by @NanWu__, @zhansheng, @jpatrickpark, Kangning Liu, @TyagiSudarshini, Laura Heacock, S. Gene Kim, @DrLindaMoy, @kchonyc and myself. 7/7",http://arxiv.org/abs/2002.07613,"Medical images differ from natural images in significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we extend the globally-aware multiple instance classifier, a framework we proposed to address these unique properties of medical images. This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another higher-capacity network to collect details from chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a final prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, consisting of more than one million images, our model achieves an AUC of 0.93 in classifying breasts with malignant findings, outperforming ResNet-34 and Faster R-CNN. Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate, in a reader study, that our model surpasses radiologist-level AUC by a margin of 0.11. The proposed model is available online: this https URL ","An interpretable classifier for high-resolution breast cancer screening
images utilizing weakly supervised localization",7,"['We just released a new paper on deep learning for screening mammography!\n\n"An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization".\n\n\n\n1/7 ', 'Despite learning only with image-level labels, the model achieves an AUC of 0.93, outperforming ResNet-34 and Faster R-CNN. Compared to ResNet-34, it is 4.1x faster while using 78.4% less memory. In a reader study, it surpasses radiologist-level AUC by a margin of 0.11.\n\n2/7 https://t.co/d0PdA0ZBg9', 'Our model works in 3 stages. (1) Looking at the entire image with a network of a relatively low capacity to identify the most informative patches. (2) Looking at these patches with a network of a higher capacity. (3) Integrating information obtained in stages (1) and (2).\n\n3/7 https://t.co/YsoNpVmhjs', 'We conducted extensive experiments on the importance of various design choices in the model, the type of pooling aggregating saliency maps, the number of patches extracted to examine further, ...\n\n4/7 https://t.co/4uZr23aEcC', 'We are expecting that our model will be especially useful when applied to data for which it is difficult or impossible to collect pixel-level annotations. We also hypothesize that this or similar models could be useful to discover new biomarkers.\n\n5/7 https://t.co/I4S9N87lcX', 'We made the code and the model public at https://t.co/xATXueLLWw. We are hoping to enable others to experiment with and build upon our model! Please let us know if you find it useful.\n\n6/7', 'This paper is another product of a collaboration between @cai2r and @NYUDataScience. It was led by @ArtieShen, supported by @NanWu__, @zhansheng, @jpatrickpark, Kangning Liu, @TyagiSudarshini, Laura Heacock, S. Gene Kim, @DrLindaMoy, @kchonyc and myself.\n\n7/7']",20,02,1674
255,123,1303254365793333249,1133276730595192832,Raphael Sonabend,"{distr6} introduces a novel approach to handling probability distributions in #rstats. In this paper we introduce {distr6}, compare different OOP paradigms in R, discuss new & efficient math notation for distributions. #datascience #statistics #coding",https://arxiv.org/abs/2009.02993,"distr6 is an object-oriented (OO) probability distributions interface leveraging the extensibility and scalability of R6, and the speed and efficiency of Rcpp. Over 50 probability distributions are currently implemented in the package with `core' methods including density, distribution, and generating functions, and more `exotic' ones including hazards and distribution function anti-derivatives. In addition to simple distributions, distr6 supports compositions such as truncation, mixtures, and product distributions. This paper presents the core functionality of the package and demonstrates examples for key use-cases. In addition this paper provides a critical review of the object-oriented programming paradigms in R and describes some novel implementations for design patterns and core object-oriented features introduced by the package for supporting distr6 components. ",distr6: R6 Object-Oriented Probability Distributions Interface in R,1,"['{distr6} introduces a novel approach to handling probability distributions in #rstats. In this paper we introduce {distr6}, compare different OOP paradigms in R, discuss new & efficient math notation for distributions.\n\n\n\n#datascience #statistics #coding']",20,09,258
256,18,1355092054771183619,869896064802934788,Jan Rybizki,Submitted my new paper () three minutes after the opening for new submissions (don't make last minute latex changes..). Came out 9th on the list. So only about 20% of submitters try to put their article on top. Do people use submission schedulers yet?,https://arxiv.org/abs/2101.11641,"The Gaia early Data Release 3 has delivered exquisite astrometric data for 1.47 billion sources, which is revolutionizing many fields in astronomy. For a small fraction of these sources, the astrometric solutions are poor, and the reported values and uncertainties may not apply. Before any analysis, it is important to recognize and excise these spurious results - this is commonly done by means of quality flags in the Gaia catalog. Here, we devise a means of separating 'good' from 'bad' astrometric solutions that is an order of magnitude cleaner than any single flag: 99.3% pure and 97.3% complete, as validated on our test data. We devise an extensive sample of manifestly bad astrometric solutions, with parallax that is negative at > 4.5 sigma; and a corresponding sample of presumably good solutions, including sources in HEALPix pixels on the sky that do not contain such negative parallaxes, and sources that fall on the main sequence in a color-absolute magnitude diagram. We then train a neural network that uses 17 pertinent Gaia catalog entries and information about nearby sources to discriminate between these two samples, captured in a single 'astrometric fidelity' parameter. A diverse set of verification tests shows that our approach works very cleanly, including for sources with positive parallaxes. The main limitations of our approach are in the very low-SNR and the crowded regime. Our astrometric fidelities for all of eDR3 can be queried via the Virtual Observatory, our code and data are public. ",A classifier for spurious astrometric solutions in Gaia EDR3,1,"[""Submitted my new paper () three minutes after the opening for new submissions (don't make last minute latex changes..). Came out 9th on the list. So only about 20% of submitters try to put their article on top. Do people use submission schedulers yet?""]",21,01,257
257,17,1453994946860519431,1070325662253166594,Aadarsh Sahoo,"CoMix is now available on ArXiv! We introduce a new temporal contrastive learning approach for unsupervised video domain adaptation by jointly leveraging: video speed, background mixing, and target self-supervision. Paper link: #NeurIPS2021 Either you walk indoors or walk outside, action recognition should not be affected by the ""shift"" in domain/background. Focussing on action semantics is crucial. Our proposed contrastive background mixing component helps in that: Project page: Codes and poster coming soon! Joint work with @rutavms @rpanda89 @kate_saenko_ @AbirDasUCR",https://arxiv.org/abs/2110.15128,"Unsupervised domain adaptation which aims to adapt models trained on a labeled source domain to a completely unlabeled target domain has attracted much attention in recent years. While many domain adaptation techniques have been proposed for images, the problem of unsupervised domain adaptation in videos remains largely underexplored. In this paper, we introduce Contrast and Mix (CoMix), a new contrastive learning framework that aims to learn discriminative invariant feature representations for unsupervised video domain adaptation. First, unlike existing methods that rely on adversarial learning for feature alignment, we utilize temporal contrastive learning to bridge the domain gap by maximizing the similarity between encoded representations of an unlabeled video at two different speeds as well as minimizing the similarity between different videos played at different speeds. Second, we propose a novel extension to the temporal contrastive loss by using background mixing that allows additional positives per anchor, thus adapting contrastive learning to leverage action semantics shared across both domains. Moreover, we also integrate a supervised contrastive learning objective using target pseudo-labels to enhance discriminability of the latent space for video domain adaptation. Extensive experiments on several benchmark datasets demonstrate the superiority of our proposed approach over state-of-the-art methods. Project page: this https URL ","Contrast and Mix: Temporal Contrastive Video Domain Adaptation with
Background Mixing",4,"['CoMix is now available on ArXiv! We introduce a new temporal contrastive learning approach for unsupervised video domain adaptation by jointly leveraging: video speed, background mixing, and target self-supervision.\nPaper link: \n#NeurIPS2021 ', 'Either you walk indoors or walk outside, action recognition should not be affected by the ""shift"" in domain/background. Focussing on action semantics is crucial. Our proposed contrastive background mixing component helps in that: https://t.co/AlOtrHGErF', 'https://t.co/G0rxPdK7KD', 'Project page: https://t.co/E1dgpPFB3a\nCodes and poster coming soon!\nJoint work with @rutavms @rpanda89 @kate_saenko_ @AbirDasUCR']",21,10,610
258,92,1504266340072407043,1352219823539957762,Liam Dugan,"Ever wanted to automatically generate flashcards from a textbook PDF? Our new paper "A Feasibility Study of Answer-Unaware Question Generation for Education" investigates how feasible this is given recent advancements! Thread Fully automatic generation of quizzes or flashcards necessitates "answer-unaware" QG models (i.e. ones that don't require manual selection of answer spans). These models have to decide what is and isn't relevant to ask. This is non-trivial! (1/5) Unsurprisingly, these QG models (typically trained on datasets like SQuAD) tend to ask about topics that ""annotators are most likely to pick"". These topics are not necessarily the most educationally salient. Summarization helps solve this problem! (2/5) Running QG on auto-summarized text allows us to restrict the model to only see sentences that are highly salient to the larger passage. This improves relevance of generated questions (61% -> 78%). The effect is even more pronounced with human-summarized text (61% -> 95%). (3/5) Also, since summaries tend to consist of simpler, more self-contained sentences, QG on summaries produces questions that tend to be more interpretable out of context (56% -> 74%). Again, this effect is even larger when using human-written summaries as input (56% -> 94%). (4/5) These two factors lead to large increases in acceptability of generated Qs on summaries (83%) vs original text (33%) with no corresponding drop in bolded key-term coverage. tl;dr Automatic flashcard/quiz generation *is* currently feasible with min. supervision (5/5) Don't believe the results? Run our repo and try it out for yourself! We include a joint summarization and QG pipeline interface as well as .txt files of textbook chapters. We also provide our data and the code to reproduce our analysis. @abhisuri97 Oh you bet it can! Currently only takes .txt files as input but I you can pretty easily hook up some PDF extraction to run it on PDFs",https://arxiv.org/abs/2203.08685,"We conduct a feasibility study into the applicability of answer-agnostic question generation models to textbook passages. We show that a significant portion of errors in such systems arise from asking irrelevant or uninterpretable questions and that such errors can be ameliorated by providing summarized input. We find that giving these models human-written summaries instead of the original text results in a significant increase in acceptability of generated questions (33% $\rightarrow$ 83%) as determined by expert annotators. We also find that, in the absence of human-written summaries, automatic summarization can serve as a good middle ground. ",A Feasibility Study of Answer-Agnostic Question Generation for Education,8,"['Ever wanted to automatically generate flashcards from a textbook PDF? Our new paper "A Feasibility Study of Answer-Unaware Question Generation for Education" investigates how feasible this is given recent advancements!\n\n\n\nThread ', ""Fully automatic generation of quizzes or flashcards necessitates "answer-unaware" QG models (i.e. ones that don't require manual selection of answer spans). These models have to decide what is and isn't relevant to ask. This is non-trivial! (1/5) https://t.co/IG0kaTHPTw"", 'Unsurprisingly, these QG models (typically trained on datasets like SQuAD) tend to ask about topics that ""annotators are most likely to pick"". These topics are not necessarily the most educationally salient. Summarization helps solve this problem! 
(2/5)', 'Running QG on auto-summarized text allows us to restrict the model to only see sentences that are highly salient to the larger passage. This improves relevance of generated questions (61% -> 78%). The effect is even more pronounced with human-summarized text (61% -> 95%). (3/5) https://t.co/H7VIwZ0bzu', 'Also, since summaries tend to consist of simpler, more self-contained sentences, QG on summaries produces questions that tend to be more interpretable out of context (56% -> 74%). Again, this effect is even larger when using human-written summaries as input (56% -> 94%). (4/5)', 'These two factors lead to large increases in acceptability of generated Qs on summaries (83%) vs original text (33%) with no corresponding drop in bolded key-term coverage. \n\ntl;dr Automatic flashcard/quiz generation *is* currently feasible with min. supervision (5/5) https://t.co/pLcf4i2Aq4', 'Don't believe the results? Run our repo and try it out for yourself! \n\nhttps://t.co/OimFWV9D4U \n\nWe include a joint summarization and QG pipeline interface as well as .txt files of textbook chapters. We also provide our data and the code to reproduce our analysis.', '@abhisuri97 Oh you bet it can! Currently only takes .txt files as input but I you can pretty easily hook up some PDF extraction to run it on PDFs']",22,03,1994
259,34,1341615441697894400,1185977761032110080,Kazumasa Ohno (性é ćæŁ),"A new paper posted! Though the paper is still under review, we have studied haze formation on Triton, an ultra-cold moon of Neptune, using microphysical models. We have discussed what physical processes are going on in Triton's cold and hazy atmosphere. We developed a bin-scheme microphysical model that can trace size and porosity distributions of haze particles in a self-consistent manner. We tested several possible nature of Triton hazes, namely Titan-like sphere and aggregate haze, as well as the hazes coated by C2H4 ices. In a nutshell, our main conclusion is that condensation of C2H4 ices likely play crucial role to control physical and optical properties of Triton hazes. It is hard to explain existing observations of Triton hazes by Voyager 2 assuming Titan-like haze without condensation. Both sphere and aggregate icy hazes can reasonably explain observations, so it is currently difficult to conclude which is true. As the spectral behavior and scattering properties are different, future mission on outer solar system will help to unveil the morphological nature. In both sphere and aggregate cases, total haze mass flux (tholin+ice) is about an order of magnitude lower than that on Triton. Given predominant icy composition, Titan-like haze production rate on Triton is likely much lower than that on Titan. Our findings are bloadly consistent with the results and expectation of a Pluto (+Triton) haze paper that appeared in recent Nature Astronomy. I hope our works can complement their paper. The paper is my first solar system paper. The results highlight a deep connection between haze and cloud formation: vapor condensation considerably alter the haze properties, depending on thermal structure. Hopefully, I would like to explore this phenomena in exoplanet context.",https://arxiv.org/abs/2012.11932,"The largest moon of Neptune, Triton, possess a cold and hazy atmosphere. Since the discovery of near-surface haze layer during the Voyager fly in 1989, the haze formation mechanism has not been investigated in detail. Here, we provide the first haze microphysical model on Triton. Our model solves the evolution of both size and porosity distributions of haze particles in a self-consistent manner. We simulated the formation of sphere and aggregate hazes with and without condensation of the C$_2$H$_4$ ice. The haze particles can grow into fractal aggregates with mass-equivalent sphere sizes of $\sim0.1$--$1~{\rm {\mu}m}$ and fractal dimension of $D_{\rm f} = 1.8$--$2.2$. The ice-free hazes cannot simultaneously explain both UV and visible observations of Voyager 2, while including the condensation of C$_2$H$_4$ ices provides two better solutions. For ice aggregates, the required total haze mass flux is $\sim2\times{10}^{-15}~{\rm g~{cm}^{-2}~s^{-1}}$. For the icy sphere scenario, the column integrated C$_2$H$_4$ production rate is $\sim8\times{10}^{-15}~{\rm g~{cm}^{-2}~s^{-1}}$, and the ice-free mass flux of $\sim6\times{10}^{-17}~{\rm g~{cm}^{-2}~s^{-1}}$. The UV occultation observations at short wavelength $<0.15~{\rm {\mu}m}$ may slightly favor the icy aggregates. Observations of the haze optical depth and the degree of forward scattering in UV and visible should be able to distinguish whether Triton's hazes are icy spheres or ice aggregates in future Triton missions. ",Haze Formation on Triton,7,"[""A new paper posted! 
Though the paper is still under review, we have studied haze formation on Triton, an ultra-cold moon of Neptune, using microphysical models. We have discussed what physical processes are going on in Triton's cold and hazy atmosphere.\n"", 'We developed a bin-scheme microphysical model that can trace size and porosity distributions of haze particles in a self-consistent manner. We tested several possible nature of Triton hazes, namely Titan-like sphere and aggregate haze, as well as the hazes coated by C2H4 ices.', 'In a nutshell, our main conclusion is that condensation of C2H4 ices likely play crucial role to control physical and optical properties of Triton hazes. It is hard to explain existing observations of Triton hazes by Voyager 2 assuming Titan-like haze without condensation.', 'Both sphere and aggregate icy hazes can reasonably explain observations, so it is currently difficult to conclude which is true. As the spectral behavior and scattering properties are different, future mission on outer solar system will help to unveil the morphological nature.', 'In both sphere and aggregate cases, total haze mass flux (tholin+ice) is about an order of magnitude lower than that on Triton. Given predominant icy composition, Titan-like haze production rate on Triton is likely much lower than that on Titan.', 'Our findings are bloadly consistent with the results and expectation of a Pluto (+Triton) haze paper that appeared in recent Nature Astronomy. I hope our works can complement their paper.\nhttps://t.co/rQEbeARDv2', 'The paper is my first solar system paper. The results highlight a deep connection between haze and cloud formation: vapor condensation considerably alter the haze properties, depending on thermal structure. Hopefully, I would like to explore this phenomena in exoplanet context.']",20,12,1810
260,205,1505202773046046723,1164592828640632832,Zorik Gekhman,"New Preprint – RED-ACE: Robust Error Detection for ASR using Confidence Embeddings. We propose a novel model for ASR Errors Detection that leverages ASR confidence scores. Joint work with Dina Zverinski, Jonathan Mallinson and Genady Beryozkin @GoogleAI ",http://arxiv.org/abs/2203.07172,"ASR Error Detection (AED) models aim to post-process the output of Automatic Speech Recognition (ASR) systems, in order to detect transcription errors. Modern approaches usually use text-based input, comprised solely of the ASR transcription hypothesis, disregarding additional signals from the ASR model. Instead, we propose to utilize the ASR system's word-level confidence scores for improving AED performance. Specifically, we add an ASR Confidence Embedding (ACE) layer to the AED model's encoder, allowing us to jointly encode the confidence scores and the transcribed text into a contextualized representation. Our experiments show the benefits of ASR confidence scores for AED, their complementary effect over the textual signal, as well as the effectiveness and robustness of ACE for combining these signals. To foster further research, we publish a novel AED dataset consisting of ASR outputs on the LibriSpeech corpus with annotated transcription errors. ",RED-ACE: Robust Error Detection for ASR using Confidence Embeddings,1,"['New Preprint – RED-ACE: Robust Error Detection for ASR using Confidence Embeddings.\n\nWe propose a novel model for ASR Errors Detection that leverages ASR confidence scores.\nJoint work with Dina Zverinski, Jonathan Mallinson and Genady Beryozkin\n\n@GoogleAI ']",22,03,267
261,198,1301767040953462785,1121482511576653824,Samuel Pawel,"New preprint ""The sceptical Bayes factor for the assessment of replication success"" () together with @HeldLeonhard. We propose a new method for the analysis of replication studies that combines Bayesian hypothesis testing with reverse-Bayes analysis.",https://arxiv.org/abs/2009.01520,"Replication studies are increasingly conducted but there is no established statistical criterion for replication success. We propose a novel approach combining reverse-Bayes analysis with Bayesian hypothesis testing: a sceptical prior is determined for the effect size such that the original finding is no longer convincing in terms of a Bayes factor. This prior is then contrasted to an advocacy prior (the reference posterior of the effect size based on the original study), and replication success is declared if the replication data favour the advocacy over the sceptical prior at a higher level than the original data favoured the sceptical prior over the null hypothesis. The sceptical Bayes factor is the highest level where replication success can be declared. A comparison to existing methods reveals that the sceptical Bayes factor combines several notions of replicability: it ensures that both studies show sufficient evidence against the null and penalises incompatibility of their effect estimates. Analysis of asymptotic properties and error rates, as well as case studies from the Social Sciences Replication Project show the advantages of the method for the assessment of replicability. ",The sceptical Bayes factor for the assessment of replication success,1,"['New preprint ""The sceptical Bayes factor for the assessment of replication success"" () together with @HeldLeonhard.\nWe propose a new method for the analysis of replication studies that combines Bayesian hypothesis testing with reverse-Bayes analysis.']",20,09,256
262,66,1460774809114136577,1254948077825294336,Mahdi Haghifam,"New paper on @shortstein and @zakynthinou's CMI framework, demonstrating its unifying nature for obtaining optimal or near-optimal bounds for the expected excess risk in the realizable setting. Will be a spotlight at NeurIPS'21! Different frameworks for proving generalization are not compatible. For instance, the minmax learnability of thresholds cannot be established using input-output MI, PAC-Bayes bounds, and privacy frameworks. How expressive is CMI in terms of explaining generalization? Some results 1- The CMI framework yields optimal risk bound for sample compression schemes. Covers many algorithms, including SVMs. 2- A broad class of proper ERMs achieves CMI of order O(1) (without suboptimal log n factor) for learning VC classes with finite star number. 3. Steinke and Zakynthinou introduced a variant of CMI called ""evaluated"" CMI (eCMI). We show that, for every interpolating algorithm and data distribution, the expected risk vanishes as the number of samples (n) diverges if and only if its eCMI has sublinear growth with n. Joint work with @kdziugaite, Shay Moran, and @roydanroy. @lzamparo @shortstein @zakynthinou Thanks!",https://arxiv.org/abs/2111.05275,"In this work, we investigate the expressiveness of the ""conditional mutual information"" (CMI) framework of Steinke and Zakynthinou (2020) and the prospect of using it to provide a unified framework for proving generalization bounds in the realizable setting. We first demonstrate that one can use this framework to express non-trivial (but sub-optimal) bounds for any learning algorithm that outputs hypotheses from a class of bounded VC dimension. We prove that the CMI framework yields the optimal bound on the expected risk of Support Vector Machines (SVMs) for learning halfspaces. This result is an application of our general result showing that stable compression schemes Bousquet al. (2020) of size $k$ have uniformly bounded CMI of order $O(k)$. We further show that an inherent limitation of proper learning of VC classes contradicts the existence of a proper learner with constant CMI, and it implies a negative resolution to an open problem of Steinke and Zakynthinou (2020). We further study the CMI of empirical risk minimizers (ERMs) of class $H$ and show that it is possible to output all consistent classifiers (version space) with bounded CMI if and only if $H$ has a bounded star number (Hanneke and Yang (2015)). Moreover, we prove a general reduction showing that ""leave-one-out"" analysis is expressible via the CMI framework. As a corollary we investigate the CMI of the one-inclusion-graph algorithm proposed by Haussler et al. (1994). More generally, we show that the CMI framework is universal in the sense that for every consistent algorithm and data distribution, the expected risk vanishes as the number of samples diverges if and only if its evaluated CMI has sublinear growth with the number of samples. ",Towards a Unified Information-Theoretic Framework for Generalization,6,"[""New paper on @shortstein and @zakynthinou's CMI framework, demonstrating its unifying nature for obtaining optimal or near-optimal bounds for the expected excess risk in the realizable setting.\n\nWill be a spotlight at NeurIPS'21! "", 'Different frameworks for proving generalization are not compatible. For instance, the minmax learnability of thresholds cannot be established using input-output MI, PAC-Bayes bounds, and privacy frameworks. 
How expressive is CMI in terms of explaining generalization?', 'Some results\n1- The CMI framework yields optimal risk bound for sample compression schemes. Covers many algorithms, including SVMs.\n2- A broad class of proper ERMs achieves CMI of order O(1) (without suboptimal log n factor) for learning VC classes with finite star number.', '3. Steinke and Zakynthinou introduced a variant of CMI called ""evaluated"" CMI (eCMI). We show that, for every interpolating algorithm and data distribution, the expected risk vanishes as the number of samples (n) diverges if and only if its eCMI has sublinear growth with n.', 'Joint work with @kdziugaite, Shay Moran, and @roydanroy.', '@lzamparo @shortstein @zakynthinou Thanks!']",21,11,1162
263,87,986765345708011525,4639078397,John Wise,"Do black holes from the first stars grow early on (z>8)? We find that it's rare to have any strong accretion events. Why? These BHs wander around the galaxy, rarely encountering clumps that are quickly eroded by stars. Led by @aBrittonSmith The green shows the ""accretability"" of the gas in the halo with the most (77) black holes. Halo mass = 1.6e9 Msun. Co-authors: @jaregan, @TurloughDownes, @bwoshea, and M. Norman ",http://arxiv.org/abs/1804.06477,"The formation of stellar mass black holes from the remnants of Population III stars provides a source of initial black hole seeds with the potential to grow into intermediate or, in rare cases, possibly supermassive black holes. We use the Renaissance simulation suite to follow the growth of over 15,000 black holes born into mini-haloes in the early Universe. We compute the evolution of the black holes by post-processing individual remnant Population III star particles in the Renaissance simulation snapshots. The black holes populate haloes from 10$^{6}$ M$_{\odot}$ up to 10$^{9}$ M$_{\odot}$. We find that all of the black holes display very inefficient growth. On average the black holes increase their initial mass by a factor 10$^{-5}$, with the most active black holes increasing their mass by approximately 10%. Only a single black hole experiences any period of super-Eddington accretion, but the duration is very short and not repeated. Furthermore, we find no correlation of black hole accretion with halo mass in the mass range sampled. Within most haloes, we identify clumps of cool, dense gas for which accretion rates would be high, but instances of black holes encountering these clumps are rare and short-lived. Star formation competes with black hole growth by consuming available gas and driving down accretion rates through feedback. We conclude that the black holes born from Population III remnants do not form a significant population of intermediate mass black holes in the early Universe and will need to wait until later times to undergo significant accretion, if at all. ","The Growth of Black Holes from Population III Remnants in the
Renaissance Simulations",2,"[""Do black holes from the first stars grow early on (z>8)? We find that it's rare to have any strong accretion events.\n\nWhy? These BHs wander around the galaxy, rarely encountering clumps that are quickly eroded by stars. Led by @aBrittonSmith "", 'The green shows the ""accretability"" of the gas in the halo with the most (77) black holes. Halo mass = 1.6e9 Msun. Co-authors: @jaregan, @TurloughDownes, @bwoshea, and M. Norman https://t.co/pCeN97I0Ww']",18,04,435
264,69,1517308234004058117,17993000,Zach Shahn,"Sorry @KhoaVuUmn. In our new ""working paper"" (are non-economists allowed to have working papers #EconTwitter?), we show that Structural Nested Mean Models are identified under time-varying conditional parallel trends assumptions. (This is good!) 𧔠Why does this matter? 1) SNMMs are models of time-varying treatment effect heterogeneity. So now we can learn how effects vary as a function of time-varying covariates under DiD type assumptions. 2) We only require parallel trends to hold conditional on time-varying covariates. So if there's a measured time-varying confounder of trends, you can adjust for it under our approach. 3) We can identify additional causal contrasts, eg the effect of one blip of treatment followed by no further treatment, beyond the ITT effect of an initial blip like @CdeChaisemartin and D'Haultfoeuille or the effect of sustained treatment like Callaway and @pedrohcgs 4) Our estimating equations are doubly robust and allow ML estimation of nuisance functions with cross-fitting So this seems to be that mythical free lunch where you can estimate more stuff than the other time-varying DiD methods under weaker assumptions? (Except that in practice to reap some of these rewards you will prob need to correctly specify a parametric blip model.) But we're interlopers in econometrics and the paper is still a work in progress, so I'm really hoping for both feedback and pushback from people like @pedrohcgs, @jmwooldridge, @CdeChaisemartin, @Susan_Athey, and others in what I understand to be the working paper tradition Will add simulations and real data analysis among other things soon, just had to put this out before one of you econ jackals took our idea! Also lots of directions for future work opened up by SNMMs. We mention some in the paper. Very joint work with Jamie Robins, Oliver Dukes, David Richardson, and Eric Tchetgen Tchetgen. I don't think any of them are (publicly) on twitter though. @matt_blackwell @a_strezh @KhoaVuUmn Looking forward to reading it!",https://arxiv.org/abs/2204.10291,"In this paper, we generalize methods in the Difference in Differences (DiD) literature by showing that both additive and multiplicative standard and coarse Structural Nested Mean Models (Robins, 1994, 1997, 1998, 2000, 2004; Lok and Degruttola, 2012; Vansteelandt and Joffe, 2014) are identified under parallel trends assumptions. Our methodology enables adjustment for time-varying covariates, identification of effect heterogeneity as a function of time-varying covariates, and estimation of treatment effects under a general class of treatment patterns (e.g. we do not restrict to the `staggered adoption' setting). We stress that these extensions come essentially for free, as our parallel trends assumption is not stronger than other parallel trends assumptions in the DiD literature. However, in contrast to much of the DiD literature, we only consider panel data, not repeated cross sectional data. ",Structural Nested Mean Models Under Parallel Trends Assumptions,10,"['Sorry @KhoaVuUmn. In our new ""working paper"" (are non-economists allowed to have working papers #EconTwitter?), we show that Structural Nested Mean Models are identified under time-varying conditional parallel trends assumptions. (This is good!) 𧔠', 'Why does this matter? 1) SNMMs are models of time-varying treatment effect heterogeneity. 
So now we can learn how effects vary as a function of time-varying covariates under DiD type assumptions.', ""2) We only require parallel trends to hold conditional on time-varying covariates. So if there's a measured time-varying confounder of trends, you can adjust for it under our approach."", ""3) We can identify additional causal contrasts, eg the effect of one blip of treatment followed by no further treatment, beyond the ITT effect of an initial blip like @CdeChaisemartin and D'Haultfoeuille or the effect of sustained treatment like Callaway and @pedrohcgs"", '4) Our estimating equations are doubly robust and allow ML estimation of nuisance functions with cross-fitting', 'So this seems to be that mythical free lunch where you can estimate more stuff than the other time-varying DiD methods under weaker assumptions? (Except that in practice to reap some of these rewards you will prob need to correctly specify a parametric blip model.)', ""But we're interlopers in econometrics and the paper is still a work in progress, so I'm really hoping for both feedback and pushback from people like @pedrohcgs, @jmwooldridge, @CdeChaisemartin, @Susan_Athey, and others in what I understand to be the working paper tradition"", 'Will add simulations and real data analysis among other things soon, just had to put this out before one of you econ jackals took our idea! Also lots of directions for future work opened up by SNMMs. We mention some in the paper.', ""Very joint work with Jamie Robins, Oliver Dukes, David Richardson, and Eric Tchetgen Tchetgen. I don't think any of them are (publicly) on twitter though."", '@matt_blackwell @a_strezh @KhoaVuUmn Looking forward to reading it!']",22,04,2017
265,156,1280482460510433281,1279570369184247810,PĂ©ter Mernyei,"Just published our work on Wiki-CS, a new node classification benchmark! Many existing datasets are structurally similar. Our benchmark provides more variety and raises new challenges. Paper: Will present as a spotlight at the GRL+ workshop at #icml2020! Compared to standard citation network datasets, this graph is much denser, with an average node degree of ~37 as opposed to ~4-5. The Deep Graph Mapper visualisation above also seems to indicate a more centralised, hierarchical structure. There is a lot more inter-class connectivity: we calculated the share of each node's same-class neighbours, and plotted the distribution of this property for different datasets. There is a significant spread in Wiki-CS, most nodes are not in homogenous neighbourhoods. This suggests that more complex methods for aggregating large neighbourhoods might be able to improve prediction accuracy. The work was the result of my final year undergraduate project at @Cambridge_CL, supervised by @catalinacangea. Thanks also to @PetarV_93 for inspiring my interest in this area!",https://arxiv.org/abs/2007.02901,"We present Wiki-CS, a novel dataset derived from Wikipedia for benchmarking Graph Neural Networks. The dataset consists of nodes corresponding to Computer Science articles, with edges based on hyperlinks and 10 classes representing different branches of the field. We use the dataset to evaluate semi-supervised node classification and single-relation link prediction models. Our experiments show that these methods perform well on a new domain, with structural properties different from earlier benchmarks. The dataset is publicly available, along with the implementation of the data pipeline and the benchmark experiments, at this https URL . ",Wiki-CS: A Wikipedia-Based Benchmark for Graph Neural Networks,5,"['Just published our work on Wiki-CS, a new node classification benchmark! Many existing datasets are structurally similar. Our benchmark provides more variety and raises new challenges.\nPaper: \nWill present as a spotlight at the GRL+ workshop at #icml2020! ', 'Compared to standard citation network datasets, this graph is much denser, with an average node degree of ~37 as opposed to ~4-5. The Deep Graph Mapper visualisation above also seems to indicate a more centralised, hierarchical structure. https://t.co/boSESmdxjI', ""There is a lot more inter-class connectivity: we calculated the share of each node's same-class neighbours, and plotted the distribution of this property for different datasets. There is a significant spread in Wiki-CS, most nodes are not in homogenous neighbourhoods. https://t.co/Qusn4PD0JS"", 'This suggests that more complex methods for aggregating large neighbourhoods might be able to improve prediction accuracy.', 'The work was the result of my final year undergraduate project at @Cambridge_CL, supervised by @catalinacangea. Thanks also to @PetarV_93 for inspiring my interest in this area!']",20,07,1091
266,131,994929743022653440,3160301736,Andrés A. Plazas Malagón,"""The Dark Energy Survey Scientists and Celebrities: What do they know about EPO? Do they know things?? Let's find out!"" We wrote a paper about EPO in DES (unfortunately, Mr. Peanutbutter is not part of the authors :P :P) #scicomm #STEM #EPO @theDESurvey",https://arxiv.org/abs/1805.04034,"We present a programmatic study of physicists' and astronomers' attitudes toward education and public outreach (EPO) using 131 survey responses from members of the Dark Energy Survey. We find a disparity between the types of EPO activities researchers deem valuable and those in which they participate. Most respondents are motivated to engage in EPO by a desire to educate the public. Barriers to engagement include career- and skill-related concerns, but lack of time is the main deterrent. We explore the value of centralized EPO efforts and conclude with a list of recommendations for increasing researchers' engagement. ","Astronomers' and Physicists' Attitudes Toward Education & Public
Outreach: A Programmatic Study with The Dark Energy Survey",1,"['""The Dark Energy Survey Scientists and Celebrities: What do they know about EPO? Do they know things?? Let\'s find out!"" We wrote a paper about EPO in DES (unfortunately, Mr. Peanutbutter is not part of the authors :P :P) #scicomm #STEM #EPO @theDESurvey']",18,05,260
267,129,1334130228512354304,314014164,Adrian Price-Whelan,"Paper day! ""Orbital Torus Imaging"" is a new method for dynamical inference (measuring the Galactic mass / dark matter distribution, etc.) that exploits the existence of element-abundance gradients, like these from @APOGEEsurvey: Conceptually, the method words because the element-abundance contours ""show"" us the shapes of orbits, demonstrated here with vertical kinematics of stars (in zâvz) You can think of ""Orbital Torus Imaging"" as a replacement for Jeans modeling that will be more precise and requires fewer assumptions in practice: We only require that orbits are phase-mixed locally in action-space We do a simple demonstration in this paper using @APOGEEsurvey data: Using just 8 element abundances, and only modeling the vertical kinematics of stars (but don't assume separability) we get constraints on the disk mass and scale height that are precise to a few percent Plus, now we finally have a (dynamical) use for all of those millions x 30 element abundances that spectroscopic surveys like @APOGEEsurvey, @galahsurvey, LAMOST, etc. are delivering :) As usual in Galactic dynamics, there are many caveats and we make many assumptions, but I'm excited to see what we will learn when we apply this to larger data sets and with more ambitious / flexible models of the Galactic mass! Thanks to everyone that contributed and helped with this project along the way! @davidwhogg @kvj_astro @melissakness @FritzZwicky @rareflwr41 + many others",https://arxiv.org/abs/2012.00015,"Many approaches to galaxy dynamics assume that the gravitational potential is simple and the distribution function is time-invariant. Under these assumptions there are traditional tools for inferring potential parameters given observations of stellar kinematics (e.g., Jeans models). However, spectroscopic surveys measure many stellar properties beyond kinematics. Here we present a new approach for dynamical inference, Orbital Torus Imaging, which makes use of kinematic measurements and element abundances (or other invariant labels). We exploit the fact that, in steady state, stellar labels vary systematically with orbit characteristics (actions), yet must be invariant with respect to orbital phases (conjugate angles). The orbital foliation of phase space must therefore coincide with surfaces along which all moments of all stellar label distributions are constant. Both classical-statistics and Bayesian methods can be built on this; these methods will be more robust and require fewer assumptions than traditional tools because they require no knowledge of the (spatial) survey selection function and they do not involve second moments of velocity distributions. We perform a classical-statistics demonstration with red giant branch stars from the APOGEE surveys: We model the vertical orbit structure in the Milky Way disk to constrain the local disk mass, scale height, and the disk--halo mass ratio (at fixed local circular velocity). We find that the disk mass can be constrained (na\""ively) at the few-percent level with Orbital Torus Imaging using only eight element-abundance ratios, demonstrating the promise of combining stellar labels with dynamical invariants. ","Orbital Torus Imaging: Using Element Abundances to Map Orbits and Mass
in the Milky Way",7,"['Paper day! \n\n""Orbital Torus Imaging"" is a new method for dynamical inference (measuring the Galactic mass / dark matter distribution, etc.) that exploits the existence of element-abundance gradients, like these from @APOGEEsurvey: ', 'Conceptually, the method words because the element-abundance contours ""show"" us the shapes of orbits, demonstrated here with vertical kinematics of stars (in zâvz) https://t.co/gxAGZADoCz', 'You can think of ""Orbital Torus Imaging"" as a replacement for Jeans modeling that will be more precise and requires fewer assumptions in practice: We only require that orbits are phase-mixed locally in action-space', ""We do a simple demonstration in this paper using @APOGEEsurvey data: Using just 8 element abundances, and only modeling the vertical kinematics of stars (but don't assume separability) we get constraints on the disk mass and scale height that are precise to a few percent https://t.co/qde8OQI18F"", 'Plus, now we finally have a (dynamical) use for all of those millions x 30 element abundances that spectroscopic surveys like @APOGEEsurvey, @galahsurvey, LAMOST, etc. are delivering :)', ""As usual in Galactic dynamics, there are many caveats and we make many assumptions, but I'm excited to see what we will learn when we apply this to larger data sets and with more ambitious / flexible models of the Galactic mass!"", 'Thanks to everyone that contributed and helped with this project along the way! @davidwhogg @kvj_astro @melissakness @FritzZwicky @rareflwr41 + many others']",20,12,1478
268,263,1315670167012204549,409006348,Shrimai,"New paper ""Case Study: Deontological Ethics in NLP"" with Brendon Boldt, @rsalakhu & Alan Black. Takeaways: 1) use ethical frameworks & apply principles to NLP systems like case studies discussed, 2) explore directions in paper to improve NLP systems. Link: ",https://arxiv.org/abs/2010.04658,"Recent work in natural language processing (NLP) has focused on ethical challenges such as understanding and mitigating bias in data and algorithms; identifying objectionable content like hate speech, stereotypes and offensive language; and building frameworks for better system design and data handling practices. However, there has been little discussion about the ethical foundations that underlie these efforts. In this work, we study one ethical theory, namely deontological ethics, from the perspective of NLP. In particular, we focus on the generalization principle and the respect for autonomy through informed consent. We provide four case studies to demonstrate how these principles can be used with NLP systems. We also recommend directions to avoid the ethical issues in these systems. ",Case Study: Deontological Ethics in NLP,1,"['New paper ""Case Study: Deontological Ethics in NLP"" with Brendon Boldt, @rsalakhu & Alan Black. Takeaways: 1) use ethical frameworks & apply principles to NLP systems like case studies discussed, 2) explore directions in paper to improve NLP systems.\nLink: ']",20,10,263
269,115,1468108378475937793,1191386359707029505,Animesh Mukherjee,"New paper: ""Marching with the Pink Parade: Evaluating Visual Search Recommendations for Non-binary Clothing Items"". @acm_chi 2022 (case studies) With @siddsjaiswal. #discrimination #visualsearch #lgbtq #Amazon #stylesnap #beaglevision #lykdat Paper: ",https://arxiv.org/abs/2112.02384,"Fashion, a highly subjective topic is interpreted differently by all individuals. E-commerce platforms, despite these diverse requirements, tend to cater to the average buyer instead of focusing on edge cases like non-binary shoppers. This case study, through participant surveys, shows that visual search on e-commerce platforms like Amazon, Beagle.Vision and Lykdat, is particularly poor for non-binary clothing items. Our comprehensive quantitative analysis shows that these platforms are more robust to binary clothing inputs. The non-binary clothing items are recommended in a haphazard manner, as observed through negative correlation coefficients of the ranking order. The participants also rate the non-binary recommendations lower than the binary ones. Another intriguing observation is that male raters are more inclined to make binary judgements compared to female raters. Thus it is clear that these systems are not inclusive to the minority, disadvantaged communities of society, like LGBTQ+ people. We conclude with a call to action for the e-commerce platforms to take cognizance of our results and be more inclusive. ","Marching with the Pink Parade: Evaluating Visual Search Recommendations
for Non-binary Clothing Items",1,"['New paper: ""Marching with the Pink Parade: Evaluating Visual Search Recommendations for Non-binary Clothing Items"". @acm_chi 2022 (case studies)\n\nWith @siddsjaiswal. \n\n#discrimination #visualsearch #lgbtq #Amazon #stylesnap #beaglevision #lykdat\n\nPaper: ']",21,12,264
270,97,1491439202885861376,109603566,Stephen James,"New paper! Not all reinforcement learning problems are suited for a Gaussian policy parameterization!đ€Ż Plan to use 3D rotation or 6D pose as part of an action space? Consider the Bingham distribution! đ§”1/5 Paper: Code: w/@pabbeel Quaternions are often used as the output rotation representation when using deep networks, but due to their antipodal symmetric property, sampling a quaternion from a Gaussian doesn't seem appropriate. Here comes the Bingham distribution to the rescue! đ§”2/5 The Bingham distribution is super cool! It's parameterized by an orthogonal 4x4 matrix M, and a diag matrix Z. Intuitively, M holds information regarding the *direction* (akin to mean of Gaussian), while Z controls the *spread* (akin to variance) along the direction. đ§”3/5 Although unexplored for RL, the Bingham distribution has had success in supervised learning! Two notable works are by @valentinp @domo_mr_roboto () and @igilitschenski (). đ§”4/5 The gist of our paper is to show *how* we can leverage the power of Bingham for RL! đȘ When evaluating our approach on the Wahba problem and a set of vision-based robot manipulation tasks from RLBench, we achieve superior performance over a Gaussian parameterization! đ§”5/5 @rndmcnlly @pabbeel I'm not familiar with the *matrix* variant you spoke of, but (correct me if I'm wrong), von Mises-Fisher does not model the antipodally symmetric property of Quaternions; I imagine the *matrix* variant would also have this issue. If true, Bingham seems the better option here.",https://arxiv.org/abs/2202.03957,"We propose a new policy parameterization for representing 3D rotations during reinforcement learning. Today in the continuous control reinforcement learning literature, many stochastic policy parameterizations are Gaussian. We argue that universally applying a Gaussian policy parameterization is not always desirable for all environments. One such case in particular where this is true are tasks that involve predicting a 3D rotation output, either in isolation, or coupled with translation as part of a full 6D pose output. Our proposed Bingham Policy Parameterization (BPP) models the Bingham distribution and allows for better rotation (quaternion) prediction over a Gaussian policy parameterization in a range of reinforcement learning tasks. We evaluate BPP on the rotation Wahba problem task, as well as a set of vision-based next-best pose robot manipulation tasks from RLBench. We hope that this paper encourages more research into developing other policy parameterization that are more suited for particular environments, rather than always assuming Gaussian. ","Bingham Policy Parameterization for 3D Rotations in Reinforcement
Learning",6,"['New paper! Not all reinforcement learning problems are suited for a Gaussian policy parameterization!đ€Ż\nPlan to use 3D rotation or 6D pose as part of an action space? Consider the Bingham distribution! đ§”1/5\n\nPaper: \nCode: \nw/@pabbeel ', ""Quaternions are often used as the output rotation\nrepresentation when using deep networks, but due to their antipodal symmetric property, sampling a quaternion from a Gaussian doesn't seem appropriate. Here comes the Bingham distribution to the rescue! đ§”2/5 https://t.co/YE6g66omoV"", ""The Bingham distribution is super cool! It's parameterized by an orthogonal 4x4 matrix M, and a diag matrix Z.\nIntuitively, M holds information regarding the *direction* (akin to mean of Gaussian), while Z controls the *spread* (akin to variance) along the direction. đ§”3/5 https://t.co/JJRgKStxgc"", 'Although unexplored for RL, the Bingham distribution has had success in supervised learning! Two notable works are by @valentinp @domo_mr_roboto (https://t.co/EmplIffFRg) and @igilitschenski (https://t.co/YJqrHgu5os). đ§”4/5', 'The gist of our paper is to show *how* we can leverage the power of Bingham for RL! đȘ\nWhen evaluating our approach on the Wahba problem and a set of vision-based robot manipulation tasks from RLBench, we achieve superior performance over a Gaussian parameterization! đ§”5/5 https://t.co/OORSIqRa6y', ""@rndmcnlly @pabbeel I'm not familiar with the *matrix* variant you spoke of, but (correct me if I'm wrong), von Mises-Fisher does not model the antipodally symmetric property of Quaternions; I imagine the *matrix* variant would also have this issue. If true, Bingham seems the better option here.""]",22,02,1560
271,61,1339484903524712448,1968365508,Samaya Nissanke (she/her) đ,"New @GRAPPAinstitute paper led by my PhD student Banafsheh Shiralilou on computing gravitational waveforms in subset of quadratic gravity theories beyond General Relativity. Working in the post-Newtonian approximation, we produce GW waveforms where curvature non-linearities beyond a scalar field first appear for the inspiral phase. Instead of null tests of GR, this approach takes forward modelling. A big step & congratulations to Banafsheh!!! Banafsheh has just started her second year of PhD (most of it has been during the pandemic), her family is in Iran&Turkey, we are so proud of her hard work and determination. Huge kudos to former group member Tanja Hinderer, who just left, it was such a fun collaboration (some photos attached of last night submission)! @anwesh05 @GRAPPAinstitute Thank you! And of course Tanja, NĂ©stor and Helvi! It was one of those papers where the team spirit was lovely and calm from start to end đđŸ but the science very hard and required true strength and grit and perseverance from Banafsheh (speaking as a former post Newtonian dudette)",https://arxiv.org/abs/2012.09162,"Gravitational waves (GWs) from merging black holes allow for unprecedented probes of strong-field gravity. Testing gravity in this regime requires accurate predictions of gravitational waveform templates in viable extensions of General Relativity. We concentrate on scalar Gauss-Bonnet gravity, one of the most compelling classes of theories appearing as low-energy limit of quantum gravity paradigms, which introduces quadratic curvature corrections to gravity coupled to a scalar field and allows for black hole solutions with scalar-charge. Focusing on inspiralling black hole binaries, we compute the leading-order corrections due to curvature nonlinearities in the GW and scalar waveforms, showing that the new contributions, beyond merely the effect of scalar field, appear at first post-Newtonian order in GWs. We provide ready-to-implement GW polarizations and phasing. Computing the GW phasing in the Fourier domain, we perform a parameter-space study to quantify the detectability of deviations from General Relativity. Our results lay important foundations for future precision tests of gravity with both parametrized and theory-specific searches. ","Nonlinear curvature effects in gravitational waves from inspiralling
black hole binaries",5,"['New @GRAPPAinstitute paper led by my PhD student Banafsheh Shiralilou on computing gravitational waveforms in subset of quadratic gravity theories beyond General Relativity. ', 'Working in the post-Newtonian approximation, we produce GW waveforms where curvature non-linearities beyond a scalar field first appear for the inspiral phase. Instead of null tests of GR, this approach takes forward modelling. A big step & congratulations to Banafsheh!!!', 'Banafsheh has just started her second year of PhD (most of it has been during the pandemic), her family is in Iran&Turkey, we are so proud of her hard work and determination.', 'Huge kudos to former group member Tanja Hinderer, who just left, it was such a fun collaboration (some photos attached of last night submission)! https://t.co/tVeyeUTmrQ', '@anwesh05 @GRAPPAinstitute Thank you! And of course Tanja, NĂ©stor and Helvi! It was one of those papers where the team spirit was lovely and calm from start to end đđŸ but the science very hard and required true strength and grit and perseverance from Banafsheh (speaking as a former post Newtonian dudette)']",20,12,1088
272,185,1336544400856543236,99270209,Guillermo Valle,"Iâm super excited to release this! What do we want from a generalization theory of deep learning? We propose 7 desiderata (Ds), review how existing bounds do at them, and show that a marginal-likelihood PAC-Bayes bound does better at most Ds The desiderata are: The predictions should scale correctly when changing 1 data complexity 2 training set size 3 architecture 4 optimizer The theory should also be 5 non-vacuous 6 efficiently computable 7 rigorous The more of these you can satisfy the better. Which ones matter more depends on the application To help us study the vast literature on generalization bounds, we classify existing types of bounds according to how many assumptions they make on the data or algorithm (This+Section 4 should also be useful as a tutorial/review the field, we hope) The main conclusion from the review is that we want bounds which are data and algorithm dependent. We prove one such bound, which is a high-probability version of a previous PAC-Bayes bound, and is basically proportional to the marginal likelihood The bound does well at predicting what the error does when changing data complexity It also predicts learning curves (m is training set size) remarkably well, accross datasets and architectures! Here we see that the exponents predicted from the bound in two different ways correlate well with the empirical learning curve exponents. The bound is also able to capture some of the variation in generalization from changing the architecture (among several SOTA computer vision architectures), across datasets We suggest that one of the main reasons our bound works better than previous ones may be because it works in function space which is able to capture the (statistical) properties of the behavior of neural nets, more easily than parameter space-based bounds. Finally, we discuss potential applications of the bound, including to neural architecture search (NAS). Perhaps combining the use of NNGP for NAS , with ideas of using learning curves for NAS. btw @roydanroy this is the paper i've been mentioning to you several times, which has been ""cooming soon"" for quite a while @_vaishnavh I also talked about this paper with u in an email earlier this year @jaschasd @TheGregYang may also be interested as we use NNGPs extensively^^",https://arxiv.org/abs/2012.04115,"Generalization in deep learning has been the topic of much recent theoretical and empirical research. Here we introduce desiderata for techniques that predict generalization errors for deep learning models in supervised learning. Such predictions should 1) scale correctly with data complexity; 2) scale correctly with training set size; 3) capture differences between architectures; 4) capture differences between optimization algorithms; 5) be quantitatively not too far from the true error (in particular, be non-vacuous); 6) be efficiently computable; and 7) be rigorous. We focus on generalization error upper bounds, and introduce a categorisation of bounds depending on assumptions on the algorithm and data. We review a wide range of existing approaches, from classical VC dimension to recent PAC-Bayesian bounds, commenting on how well they perform against the desiderata. We next use a function-based picture to derive a marginal-likelihood PAC-Bayesian bound. 
This bound is, by one definition, optimal up to a multiplicative constant in the asymptotic limit of large training sets, as long as the learning curve follows a power law, which is typically found in practice for deep learning problems. Extensive empirical analysis demonstrates that our marginal-likelihood PAC-Bayes bound fulfills desiderata 1-3 and 5. The results for 6 and 7 are promising, but not yet fully conclusive, while only desideratum 4 is currently beyond the scope of our bound. Finally, we comment on why this function-based bound performs significantly better than current parameter-based PAC-Bayes bounds. ",Generalization bounds for deep learning,12,"['Iâm super excited to release this! \n\nWhat do we want from a generalization theory of deep learning?\n\nWe propose 7 desiderata (Ds),\n\nreview how existing bounds do at them,\n\nand show that a marginal-likelihood PAC-Bayes bound does better at most Ds\n\n', 'The desiderata are:\n\nThe predictions should scale correctly when changing\n1 data complexity\n2 training set size\n3 architecture\n4 optimizer\n\nThe theory should also be\n5 non-vacuous\n6 efficiently computable\n7 rigorous', 'The more of these you can satisfy the better. Which ones matter more depends on the application', 'To help us study the vast literature on generalization bounds, we classify existing types of bounds according to how many assumptions they make on the data or algorithm\n\n(This+Section 4 should also be useful as a tutorial/review the field, we hope) https://t.co/RMoWkh3LOp', 'The main conclusion from the review is that we want bounds which are data and algorithm dependent.\n\nWe prove one such bound, which is a high-probability version of a previous PAC-Bayes bound, and is basically proportional to the marginal likelihood https://t.co/HIswoYy92h', 'The bound does well at predicting what the error does when changing data complexity https://t.co/bM2Tz0KgW6', 'It also predicts learning curves (m is training set size) remarkably well, accross datasets and architectures! https://t.co/14YHZxvxRK', 'Here we see that the exponents predicted from the bound in two different ways correlate well with the empirical learning curve exponents. https://t.co/euEeFfnAFg', 'The bound is also able to capture some of the variation in generalization from changing the architecture (among several SOTA computer vision architectures), across datasets https://t.co/BQkXZroXJV', 'We suggest that one of the main reasons our bound works better than previous ones may be because it works in function space which is able to capture the (statistical) properties of the behavior of neural nets, more easily than parameter space-based bounds.', 'Finally, we discuss potential applications of the bound, including to neural architecture search (NAS). Perhaps combining the use of NNGP for NAS https://t.co/U3Fynzrs3E, with ideas of using learning curves for NAS.', 'btw @roydanroy this is the paper i\'ve been mentioning to you several times, which has been ""cooming soon"" for quite a while \n@_vaishnavh I also talked about this paper with u in an email earlier this year\n@jaschasd @TheGregYang may also be interested as we use NNGPs extensively^^']",20,12,2339
273,117,1447618499656966148,3422471637,Elias Kammoun,"#new_paper Check out our paper where we present a new model of the Optical/UV/X-ray spectral energy distribution of AGN, within the context of thermal reverberation. The model adopts an iterative interaction between the disc and the corona () 1/n In this context, part of the X-rays irradiating the accretion disc will be emitted as an X-ray reflection. The other part will be absorbed by the disc and re-emitted in the form of thermal radiation with a certain time delay. 2/n We assume the presence of two coronĂŠ, located above and below the disc, responsible of this radiation. We also assume that below a given transition radius (rtrans) all the accretion disc power is transferred to the corona. The transferred fraction is set by the BH spin. 3/n We explored the effects of various parameter on the SED: spin, accretion rate, corona height, transferred power, inclination, etc. The model conserves the energy and the number of photons in the disc-corona system. Here's an example of the effect of the corona height 4/n Finally, we applied the model to the average broadband SED of NGC 5548. 5/n We combined the results of the SED fitting to the ones from time-lag analysis (). We confirmed that the BH in NGC 5548 is rapidly spinning. The model calculates a posteriori the size of the corona for a given accretion rate, height, and X-ray spectrum 6/n Final remark: this model cannot be applied to any simultaneous Optical/UV/X-ray SED. The observed optical/UV spectra are the response of the X-ray activity hours and days before the time of the observations (depending on the time lags). 7/n Final remark Cont'd:Thus, applying the model to observations that are shorter than the time delays may lead to erroneous results. n/n @lmallick_astro It is indeed.. it was kind of surprising when we found out this.",https://arxiv.org/abs/2110.01249,"We develop a new physical model for the broadband spectral energy distribution (SED) of X-ray illuminated accretion discs, that takes into account the mutual interaction of the accretion disc and the X-ray corona, including all relativistic effects. We assume a Keplerian, optically thick and geometrically thin accretion disc and an X-ray source in the lamp-post geometry that emits an isotropic power-law spectrum with a high-energy cut-off. We assume that all the energy that would be released by thermal radiation in the standard disc model in its innermost part, is transported to the corona, effectively cooling the disc in this region. We include the disc heating due to thermalisation of the absorbed part of the disc illumination by X-ray corona. The X-ray reflection from the disc is also included. We compute the X-ray luminosity and the low-energy X-ray cut-off through an iterative process, taking full account of the interplay between the X-ray illumination of the disc and the resulting accretion disc spectrum which enters the corona so that the energy balance is preserved. The corona radius is also computed from the conservation of the photon's number during Comptonization. We discuss the model SEDs and their dependence on system parameters. The disc-corona interaction has profound effects - it constrains the X-ray luminosity and changes the shape and normalisation of the UV/optical blue bump. We use the new code to fit the broad-band SED of a typical Seyfert 1 galaxy, NGC 5548. We infer a high black-hole spin, an intermediate system inclination, and an accretion rate below 10% of Eddington. 
The X-ray luminosity in this source could be supported by 45-70% of the accretion energy dissipated in the disc. The new model, named KYNSED, is publicly available to be used for fitting AGN SEDs inside the XSPEC spectral analysis tool. ","A physical model for the broadband energy spectrum of X-ray illuminated
accretion discs: fitting the spectral energy distribution of NGC 5548",9,"['#new_paper Check out our paper where we present a new model of the Optical/UV/X-ray spectral energy distribution of AGN, within the context of thermal reverberation. The model adopts an iterative interaction between the disc and the corona () 1/n ', 'In this context, part of the X-rays irradiating the accretion disc will be emitted as an X-ray reflection. The other part will be absorbed by the disc and re-emitted in the form of thermal radiation with a certain time delay. 2/n', 'We assume the presence of two coronĂŠ, located above and below the disc, responsible of this radiation. We also assume that below a given transition radius (rtrans) all the accretion disc power is transferred to the corona. The transferred fraction is set by the BH spin. 3/n https://t.co/Ti4m5EWz1Y', ""We explored the effects of various parameter on the SED: spin, accretion rate, corona height, transferred power, inclination, etc. The model conserves the energy and the number of photons in the disc-corona system. Here's an example of the effect of the corona height 4/n https://t.co/Y0zQWOg83l"", 'Finally, we applied the model to the average broadband SED of NGC 5548. 5/n https://t.co/vwzZnWzRXy', 'We combined the results of the SED fitting to the ones from time-lag analysis (https://t.co/j3tVJZs0eB). We confirmed that the BH in NGC 5548 is rapidly spinning. The model calculates a posteriori the size of the corona for a given accretion rate, height, and X-ray spectrum 6/n https://t.co/pGwuXzPwBr', 'Final remark: this model cannot be applied to any simultaneous Optical/UV/X-ray SED. The observed optical/UV spectra are the response of the X-ray activity hours and days before the time of the observations (depending on the time lags). 7/n', ""Final remark Cont'd:Thus, applying the model to observations that are shorter than the time delays may lead to erroneous results. n/n"", '@lmallick_astro It is indeed.. it was kind of surprising when we found out this.']",21,10,1858
274,53,1039686486881431552,19510090,Julian Togelius,"Say you have an AI method and don't have the resources to test it on all possible benchmark problems. How do you select which games to test it on? In our new paper, we use information theory to describe how to do this. The paper, ""A Continuous Information Gain Measure to Find the Most Discriminatory Problems for AI Benchmarking"", is by @matthew_stephe @DamorinSolusar @Amidos2006 @john_levine Jochen Renz, I and @ChristophSalge The core idea is that you want to test your algorithm on the problems that best separate between existing algorithms, because that's how you can find the most information about your algorithm. Thus information theory. This paper is partly a response to the worrying trend where it is becoming very hard to do AI research without Google-scale resources. For example, you might have GPUs enough to test your new RL algorithm on some ALE games, but not all of them. We tested the methods on the games (and many submitted algorithms) in the @gvgai benchmark set. As a side effect we derived some really nice tables of correlations between GVGAI games, and identified some games that make algorithms behave very differently. But the core idea is applicable to many different types of problems and algorithms, including supervised learning (where there are famously a very large number of benchmark datasets to test on). We hope that this measure can help researchers both doing good science without astronomic resources, and to stop cherry-picking benchmark problems that fit their methods. But for this to happen, there needs to be agreement on which are the most discriminatory problems in a set. @shyamal_chandra @github I guess @DamorinSolusar @matthew_stephe could tell you",https://arxiv.org/abs/1809.02904,"This paper introduces an information-theoretic method for selecting a subset of problems which gives the most information about a group of problem-solving algorithms. This method was tested on the games in the General Video Game AI (GVGAI) framework, allowing us to identify a smaller set of games that still gives a large amount of information about the abilities of different game-playing agents. This approach can be used to make agent testing more efficient. We can achieve almost as good discriminatory accuracy when testing on only a handful of games as when testing on more than a hundred games, something which is often computationally infeasible. Furthermore, this method can be extended to study the dimensions of the effective variance in game design between these games, allowing us to identify which games differentiate between agents in the most complementary ways. ","A Continuous Information Gain Measure to Find the Most Discriminatory
Problems for AI Benchmarking",8,"[""Say you have an AI method and don't have the resources to test it on all possible benchmark problems. How do you select which games to test it on? In our new paper, we use information theory to describe how to do this.\n "", 'The paper, ""A Continuous Information Gain Measure to Find the Most Discriminatory Problems for AI Benchmarking"", is by @matthew_stephe @DamorinSolusar @Amidos2006 @john_levine Jochen Renz, I and @ChristophSalge', ""The core idea is that you want to test your algorithm on the problems that best separate between existing algorithms, because that's how you can find the most information about your algorithm. Thus information theory."", 'This paper is partly a response to the worrying trend where it is becoming very hard to do AI research without Google-scale resources. For example, you might have GPUs enough to test your new RL algorithm on some ALE games, but not all of them.', 'We tested the methods on the games (and many submitted algorithms) in the @gvgai benchmark set. As a side effect we derived some really nice tables of correlations between GVGAI games, and identified some games that make algorithms behave very differently.', 'But the core idea is applicable to many different types of problems and algorithms, including supervised learning (where there are famously a very large number of benchmark datasets to test on).', 'We hope that this measure can help researchers both doing good science without astronomic resources, and to stop cherry-picking benchmark problems that fit their methods. But for this to happen, there needs to be agreement on which are the most discriminatory problems in a set.', '@shyamal_chandra @github I guess @DamorinSolusar @matthew_stephe could tell you']",18,09,1717
275,145,1183752705174626314,48712353,Sungjin Ahn đșđŠ,"Check out our new paper with Jindong Jiang, Sepehr Janghorbani, and @gdm3000. SCALOR for scalable unsupervised sequential object-oriented representation learning via generation. @gdm3000 With moving background @gdm3000 MNIST @gdm3000 Very High Density ",https://arxiv.org/abs/1910.02384,"Scalability in terms of object density in a scene is a primary challenge in unsupervised sequential object-oriented representation learning. Most of the previous models have been shown to work only on scenes with a few objects. In this paper, we propose SCALOR, a probabilistic generative world model for learning SCALable Object-oriented Representation of a video. With the proposed spatially-parallel attention and proposal-rejection mechanisms, SCALOR can deal with orders of magnitude larger numbers of objects compared to the previous state-of-the-art models. Additionally, we introduce a background module that allows SCALOR to model complex dynamic backgrounds as well as many foreground objects in the scene. We demonstrate that SCALOR can deal with crowded scenes containing up to a hundred objects while jointly modeling complex dynamic backgrounds. Importantly, SCALOR is the first unsupervised object representation model shown to work for natural scenes containing several tens of moving objects. ",SCALOR: Generative World Models with Scalable Object Representations,4,"['Check out our new paper with Jindong Jiang, Sepehr Janghorbani, and @gdm3000. SCALOR for scalable unsupervised sequential object-oriented representation learning via generation. \n ', '@gdm3000 With moving background https://t.co/cK7mWuF8yw', '@gdm3000 MNIST https://t.co/raD70vYXG7', '@gdm3000 Very High Density https://t.co/yqH4woPRfF']",19,10,293
276,154,1424992052199186448,1164202716,Magnus Jonsson,"Happy to present our study by Karki et al. in which we demonstrate electrical tuning of plasmonic conducting polymer nanoantennas. The polymeric nanoantennas provide complete and reversible on/off switching as well as possibility for gradual tuning. We utilized the possibility to tune the charge carrier density of conducting polymers by orders of magnitude via their redox state, and could thereby switch the material of the nanoantennas between metallic/plasmonic and dielectric. The system provides convenient electrical control of the nanoantennas, building on our previous demonstration of chemical switching of conducting polymer nanoantennas. #PEDOT #Plasmonics #Metasurfaces ",https://arxiv.org/abs/2108.04045,"Nanostructures of conventional metals offer manipulation of light at the nanoscale but are limited to static behavior due to their fixed material properties. To develop the next frontier of dynamic nanooptics and metasurfaces, we utilize the redox-tunable optical properties of conducting polymers, which were recently shown to be capable of sustaining plasmons in their most conducting oxidized state. Using nanodisks of poly(3,4-ethylenedioxythiophene:sulfate) (PEDOT:Sulf) as a model system, we present the first electrically tunable conducting polymer nanooptical antennas. In addition to repeated on/off switching of the polymeric nanoantennas, we demonstrate the possibility for gradual electrical tuning of their nanooptical response, which was found to be related to the modulation of both density and mobility of the mobile polaronic charge carriers in the polymer. The presented concept takes important steps towards electrically tunable metasurfaces with truly dynamic optical nanoantenna pixels, with not only varying farfield but also tunable nearfield. The work paves the way for applications ranging from tunable flat metaoptics to adaptable smart windows. ",Electrical Tuning of Plasmonic Conducting Polymer Nanoantennas,3,"['Happy to present our study by Karki et al. in which we demonstrate electrical tuning of plasmonic conducting polymer nanoantennas. The polymeric nanoantennas provide complete and reversible on/off switching as well as possibility for gradual tuning. ', 'We utilized the possibility to tune the charge carrier density of conducting polymers by orders of magnitude via their redox state, and could thereby switch the material of the nanoantennas between metallic/plasmonic and dielectric.', 'The system provides convenient electrical control of the nanoantennas, building on our previous demonstration of chemical switching of conducting polymer nanoantennas. #PEDOT #Plasmonics #Metasurfaces\nhttps://t.co/0MVp22eSE6']",21,08,697
277,6,1499761949688668163,14073989,Gustav Markkula,"New preprint / position paper: ""How accurate models of human behavior are needed for human-robot interaction? For automated driving?"" One main takeaway: Focus modelling on those aspects of human behaviour to which interaction outcome (safety, performance, satisfaction) is most sensitive - but finding out which those aspects are is a research question in its own right - needs addressing also for ML models To be presented next week at the ACM/IEEE HRI 2022 Workshop on Modeling Human Behavior in Human-Robot Interactions Really stoked about this workshop overall - come listen if you can! @arashttavakoli Oh glad you liked it! đ",https://arxiv.org/abs/2202.06123,"There are many examples of cases where access to improved models of human behavior and cognition has allowed creation of robots which can better interact with humans, and not least in road vehicle automation this is a rapidly growing area of research. Human-robot interaction (HRI) therefore provides an important applied setting for human behavior modeling - but given the vast complexity of human behavior, how complete and accurate do these models need to be? Here, we outline some possible ways of thinking about this problem, starting from the suggestion that modelers need to keep the right end goal in sight: A successful human-robot interaction, in terms of safety, performance, and human satisfaction. Efforts toward model completeness and accuracy should be focused on those aspects of human behavior to which interaction success is most sensitive. We emphasise that identifying which those aspects are is a difficult scientific objective in its own right, distinct for each given HRI context. We propose and exemplify an approach to formulating a priori hypotheses on this matter, in cases where robots are to be involved in interactions which currently take place between humans, such as in automated driving. Our perspective also highlights some possible risks of overreliance on machine-learned models of human behavior in HRI, and how to mitigate against those risks. ","How accurate models of human behavior are needed for human-robot
interaction? For automated driving?",4,"['New preprint / position paper: ""How accurate models of human behavior are needed for human-robot interaction? For automated driving?""\n\n', 'One main takeaway: Focus modelling on those aspects of human behaviour to which interaction outcome (safety, performance, satisfaction) is most sensitive - but finding out which those aspects are is a research question in its own right - needs addressing also for ML models', 'To be presented next week at the ACM/IEEE HRI 2022 Workshop on Modeling Human Behavior in Human-Robot Interactions\n\nReally stoked about this workshop overall - come listen if you can!\n\nhttps://t.co/hAtpVrjill', '@arashttavakoli Oh glad you liked it! đ']",22,02,644
278,66,1394921931380559872,1238481001304686594,Pablo Martínez-Miravé,"New paper today! With @MariamTortola @spastorcarpi @PFdeSalas and Stefano Gariazzo ""Cosmological radiation density with non-standard neutrino-electron interactions"" We study how NSI with electrons alter the picture of neutrino decoupling We address the variation on the effective number of neutrinos in the presence of NSI, including the effect in oscillations, annihilation and scattering between neutrinos and electrons and positrons. We also show that future cosmological data would complement terrestrial experiments (and even provide competitive constraints in some of the couplings)! And most importantly, I really enjoyed learning and working with these great collaborators! đđđđđ",https://arxiv.org/abs/2105.08168,"Neutrino non-standard interactions (NSI) with electrons are known to alter the picture of neutrino decoupling from the cosmic plasma. NSI modify both flavour oscillations through matter effects, and the annihilation and scattering between neutrinos and electrons and positrons in the thermal plasma. In view of the forthcoming cosmological observations, we perform a precision study of the impact of non-universal and flavour-changing NSI on the effective number of neutrinos, $N_{eff}$. We present the variation of $N_{eff}$ arising from the different NSI parameters and discuss the existing degeneracies among them, from cosmology alone and in relation to the current bounds from terrestrial experiments. Even though cosmology is generally less sensitive to NSI than these experiments, we find that future cosmological data would provide competitive and complementary constraints for some of the couplings and their combinations. ","Cosmological radiation density with non-standard neutrino-electron
interactions",3,"['New paper today! With @MariamTortola @spastorcarpi @PFdeSalas and Stefano Gariazzo\n\n""Cosmological radiation density with non-standard neutrino-electron interactions""\n\n\n\nWe study how NSI with electrons alter the picture of neutrino decoupling ', 'We address the variation on the effective number of neutrinos in the presence of NSI, including the effect in oscillations, annihilation and scattering between neutrinos and electrons and positrons. https://t.co/41Ff3ttmZC', 'We also show that future cosmological data would complement terrestrial experiments (and even provide competitive constraints in some of the couplings)!\n\nAnd most importantly, I really enjoyed learning and working with these great collaborators!\n\n đđđđđ']",21,05,709
279,56,1161830953536425984,41848191,Lesandro Ponciano,"In the new paper ""Characterising Volunteers' Task Execution Patterns Across Projects on Multi-Project Citizen Science Platforms"", @thiagomanel and I use GQM and SIM to examine how citizen scientists engage across science projects. #citsci #hcomp Preprint: ",https://arxiv.org/abs/1908.01344,"Citizen science projects engage people in activities that are part of a scientific research effort. On multi-project citizen science platforms, scientists can create projects consisting of tasks. Volunteers, in turn, participate in executing the project's tasks. Such type of platforms seeks to connect volunteers and scientists' projects, adding value to both. However, little is known about volunteer's cross-project engagement patterns and the benefits of such patterns for scientists and volunteers. This work proposes a Goal, Question, and Metric (GQM) approach to analyse volunteers' cross-project task execution patterns and employs the Semiotic Inspection Method (SIM) to analyse the communicability of the platform's cross-project features. In doing so, it investigates what are the features of platforms to foster volunteers' cross-project engagement, to what extent multi-project platforms facilitate the attraction of volunteers to perform tasks in new projects, and to what extent multi-project participation increases engagement on the platforms. Results from analyses on real platforms show that volunteers tend to explore multiple projects, but they perform tasks regularly in just a few of them; few projects attract much attention from volunteers; volunteers recruited from other projects on the platform tend to get more engaged than those recruited outside the platform. System inspection shows that platforms still lack personalised and explainable recommendations of projects and tasks. The findings are translated into useful claims about how to design and manage multi-project platforms. ","Characterising Volunteers' Task Execution Patterns Across Projects on
Multi-Project Citizen Science Platforms",1,"['In the new paper ""Characterising Volunteers\' Task Execution Patterns Across Projects on Multi-Project Citizen Science Platforms"", @thiagomanel and I use GQM and SIM to examine how citizen scientists engage across science projects. #citsci #hcomp\n\nPreprint: ']",19,08,269
280,34,1277480616578166784,2563532985,Prof Anna Watts,"New paper! ""Deep model simulation of polar vortices in gas giant atmospheres"", by Garcia, Chambers & Watts. We've been working on convection in neutron star oceans (rapid rotation, pattern formation etc). Turns out some of our findings may be relevant to the formation of coherent polar cyclonic vortices on gas giants, like those seen on Saturn and Jupiter. Quite an excursion from my normal research track! But it's been really fun. And a fluid is a fluid is a fluid..... @ThomsLrn Thanks! Equatorially asymmetric waves develop from the onset in the selected regime (see also our 2019 PRF paper) which may explain the lack of symmetry in our models. Opposite symmetry is also possible, as equations are invariant with respect to reflections in equatorial plane. Plus the figures are gorgeous. ",https://arxiv.org/abs/2006.14817,"The Cassini and Juno probes have revealed large coherent cyclonic vortices in the polar regions of Saturn and Jupiter, a dramatic contrast from the east-west banded jet structure seen at lower latitudes. Debate has centered on whether the jets are shallow, or extend to greater depths in the planetary envelope. Recent experiments and observations have demonstrated the relevance of deep convection models to a successful explanation of jet structure and cyclonic coherent vortices away from the polar regions have been simulated recently including an additional stratified shallow layer. Here we present new convective models able to produce long-lived polar vortices. Using simulation parameters relevant for giant planet atmospheres we find flow regimes that are in agreement with geostrophic turbulence (GT) theory in rotating convection for the formation of large scale coherent structures via an upscale energy transfer fully three-dimensional. Our simulations generate polar characteristics qualitatively similar to those seen by Juno and Cassini: they match the structure of cyclonic vortices seen on Jupiter; or can account for the existence of a strong polar vortex extending downwards to lower latitudes with a marked spiral morphology and the hexagonal pattern seen on Saturn. Our findings indicate that these vortices can be generated deep in the planetary interior. A transition differentiating these two polar flows regimes is described, interpreted in terms of different force balances and compared with previous shallow atmospheric models which characterised polar vortex dynamics in giant planets. In addition, the heat transport properties are investigated confirming recent scaling laws obtained in the context of reduced models of GT. ",Deep model simulation of polar vortices in gas giant atmospheres,5,"['New paper! ""Deep model simulation of polar vortices in gas giant atmospheres"", by Garcia, Chambers & Watts. ', ""We've been working on convection in neutron star oceans (rapid rotation, pattern formation etc). Turns out some of our findings may be relevant to the formation of coherent polar cyclonic vortices on gas giants, like those seen on Saturn and Jupiter."", ""Quite an excursion from my normal research track! But it's been really fun. And a fluid is a fluid is a fluid....."", '@ThomsLrn Thanks! Equatorially asymmetric waves develop from the\nonset in the selected regime (see also our 2019 PRF paper) which may explain the lack of symmetry in our models. Opposite symmetry is also possible, as equations are invariant with respect to reflections in equatorial plane.', 'Plus the figures are gorgeous. 
https://t.co/PeAYAVgPif']",20,06,808
281,43,1364114742160351232,1392935011,Ole-Chr. Granmo,"New paper from @cairuia's talented Rupsa Saha, with co-authors @mortengoodwin and @vizadorozhny! She has designed the first Relational #TsetlinMachine, which reasons with relations, variables, and constants. @uiagder #ML #MachineLearning #AI #NLP ",https://arxiv.org/abs/2102.10952,"TMs are a pattern recognition approach that uses finite state machines for learning and propositional logic to represent patterns. In addition to being natively interpretable, they have provided competitive accuracy for various tasks. In this paper, we increase the computing power of TMs by proposing a first-order logic-based framework with Herbrand semantics. The resulting TM is relational and can take advantage of logical structures appearing in natural language, to learn rules that represent how actions and consequences are related in the real world. The outcome is a logic program of Horn clauses, bringing in a structured view of unstructured data. In closed-domain question-answering, the first-order representation produces 10x more compact KBs, along with an increase in answering accuracy from 94.83% to 99.48%. The approach is further robust towards erroneous, missing, and superfluous information, distilling the aspects of a text that are important for real-world understanding. ","A Relational Tsetlin Machine with Applications to Natural Language
Understanding",1,"[""New paper from @cairuia's talented Rupsa Saha, with co-authors @mortengoodwin and @vizadorozhny! She has designed the first Relational #TsetlinMachine, which reasons with relations, variables, and constants. @uiagder #ML #MachineLearning #AI #NLP ""]",21,02,260
282,69,1440428521118048256,3327515352,M.P. Ross,"New paper out today by my colleague Erik Shaw! Places new limits on ultra-light dark matter using a torsion balance. The experiment searched through 491,019 different dark matter masses and was up to two times better than past searches (mass dependent). ",https://arxiv.org/abs/2109.08822,We used a stationary torsion balance with a beryllium-aluminum composition dipole to search for ultra low-mass bosonic dark matter coupled to baryon minus lepton number. We set 95% confidence limits on the coupling constant $g_{\rm B-L}$ for bosons with masses between $10^{-18}$ and $10^{-16}$ eV/$c^2$ with the best performance at $m_{\rm DM} = 8\times 10^{-18}$ eV/$c^2$ constraining $g_{B-L}(\hbar c)^{-1/2} < 1 \times 10^{-25}$. This provides a complimentary limit to equivalence-principle experiments that search for ultra low-mass bosons as force-mediating particles. ,A torsion-balance search for ultra low-mass bosonic dark matter,1,"['New paper out today by my colleague Erik Shaw! Places new limits on ultra-light dark matter using a torsion balance. The experiment searched through 491,019 different dark matter masses and was up to two times better than past searches (mass dependent).\n']",21,09,260
283,178,1397549887521271810,1285216628884602881,Santi Ávila,"Paper Alert! We study the clustering of HI gas as observed by simulated intensity mapping (HI) experiments. We investigate the effect that the telescope beam and the foreground cleaning have on the observed 2-point correlation function (2PCF). For that, we use the UNITsims () coupled to the Semi-Analytic Galaxy Evolution code (presented in ). We obtain HI from the cold gas, following 2 different prescriptions. We study the HI to halo mass relation and total HI abundace. We create IM pixels. The telescope beam is simulated by a Gaussian smoothing on the angular coordinates. The Foreground cleaning algorithms remove any smooth signal with frecuency, resulting in an exponential damping on the radial cosmological signal. We study the observational effects on different types of 2PCF: - Anisotropic (r_per, r_parallel) - mu-wedges - Radial We find that the BAO is still visible for a SKA-like survey! with @CunningtonSD and others",https://arxiv.org/abs/2105.10454,"We study the clustering of HI intensity maps produced from simulations with a focus on baryonic acoustic oscillations (BAO) and the effects induced by telescope beam smoothing and foreground cleaning. We start by creating a HI catalogue at $z=1.321$ based on the Semi-Analytic Galaxy Evolution (SAGE) model applied to the UNIT simulations. With this catalogue we investigate the relation between model HI and the dark matter haloes and we also study the abundance of HI, $\Omega_{\rm HI}$, predicted by this model. We then create synthetic HI intensity maps with a Nearest-Grid-Point approach. In order to simulate the telescope beam effect, a Gaussian smoothing is applied on the plane perpendicular to the line of sight. The effect of foreground removal methods is simulated by exponentially damping the largest wavelength Fourier modes on the radial direction. We study the anisotropic 2-point correlation function (2PCF) $\xi(r_\perp,r_\parallel)$ and how it is affected by the aforementioned observational effects. In order to better isolate the BAO signal, we study several 2PCF $\mu$-wedges (with a restricted range of orientations $\mu$) tailored to address the systematics effects and we compare them with different definitions of radial 2PCFs. Finally, we discuss our findings in the context of an SKA-like survey, finding a clear BAO signal in most of the estimators here proposed. ","HI intensity mapping correlation function from UNIT simulations: BAO and
observationally induced anisotropy",5,"['Paper Alert! \n\nWe study the clustering of HI gas as observed by simulated intensity mapping (HI) experiments. We investigate the effect that the telescope beam and the foreground cleaning have on the observed 2-point correlation function (2PCF). ', 'For that, we use the UNITsims (https://t.co/teSuxE3yKd) coupled to the Semi-Analytic Galaxy Evolution code (presented in https://t.co/9Axs4ijsCR). \n\nWe obtain HI from the cold gas, following 2 different prescriptions. We study the HI to halo mass relation and total HI abundace.', 'We create IM pixels. \nThe telescope beam is simulated by a Gaussian smoothing on the angular coordinates. \nThe Foreground cleaning algorithms remove any smooth signal with frecuency, resulting in an exponential damping on the radial cosmological signal.', 'We study the observational effects on different types of 2PCF: \n- Anisotropic (r_per, r_parallel)\n- mu-wedges\n- Radial\n\nWe find that the BAO is still visible for a SKA-like survey!', 'with @CunningtonSD and others']",21,05,962
284,3,1402382668910796810,20593623,Maggie Makar,"[1/n] New arXiv paper alert! ML models can take shortcuts, which makes them perform poorly under distribution shift. We used some ideas from causality to get models to use the right signal. W @davisblalock, @packer_ben, Yoni Halpern, & @alexdamour. 🧵 [2/n] Example (based on work by @shiorisagawa, @pangweikoh, et al ): Train a ResNet50 to classify water and land birds using a training distribution where most land birds appear on land backgrounds and vice versa. [3/n] The bird in the foreground is sufficient to classify the images perfectly, but the model learns to use the background as a shortcut. If the foreground/background correlation changes at test time, performance deteriorates. [4/n] Addressing this problem automatically is hard. But if you have labels for the shortcut feature at training time, we show that you can enforce certain causal invariances in the model's representation, using weights and an invariance penalty (based on MMD). It works! [5/n] Importantly, the penalty reduces the complexity of the model's function space without inducing bias (since we know the causal invariances are true). In a simple linear case, the invariance penalty limits the weights of the “shortcut” factors [6/n] Thus, even when there is no shortcut, the method is more data efficient, achieving better performance on balanced data. [7/n] We're very excited about this work, but it's just the beginning. Lots of opportunities to use causal formalism to import domain knowledge into ML to improve robustness, efficiency, and trustworthiness! Code coming soon! [7.5/n] work also with Dan Moldovan! #errata",https://arxiv.org/abs/2105.06422,"Shortcut learning, in which models make use of easy-to-represent but unstable associations, is a major failure mode for robust machine learning. We study a flexible, causally-motivated approach to training robust predictors by discouraging the use of specific shortcuts, focusing on a common setting where a robust predictor could achieve optimal \emph{iid} generalization in principle, but is overshadowed by a shortcut predictor in practice. Our approach uses auxiliary labels, typically available at training time, to enforce conditional independences implied by the causal graph. We show both theoretically and empirically that causally-motivated regularization schemes (a) lead to more robust estimators that generalize well under distribution shift, and (b) have better finite sample efficiency compared to usual regularization schemes, even when no shortcut is present. Our analysis highlights important theoretical properties of training techniques commonly used in the causal inference, fairness, and disentanglement literatures. Our code is available at this https URL ",Causally motivated Shortcut Removal Using Auxiliary Labels,8,"['[1/n] New arXiv paper alert! ML models can take shortcuts, which makes them perform poorly under distribution shift. We used some ideas from causality to get models to use the right signal. W @davisblalock, @packer_ben, Yoni Halpern, & @alexdamour. 🧵 ', '[2/n] Example (based on work by @shiorisagawa, @pangweikoh, et al https://t.co/DwBES7bvzO): Train a ResNet50 to classify water and land birds using a training distribution where most land birds appear on land backgrounds and vice versa. https://t.co/yoinuq4FnG', '[3/n] The bird in the foreground is sufficient to classify the images perfectly, but the model learns to use the background as a shortcut. 
If the foreground/background correlation changes at test time, performance deteriorates. https://t.co/ux5kRGdoA0', ""[4/n] Addressing this problem automatically is hard. But if you have labels for the shortcut feature at training time, we show that you can enforce certain causal invariances in the model's representation, using weights and an invariance penalty (based on MMD). It works! https://t.co/p3JHWQCXWq"", ""[5/n] Importantly, the penalty reduces the complexity of the model's function space without inducing bias (since we know the causal invariances are true). In a simple linear case, the invariance penalty limits the weights of the “shortcut” factors https://t.co/jBCqg3lYbd"", '[6/n] Thus, even when there is no shortcut, the method is more data efficient, achieving better performance on balanced data. https://t.co/Q3lPPrPGD3', ""[7/n] We're very excited about this work, but it's just the beginning. Lots of opportunities to use causal formalism to import domain knowledge into ML to improve robustness, efficiency, and trustworthiness! Code coming soon!"", '[7.5/n] work also with Dan Moldovan! #errata']",21,05,1664
285,89,1285270207402061827,53464710,Eric Wong,"1/ New paper on learning perturbation sets for robust machine learning! We study how to characterize real world perturbations in a well-defined set. Paper: Blog post: Code: Joint work with @zicokolter 2/ We define a learned perturbation set over an Lp ball in the latent space of a generator, which uses a latent vector to perturb an example, and is trained on pairs of perturbed examples. The generator captures complex perturbations, and is well-defined over the latent Lp ball. 3/ You may be (rightfully) suspicious of a perturbation set defined by a generative model learned from data. We define concrete, measurable properties of a ""good"" perturbation set, in order to properly evaluate the quality of perturbation sets learned from data. 4/ To learn the generator, we use the conditional variational autoencoder framework. We theoretically prove that training the CVAE objective results in a perturbation set that satisfies these good properties, resulting in a principled approach for learning perturbation sets. 5/ We can now easily leverage methods from Lp robustness to learn robustness to real-world effects captured by a learned perturbation set: simply run Lp approaches in the latent space of the generator! This also gives another reason to care about methods for Lp robustness. 6/ We learn a perturbation set that captures common image corruptions, and another perturbation set that captures lighting changes for scenes in the wild. Common corruptions: Multi-Illumination dataset: @DanHendrycks @murmurmann 7/ We can then train models which are adversarially robust to common corruptions and lighting changes, using PGD adversarial training and randomized smoothing. This results in empirically and certifiably robust models to real-world perturbations. 8/ Finally, models trained with a meaningful learned perturbation set can have non-adversarial benefits as well. For example, for CIFAR10 common corruptions, we can get improved average-case corrupted performance over directly training on the corrupted examples.",https://arxiv.org/abs/2007.08450,"Although much progress has been made towards robust deep learning, a significant gap in robustness remains between real-world perturbations and more narrowly defined sets typically studied in adversarial defenses. In this paper, we aim to bridge this gap by learning perturbation sets from data, in order to characterize real-world effects for robust training and evaluation. Specifically, we use a conditional generator that defines the perturbation set over a constrained region of the latent space. We formulate desirable properties that measure the quality of a learned perturbation set, and theoretically prove that a conditional variational autoencoder naturally satisfies these criteria. Using this framework, our approach can generate a variety of perturbations at different complexities and scales, ranging from baseline spatial transformations, through common image corruptions, to lighting variations. We measure the quality of our learned perturbation sets both quantitatively and qualitatively, finding that our models are capable of producing a diverse set of meaningful perturbations beyond the limited data seen during training. Finally, we leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations, while improving generalization on non-adversarial data. 
All code and configuration files for reproducing the experiments as well as pretrained model weights can be found at this https URL ",Learning perturbation sets for robust machine learning,8,"['1/ New paper on learning perturbation sets for robust machine learning! We study how to characterize real world perturbations in a well-defined set. \n\nPaper: \nBlog post: \nCode: \n\nJoint work with @zicokolter ', '2/ We define a learned perturbation set over an Lp ball in the latent space of a generator, which uses a latent vector to perturb an example, and is trained on pairs of perturbed examples.\n\nThe generator captures complex perturbations, and is well-defined over the latent Lp ball.', '3/ You may be (rightfully) suspicious of a perturbation set defined by a generative model learned from data. \n\nWe define concrete, measurable properties of a ""good"" perturbation set, in order to properly evaluate the quality of perturbation sets learned from data.', '4/ To learn the generator, we use the conditional variational autoencoder framework. \n\nWe theoretically prove that training the CVAE objective results in a perturbation set that satisfies these good properties, resulting in a principled approach for learning perturbation sets.', '5/ We can now easily leverage methods from Lp robustness to learn robustness to real-world effects captured by a learned perturbation set: simply run Lp approaches in the latent space of the generator! \n\nThis also gives another reason to care about methods for Lp robustness.', '6/ We learn a perturbation set that captures common image corruptions, and another perturbation set that captures lighting changes for scenes in the wild. \n\nCommon corruptions: https://t.co/pbSFBbLYdB\nMulti-Illumination dataset: https://t.co/xbwcA8SFSP\n\n@DanHendrycks @murmurmann', '7/ We can then train models which are adversarially robust to common corruptions and lighting changes, using PGD adversarial training and randomized smoothing. \n\nThis results in empirically and certifiably robust models to real-world perturbations.', '8/ Finally, models trained with a meaningful learned perturbation set can have non-adversarial benefits as well. \n\nFor example, for CIFAR10 common corruptions, we can get improved average-case corrupted performance over directly training on the corrupted examples.']",20,07,2081
286,58,1193946899142905857,3257473185,Marcus D. R. Klarqvist âȘ,"New paper by me, @pshufb, @lemire describing the novel positional population count operation using SIMD instructions. a) The classic popcount operation computes the number of set bits in a machine word b) The positional popcount operation computes the bitwise histogram of an input stream of words Associated code with paper Our best approach uses up to 400 times fewer instructions and is up to 50 times faster than baseline code using only regular (non-SIMD) instructions.",https://arxiv.org/abs/1911.02696,"In several fields such as statistics, machine learning, and bioinformatics, categorical variables are frequently represented as one-hot encoded vectors. For example, given 8 distinct values, we map each value to a byte where only a single bit has been set. We are motivated to quickly compute statistics over such encodings. Given a stream of k-bit words, we seek to compute k distinct sums corresponding to bit values at indexes 0, 1, 2, ..., k-1. If the k-bit words are one-hot encoded then the sums correspond to a frequency histogram. This multiple-sum problem is a generalization of the population-count problem where we seek the sum of all bit values. Accordingly, we refer to the multiple-sum problem as a positional population-count. Using SIMD (Single Instruction, Multiple Data) instructions from recent Intel processors, we describe algorithms for computing the 16-bit position population count using less than half of a CPU cycle per 16-bit word. Our best approach uses up to 400 times fewer instructions and is up to 50 times faster than baseline code using only regular (non-SIMD) instructions, for sufficiently large inputs. ","Efficient Computation of Positional Population Counts Using SIMD
Instructions",4,"['New paper by me, @pshufb, @lemire describing the novel positional population count operation using SIMD instructions. ', 'a) The classic popcount operation computes the number of set bits in a machine word\nb) The positional popcount operation computes the bitwise histogram of an input stream of words https://t.co/SPdSg93qcS', 'Associated code with paper\nhttps://t.co/oJMHOsB6QO', 'Our best approach uses up to 400 times fewer instructions and is up to 50 times faster than baseline code using only regular (non-SIMD) instructions.']",19,11,495
287,142,1224611468517105666,1570476014,Yi-Hsuan Yang,"""Pop Music Transformer: Generating Music with Rhythm and Harmony"" \paper: \demo: \code: ```We propose a new event representation of music that make it easy for models to count the beats``` #TaiwanAILabs @chrisdonahuey Thanks! The tracker works fairly well for pop music. We haven't tried other genres yet.",https://arxiv.org/abs/2002.00212,"A great number of deep learning based models have been recently proposed for automatic music composition. Among these models, the Transformer stands out as a prominent approach for generating expressive classical piano performance with a coherent structure of up to one minute. The model is powerful in that it learns abstractions of data on its own, without much human-imposed domain knowledge or constraints. In contrast with this general approach, this paper shows that Transformers can do even better for music modeling, when we improve the way a musical score is converted into the data fed to a Transformer model. In particular, we seek to impose a metrical structure in the input data, so that Transformers can be more easily aware of the beat-bar-phrase hierarchical structure in music. The new data representation maintains the flexibility of local tempo changes, and provides hurdles to control the rhythmic and harmonic structure of music. With this approach, we build a Pop Music Transformer that composes Pop piano music with better rhythmic structure than existing Transformer models. ","Pop Music Transformer: Beat-based Modeling and Generation of Expressive
Pop Piano Compositions",2,"['""Pop Music Transformer: Generating Music with Rhythm and Harmony"" \n\\paper: \n\\demo: \n\\code: \n\n```We propose a new event representation of music that make it easy for models to count the beats```\n#TaiwanAILabs', ""@chrisdonahuey Thanks! The tracker works fairly well for pop music. We haven't tried other genres yet.""]",20,02,327
288,121,1215314320189317120,2587793395,Jeannette (Jamie) Garcia,"New study up on arXiv this week! Check out our results on quantum chemistry calculations for salts formed during operation in lithium-sulfur batteries with @Daimler. Here we demonstrate a dipole moment calculation using #IBMQ hardware: . @Daimler Also, check out my corresponding blog discussing our study and why we’re exploring chemistry with quantum computers: .",https://arxiv.org/abs/2001.01120,"Quantum chemistry simulations of some industrially relevant molecules are reported, employing variational quantum algorithms for near-term quantum devices. The energies and dipole moments are calculated along the dissociation curves for lithium hydride (LiH), hydrogen sulfide, lithium hydrogen sulfide and lithium sulfide. In all cases we focus on the breaking of a single bond, to obtain information about the stability of the molecular species being investigated. We calculate energies and a variety of electrostatic properties of these molecules using classical simulators of quantum devices, with up to 21 qubits for lithium sulfide. Moreover, we calculate the ground-state energy and dipole moment along the dissociation pathway of LiH using IBM quantum devices. This is the first example, to the best of our knowledge, of dipole moment calculations being performed on quantum hardware. ","Quantum Chemistry Simulations of Dominant Products in Lithium-Sulfur
Batteries",2,"['New study up on arXiv this week! Check out our results on quantum chemistry calculations for salts formed during operation in lithium-sulfur batteries with @Daimler. Here we demonstrate a dipole moment calculation using #IBMQ hardware: .', '@Daimler Also, check out my corresponding blog discussing our study and why we’re exploring chemistry with quantum computers: https://t.co/JjeNqPeeu6.']",20,01,377
289,27,750232123169210368,51169895,Gianluca Stringhini,"New paper (that will be presented at CSET) ""Honey Sheets: What Happens To Leaked Google Spreadsheets?"" @ak1010 indeed. This paper mostly presents a framework, it can be used for many possible scenarios @ak1010 what do you mean with change target? @ak1010 very cool :) @phretor not too much happened actually :P @phretor and for an undergrad thesis, definitely ;)",https://arxiv.org/abs/1607.00801,"Cloud-based documents are inherently valuable, due to the volume and nature of sensitive personal and business content stored in them. Despite the importance of such documents to Internet users, there are still large gaps in the understanding of what cybercriminals do when they illicitly get access to them by for example compromising the account credentials they are associated with. In this paper, we present a system able to monitor user activity on Google spreadsheets. We populated 5 Google spreadsheets with fake bank account details and fake funds transfer links. Each spreadsheet was configured to report details of accesses and clicks on links back to us. To study how people interact with these spreadsheets in case they are leaked, we posted unique links pointing to the spreadsheets on a popular paste site. We then monitored activity in the accounts for 72 days, and observed 165 accesses in total. We were able to observe interesting modifications to these spreadsheets performed by illicit accesses. For instance, we observed deletion of some fake bank account information, in addition to insults and warnings that some visitors entered in some of the spreadsheets. Our preliminary results show that our system can be used to shed light on cybercriminal behavior with regards to leaked online documents. ",Honey Sheets: What Happens to Leaked Google Spreadsheets?,6,"['New paper (that will be presented at CSET) ""Honey Sheets: What Happens To Leaked Google Spreadsheets?"" ', '@ak1010 indeed. This paper mostly presents a framework, it can be used for many possible scenarios', '@ak1010 what do you mean with change target?', '@ak1010 very cool :)', '@phretor not too much happened actually :P', '@phretor and for an undergrad thesis, definitely ;)']",16,07,369
290,88,1095969960625610752,607831012,Hoda Heidari,"Have you ever wondered ""out of the growing list of mathematical formulations of fairness, which one does most closely resemble humans' perception of fairness in a given domain?"" Check out our new paper---just posted on ArXiv: @eredmil1 Thanks a lot Elissa! :) We took a lot of inspiration from your work on human perception of fairness for feature selection.",https://arxiv.org/abs/1902.04783,"Fairness for Machine Learning has received considerable attention, recently. Various mathematical formulations of fairness have been proposed, and it has been shown that it is impossible to satisfy all of them simultaneously. The literature so far has dealt with these impossibility results by quantifying the tradeoffs between different formulations of fairness. Our work takes a different perspective on this issue. Rather than requiring all notions of fairness to (partially) hold at the same time, we ask which one of them is the most appropriate given the societal domain in which the decision-making model is to be deployed. We take a descriptive approach and set out to identify the notion of fairness that best captures \emph{lay people's perception of fairness}. We run adaptive experiments designed to pinpoint the most compatible notion of fairness with each participant's choices through a small number of tests. Perhaps surprisingly, we find that the most simplistic mathematical definition of fairness---namely, demographic parity---most closely matches people's idea of fairness in two distinct application scenarios. This conclusion remains intact even when we explicitly tell the participants about the alternative, more complicated definitions of fairness, and we reduce the cognitive burden of evaluating those notions for them. Our findings have important implications for the Fair ML literature and the discourse on formalizing algorithmic fairness. ","Mathematical Notions vs. Human Perception of Fairness: A Descriptive
Approach to Fairness for Machine Learning",2,"['Have you ever wondered ""out of the growing list of mathematical formulations of fairness, which one does most closely resemble humans\' perception of fairness in a given domain?"" Check out our new paper---just posted on ArXiv: ', '@eredmil1 Thanks a lot Elissa! :) We took a lot of inspiration from your work on human perception of fairness for feature selection.']",19,02,365
291,187,1292715732061626368,786855300322172928,Alkistis Pourtsidou,"Paper alert (with a delay due to holiday season!) -- in led by @CunningtonSD we studied the degeneracy between primordial non-gaussianity (PNG) and foreground removal systematics for intensity mapping experiments [thread]. Foreground removal methods remove some signal (unless one is very conservative, but then the error budget becomes much larger due to the residuals), especially on the large scales, where PNG signatures are expected to lie! With simulated data and MCMC we studied the effects of this on the precision accuracy with which we can probe the f_NL local parameter with a large SKA1-MID @SKA_telescope survey, using FG removal methods that are currently used in real data analyses. If we ignore the possibility of FG removal effects on the signal, we get *extremely biased* (wrong) estimates of f_NL, as shown in the purple contour where our fiducial f_NL = 0. The other contours correspond to the unrealistic cases where FG removal is 0 or perfectly known. To add some realism, we devised a model with 1 free, nuisance parameter, which is marginalised over to account for FG removal properly. We found that it works well, and that we can recover unbiased estimates (but the f_NL uncertainties increase a lot!). This result has implications also for cross-correlations and multi-tracer methods, and it means that further work is required to get the precision we need to be competitive. We are thinking about solutions -- more, hopefully, soon! As an aside, note that we have made suites of simulated data and power spectrum and MCMC codes available at our group's repository -- @CunningtonSD @psahds also see ",https://arxiv.org/abs/2007.12126,"Potential evidence for primordial non-Gaussianity (PNG) is expected to lie in the largest scales mapped by cosmological surveys. Forthcoming 21cm intensity mapping experiments will aim to probe these scales by surveying neutral hydrogen (HI) within galaxies. However, foreground signals dominate the faint 21cm emission, meaning foreground cleaning is required to recover the cosmological signal. The effect this has is to damp the HI power spectrum on the largest scales, especially along the line-of-sight. Whilst there is agreement that this contamination is potentially problematic for probing PNG, it is yet to be fully explored and quantified. In this work we carry out the first forecasts on $f_\text{NL}$ that incorporate simulated foreground maps that are removed using techniques employed in real data. Using an MCMC analysis, we demonstrate that foreground cleaned data recovers hugely biased values ($f_\text{NL} = -102.1_{-7.96}^{+8.39}$ [68% CL]) on our $f_\text{NL}=0$ fiducial input. Introducing a model with fixed parameters for the foreground contamination allows us to recover unbiased results ($f_\text{NL} = -2.94_{-11.9}^{+11.4}$). However, it is not clear that we will have sufficient understanding of foreground contamination to allow for such rigid models. Treating the main parameter $k_\parallel^\text{FG}$ in our foreground model as a nuisance parameter and marginalizing over it, still recovers unbiased results but at the expense of much larger errors ($f_\text{NL} = 0.75^{+40.2}_{-44.5}$), that can only be reduced by imposing the Planck 2018 prior. Our results show that significant progress on understanding and controlling foreground removal effects is necessary in order to study PNG with HI intensity mapping. ","The degeneracy between primordial non-Gaussianity and foregrounds in
21cm intensity mapping experiments",7,"['Paper alert (with a delay due to holiday season!) -- in led by @CunningtonSD we studied the degeneracy between primordial non-gaussianity (PNG) and foreground removal systematics for intensity mapping experiments [thread].', 'Foreground removal methods remove some signal (unless one is very conservative, but then the error budget becomes much larger due to the residuals), especially on the large scales, where PNG signatures are expected to lie!', 'With simulated data and MCMC we studied the effects of this on the precision accuracy with which we can probe the f_NL local parameter with a large SKA1-MID @SKA_telescope survey, using FG removal methods that are currently used in real data analyses.', 'If we ignore the possibility of FG removal effects on the signal, we get *extremely biased* (wrong) estimates of f_NL, as shown in the purple contour where our fiducial f_NL = 0. The other contours correspond to the unrealistic cases where FG removal is 0 or perfectly known. https://t.co/Dugeg7WHzB', 'To add some realism, we devised a model with 1 free, nuisance parameter, which is marginalised over to account for FG removal properly. We found that it works well, and that we can recover unbiased estimates (but the f_NL uncertainties increase a lot!). https://t.co/QIxyRc9uyP', 'This result has implications also for cross-correlations and multi-tracer methods, and it means that further work is required to get the precision we need to be competitive. We are thinking about solutions -- more, hopefully, soon!', ""As an aside, note that we have made suites of simulated data and power spectrum and MCMC codes available at our group's repository https://t.co/FCvccjqwAy -- @CunningtonSD @psahds also see https://t.co/aoXY9jEffy""]",20,07,1659
292,236,1312164325927383040,308306041,Kamal Ndousse,"Big personal milestone! My first ML paper is on arXiv: We propose a simple method that helps RL agents in shared environments learn from one another, and show that the learned social policies improve zero-shot transfer performance in new environments. đ Special thanks to @natashajaques, and to our collaborators @svlevine @douglas_eck! And to @OpenAI -- this work grew out of the Scholars program, which I participated in earlier this year. In the paper, we show that social multi-agent RL can be useful even for non-social tasks! Our SociAPL agents are able to use cues from expert behavior to navigate environments they've never seen before.",https://arxiv.org/abs/2010.00581,"Social learning is a key component of human and animal intelligence. By taking cues from the behavior of experts in their environment, social learners can acquire sophisticated behavior and rapidly adapt to new circumstances. This paper investigates whether independent reinforcement learning (RL) agents in a multi-agent environment can learn to use social learning to improve their performance. We find that in most circumstances, vanilla model-free RL agents do not use social learning. We analyze the reasons for this deficiency, and show that by imposing constraints on the training environment and introducing a model-based auxiliary loss we are able to obtain generalized social learning policies which enable agents to: i) discover complex skills that are not learned from single-agent training, and ii) adapt online to novel environments by taking cues from experts present in the new environment. In contrast, agents trained with model-free RL or imitation learning generalize poorly and do not succeed in the transfer tasks. By mixing multi-agent and solo training, we can obtain agents that use social learning to gain skills that they can deploy when alone, even out-performing agents trained alone from the start. ",Emergent Social Learning via Multi-agent Reinforcement Learning,3,"['Big personal milestone! My first ML paper is on arXiv: \nWe propose a simple method that helps RL agents in shared environments learn from one another, and show that the learned social policies improve zero-shot transfer performance in new environments. đ', 'Special thanks to @natashajaques, and to our collaborators @svlevine @douglas_eck!\nAnd to @OpenAI -- this work grew out of the Scholars program, which I participated in earlier this year.', ""In the paper, we show that social multi-agent RL can be useful even for non-social tasks! Our SociAPL agents are able to use cues from expert behavior to navigate environments they've never seen before.""]",20,10,651
293,127,1141503748323176454,887992016,Luke Metz,"Our work exploring the use of learned optimizers to make more robust image models is on arXiv! We find that in some cases learned optimizers are capable of learning more robustness image classifiers! This was a fun ICML workshop paper using some of the technology we developed in () for new applications! Thanks to my awesome collaborators: @niru_m, Jonathon Shlens, @jaschasd, @ekindogus",https://arxiv.org/abs/1906.03367,"State-of-the art vision models can achieve superhuman performance on image classification tasks when testing and training data come from the same distribution. However, when models are tested on corrupted images (e.g. due to scale changes, translations, or shifts in brightness or contrast), performance degrades significantly. Here, we explore the possibility of meta-training a learned optimizer that can train image classification models such that they are robust to common image corruptions. Specifically, we are interested training models that are more robust to noise distributions not present in the training data. We find that a learned optimizer meta-trained to produce models which are robust to Gaussian noise trains models that are more robust to Gaussian noise at other scales compared to traditional optimizers like Adam. The effect of meta-training is more complicated when targeting a more general set of noise distributions, but led to improved performance on half of held-out corruption tasks. Our results suggest that meta-learning provides a novel approach for studying and improving the robustness of deep learning models. ",Using learned optimizers to make models robust to input noise,2,"['Our work exploring the use of learned optimizers to make more robust image models is on arXiv! We find that in some cases learned optimizers are capable of learning more robustness image classifiers!\n\n ', 'This was a fun ICML workshop paper using some of the technology we developed in (https://t.co/gIiqPsbmQx) for new applications!\nThanks to my awesome collaborators: @niru_m, Jonathon Shlens, @jaschasd, @ekindogus']",19,06,408
294,128,998870539547611136,869896064802934788,Jan Rybizki,#GaiaDR2 precision redetermination of the perihelion parameters for the closest known stellar encounter Gliese710. This is part of our new publication where we find over 20 new encounters that will be closer than 1pc to the sun and assess the completeness. ,http://arxiv.org/abs/1805.07581,"Passing stars may play an important role in the evolution of our solar system. We search for close stellar encounters to the Sun among all 7.2 million stars in Gaia-DR2 that have six-dimensional phase space data. We characterize encounters by integrating their orbits through a Galactic potential and propagating the correlated uncertainties via a Monte Carlo resampling. After filtering to remove spurious data, we find 694 stars that have median (over uncertainties) closest encounter distances within 5 pc, all occurring within 15 Myr from now. 26 of these have at least a 50% chance of coming closer than 1 pc (and 7 within 0.5 pc), all but one of which are newly discovered here. We further confirm some and refute several other previously-identified encounters, confirming suspicions about their data. The closest encounter in the sample is Gl 710, which has a 95% probability of coming closer than 0.08 pc (17 000 AU). Taking mass estimates from Gaia astrometry and multiband photometry for essentially all encounters, we find that Gl 710 also has the largest impulse on the Oort cloud. Using a Galaxy model, we compute the completeness of the Gaia-DR2 encountering sample as a function of perihelion time and distance. Only 15% of encounters within 5 pc occurring within +/- 5 Myr of now have been identified, mostly due to the lack of radial velocities for faint and/or cool stars. Accounting for the incompleteness, we infer the present rate of encounters within 1 pc to be 19.7 +/- 2.2 per Myr, a quantity expected to scale quadratically with the encounter distance out to at least several pc. Spuriously large parallaxes in our sample from imperfect filtering would tend to inflate both the number of encounters found and this inferred rate. The magnitude of this effect is hard to quantify. ",New stellar encounters discovered in the second Gaia data release,1,['#GaiaDR2 precision redetermination of the perihelion parameters for the closest known stellar encounter Gliese710. This is part of our new publication where we find over 20 new encounters that will be closer than 1pc to the sun and assess the completeness. '],18,05,270
295,229,1278980933329260545,1720813753,yappie,"Context Graphs for Legal Reasoning and Argumentation. (arXiv:2007.00732v1 [cs.LO]) We propose a new, structured, logic-based framework for legal reasoning and argumentation: Instead of using a single, unstructured meaning space, theory graphs organize k…",http://arxiv.org/abs/2007.00732,"We propose a new, structured, logic-based framework for legal reasoning and argumentation: Instead of using a single, unstructured meaning space, theory graphs organize knowledge and inference into collections of modular meaning spaces organized by inheritance and interpretation. Context graphs extend theory graphs by attack relations and interpret theories as knowledge contexts of agents in argumentation. We introduce the context graph paradigm by modeling the well-studied case Popov v. Hayashi, concentrating on the role of analogical reasoning in context graphs. ",Context Graphs for Legal Reasoning and Argumentation,1,"['Context Graphs for Legal Reasoning and Argumentation. (arXiv:2007.00732v1 [cs.LO]) \n\nWe propose a new, structured, logic-based framework for legal reasoning and argumentation: Instead of using a single, unstructured meaning space, theory graphs organize k…']",20,07,261
296,79,1426119061671276549,15068044,Daniel Beck,"So we have a new paper, spearheaded by Joe Han, who just finished his Masters under mine and @trevorcohn's supervision. If you're looking into new ways of evaluating diversity in generation you should definitely take a look on what we propose here. (1/n) @trevorcohn This will appear at INLG 2021. We propose a new way to evaluate diversity by grounding it on quality. Our rationale is that a diverse set of generated sentences is only good if it's also of good quality. How to evaluate both jointly? Use multiple references. (2/n) @trevorcohn We assume the ""gold standard"" of diversity is reflected in multiple references. So the goal of diversity is to ""cover"" the reference set. Assuming a sentence-level metric, we turn this into a maximum matching problem. (3/n) @trevorcohn This approach has a number of perks. For instance, a perfect model that generates the reference set exactly will achieve maximum score. But a model that generates N good sentences that are just slight variations of each other will be penalised. (4/n) It is also completely agnostic of whatever sentence-level metric is used, as long as it is bounded. So this can be used for a range of generation tasks. (5/n) One limitation is the reliance on the reference set: if it's not diverse enough then the method will not give preference to more diverse models, even if they give good quality sentences. (6/n) We also don't really test this against an implicit human evaluation of diversity (as quality metrics do). This is something we certainly would love to do in the future. However, it's not clear to us how to even define diversity intrinsically... (7/n) This is why we focus on the reference set. Our intuition is that it gives an ""extrinsic"" measure of diversity. It is not without its drawbacks (as I mention above) but it give us more information compared to other diversity metrics we know about. (8/n) Anyways, check our paper for more info. Happy to discuss more about diversity in generation if you have ideas =). (9/9) PS: yes, we know we messed up the references in the current arXiv version... we will update it shortly... =S",https://arxiv.org/abs/2108.05659,"Text generation from semantic graphs is traditionally performed with deterministic methods, which generate a unique description given an input graph. However, the generation problem admits a range of acceptable textual outputs, exhibiting lexical, syntactic and semantic variation. To address this disconnect, we present two main contributions. First, we propose a stochastic graph-to-text model, incorporating a latent variable in an encoder-decoder model, and its use in an ensemble. Second, to assess the diversity of the generated sentences, we propose a new automatic evaluation metric which jointly evaluates output diversity and quality in a multi-reference setting. We evaluate the models on WebNLG datasets in English and Russian, and show an ensemble of stochastic models produces diverse sets of generated sentences, while retaining similar quality to state-of-the-art models. ",Generating Diverse Descriptions from Semantic Graphs,10,"[""So we have a new paper, spearheaded by Joe Han, who just finished his Masters under mine and @trevorcohn's supervision.\n\nIf you're looking into new ways of evaluating diversity in generation you should definitely take a look on what we propose here. (1/n)\n"", ""@trevorcohn This will appear at INLG 2021.\n\nWe propose a new way to evaluate diversity by grounding it on quality. 
Our rationale is that a diverse set of generated sentences is only good if it's also of good quality.\n\nHow to evaluate both jointly? Use multiple references. (2/n)"", '@trevorcohn We assume the ""gold standard"" of diversity is reflected in multiple references. So the goal of diversity is to ""cover"" the reference set. Assuming a sentence-level metric, we turn this into a maximum matching problem. (3/n)', '@trevorcohn This approach has a number of perks. For instance, a perfect model that generates the reference set exactly will achieve maximum score. But a model that generates N good sentences that are just slight variations of each other will be penalised. (4/n)', 'It is also completely agnostic of whatever sentence-level metric is used, as long as it is bounded. So this can be used for a range of generation tasks. (5/n)', ""One limitation is the reliance on the reference set: if it's not diverse enough then the method will not give preference to more diverse models, even if they give good quality sentences. (6/n)"", ""We also don't really test this against an implicit human evaluation of diversity (as quality metrics do). This is something we certainly would love to do in the future. However, it's not clear to us how to even define diversity intrinsically... (7/n)"", 'This is why we focus on the reference set. Our intuition is that it gives an ""extrinsic"" measure of diversity. It is not without its drawbacks (as I mention above) but it give us more information compared to other diversity metrics we know about. (8/n)', 'Anyways, check our paper for more info. Happy to discuss more about diversity in generation if you have ideas =). (9/9)', 'PS: yes, we know we messed up the references in the current arXiv version... we will update it shortly... =S']",21,08,2122
297,85,1138318389804322816,826924323113807872,Mohammad Rastegari,"Depth-wise Convolution and Point-wise Convolution showed tremendous benefit for efficient architecture design and now we are presenting a new efficient CNN model DiCENet based on Dimension-wise convolution #deeplearning , #ComputerVision paper: ",https://arxiv.org/abs/1906.03516,"We introduce a novel and generic convolutional unit, DiCE unit, that is built using dimension-wise convolutions and dimension-wise fusion. The dimension-wise convolutions apply light-weight convolutional filtering across each dimension of the input tensor while dimension-wise fusion efficiently combines these dimension-wise representations; allowing the DiCE unit to efficiently encode spatial and channel-wise information contained in the input tensor. The DiCE unit is simple and can be seamlessly integrated with any architecture to improve its efficiency and performance. Compared to depth-wise separable convolutions, the DiCE unit shows significant improvements across different architectures. When DiCE units are stacked to build the DiCENet model, we observe significant improvements over state-of-the-art models across various computer vision tasks including image classification, object detection, and semantic segmentation. On the ImageNet dataset, the DiCENet delivers 2-4% higher accuracy than state-of-the-art manually designed models (e.g., MobileNetv2 and ShuffleNetv2). Also, DiCENet generalizes better to tasks (e.g., object detection) that are often used in resource-constrained devices in comparison to state-of-the-art separable convolution-based efficient networks, including neural search-based methods (e.g., MobileNetv3 and MixNet. Our source code in PyTorch is open-source and is available at this https URL ",DiCENet: Dimension-wise Convolutions for Efficient Networks,1,"['Depth-wise Convolution and Point-wise Convolution showed tremendous benefit for efficient architecture design and now we are presenting a new efficient CNN model DiCENet based on Dimension-wise convolution #deeplearning , #ComputerVision \npaper: ']",19,06,258
298,53,1194216713962872832,938560263334387712,Alexandria Volkening,"Uploaded a new paper on stripe patterns on growing fish fins: (in collaboration with @BjornSandstede and undergraduate students MR Abbott, D Catey, N Chandra, B Dubois, & F Lim). And I got to include a pun at the tail end đ. #zebrafish #tailfins #soPunny ",https://arxiv.org/abs/1911.03758,"As zebrafish develop, black and gold stripes form across their skin due to the interactions of brightly colored pigment cells. These characteristic patterns emerge on the growing fish body, as well as on the anal and caudal fins. While wild-type stripes form parallel to a horizontal marker on the body, patterns on the tailfin gradually extend distally outward. Interestingly, several mutations lead to altered body patterns without affecting fin stripes. Through an exploratory modeling approach, our goal is to help better understand these differences between body and fin patterns. By adapting a prior agent-based model of cell interactions on the fish body, we present an in silico study of stripe development on tailfins. Our main result is a demonstration that two cell types can produce stripes on the caudal fin. We highlight several ways that bone rays, growth, and the body-fin interface may be involved in patterning, and we raise questions for future work related to pattern robustness. ",Modeling stripe formation on growing zebrafish tailfins,1,"['Uploaded a new paper on stripe patterns on growing fish fins: (in collaboration with @BjornSandstede and undergraduate students MR Abbott, D Catey, N Chandra, B Dubois, & F Lim). And I got to include a pun at the tail end đ. #zebrafish #tailfins #soPunny ']",19,11,268
299,67,1285935591230767105,1064262393981820928,Benedikt Diemer,"Paper day #2: Based on the new catalogs from yesterday, we look at the mass functions of splashback and SO masses. The splashback MF turns out to be remarkably universal! The image compares the MF for totally different cosmologies and redshifts. A little background: the first theoretical model of the mass function, Press & Schechter 74, assumed that halos collapse when their progenitor peaks exceed a critical overdensity, delta_c. If delta_c is universal (not dependent on z and cosmology), then the MF is universal too. There are good reasons to believe this should not be the case! Nevertheless, ever since, people have argued about universality. One important aspect is that, for a universal MF, the mass definition would need to capture the ""total mass"" of halos in some physical manner. There have been many claims of (non-)universality. In this paper, we show conclusively that SO mass functions are NOT universal for any definition (R500c, R200c, Rvir, R200m etc). The splashback MFs aren't perfectly universal either, but much more so! In particular, they more or less agree even between crazy self-similar universes and LCDM, plus across redshifts in the presence of dark energy. In my mind (but feel free to challenge me on this!), that indicates that splashback masses are a more physical definition; or at least that they come closer to including the entire mass of the halo in the sense of spherical collapse envisioned by theoretical models. Here's the figure with a legend: @profbradgibson Haha yes Sir! I believe you have everything that's gonna come your way, for now ;)",https://arxiv.org/abs/2007.10346,"The mass function of dark matter halos is one of the most fundamental statistics in structure formation. Many theoretical models (such as Press-Schechter theory) are based on the notion that it could be universal, meaning independent of redshift and cosmology, when expressed in the appropriate variables. However, simulations exhibit persistent non-universalities in the mass functions of the virial mass and other commonly used spherical overdensity definitions. We systematically study the universality of mass functions over a wide range of mass definitions, for the first time including the recently proposed splashback mass, Msp. We confirm that, in LambdaCDM cosmologies, all mass definitions exhibit varying levels of non-universality that increase with peak height and reach between 20% and 500% at the highest masses we can test. Mvir, M200m, and Msp exhibit similar levels of non-universality. There are, however, two regimes where the splashback mass functions are significantly more universal. First, they are universal to 10% at z<2, whereas spherical overdensity definitions experience an evolution due to dark energy. Second, when additionally considering self-similar cosmologies with extreme power spectra, splashback mass functions are remarkably universal (to between 40% and 60%) whereas their spherical overdensity counterparts reach non-universalities between 180% and 450%. These results strongly support the notion that the splashback radius is a physically motivated definition of the halo boundary. We present a simple, universal fitting formula for splashback mass functions that accurately reproduces our simulation data. ",Universal at last? The splashback mass function of dark matter halos,8,"['Paper day #2: \n\nBased on the new catalogs from yesterday, we look at the mass functions of splashback and SO masses. 
The splashback MF turns out to be remarkably universal! The image compares the MF for totally different cosmologies and redshifts. ', 'A little background: the first theoretical model of the mass function, Press & Schechter 74, assumed that halos collapse when their progenitor peaks exceed a critical overdensity, delta_c. If delta_c is universal (not dependent on z and cosmology), then the MF is universal too.', 'There are good reasons to believe this should not be the case! Nevertheless, ever since, people have argued about universality. \n\nOne important aspect is that, for a universal MF, the mass definition would need to capture the ""total mass"" of halos in some physical manner.', ""There have been many claims of (non-)universality. In this paper, we show conclusively that SO mass functions are NOT universal for any definition (R500c, R200c, Rvir, R200m etc). The splashback MFs aren't perfectly universal either, but much more so!"", 'In particular, they more or less agree even between crazy self-similar universes and LCDM, plus across redshifts in the presence of dark energy.', 'In my mind (but feel free to challenge me on this!), that indicates that splashback masses are a more physical definition; or at least that they come closer to including the entire mass of the halo in the sense of spherical collapse envisioned by theoretical models.', ""Here's the figure with a legend: https://t.co/Ucvwj4k0Sw"", ""@profbradgibson Haha yes Sir! I believe you have everything that's gonna come your way, for now ;)""]",20,07,1613
300,75,1138760461153755136,923132721760649216,kosukekurosawa,Our paper has been accepted for publication in Geophysical Research Letters. We developed a new experimental method for 2-stage H2 gas guns. Our method allows us to do in-situ analysis of impact-generated vapor with a small risk of chemical contamination. ,https://arxiv.org/abs/1906.03913,"Dry lakebeds might constitute large volatile reservoirs on Mars. Hypervelocity impacts onto ancient dry lakebeds would have affected the volatile distribution on Mars. We developed a new experimental method to investigate the response of evaporitic minerals (halite and gypsum) to impact shocks in an open system. This technique does not result in chemical contamination from the operation of the gas gun. The technique is termed the two-valve method and the gun system is located in the Planetary Exploration Research Center, Chiba Institute of Technology, Japan. We detected the vaporization of halite at 31 GPa and devolatilization from gypsum at 11 GPa, suggesting that impact-induced volatile release from dry lakebeds has periodically occurred throughout Martian history. The vaporization of halite deposits might have enhanced the production of perchlorates, which are found globally on Mars. The water loss from gypsum possibly explains the coexisting types of Ca-sulfates found in Gale Crater. ","Shock vaporization/devolatilization of evaporitic minerals, halite and
gypsum, in an open system investigated by a two-stage light gas gun",1,['Our paper has been accepted for publication in Geophysical Research Letters. We developed a new experimental method for 2-stage H2 gas guns. Our method allows us to do in-situ analysis of impact-generated vapor with a small risk of chemical contamination. '],19,06,262
301,66,1007153130310701056,776765039726460929,Carlo Felice Manara,"Another paper today, lead by the great Feng Long, with @GregHerczeg @ilaria_pascucci @danielapai @GijsMulders myself and others: New deeper observations of ChaI disks with @almaobs lead to new detections and to understanding of properties of fainter disks",https://arxiv.org/abs/1806.04826,"ALMA surveys of nearby star-forming regions have shown that the dust mass in the disk is correlated with the stellar mass, but with a large scatter. This scatter could indicate either different evolutionary paths of disks or different initial conditions within a single cluster. We present ALMA Cycle 3 follow-up observations for 14 Class II disks that were low S/N detections or non-detections in our Cycle 2 survey of the $\sim 2$ Myr-old Chamaeleon I star-forming region. With 5 times better sensitivity, we detect millimeter dust continuum emission from six more sources and increase the detection rate to 94\% (51/54) for Chamaeleon I disks around stars earlier than M3. The stellar-disk mass scaling relation reported in \citet{pascucci2016} is confirmed with these updated measurements. Faint outliers in the $F_{mm}$--$M_*$ plane include three non-detections (CHXR71, CHXR30A, and T54) with dust mass upper limits of 0.2 M$_\oplus$ and three very faint disks (CHXR20, ISO91, and T51) with dust masses $\sim 0.5$ M$_\oplus$. By investigating the SED morphology, accretion property and stellar multiplicity, we suggest for the three millimeter non-detections that tidal interaction by a close companion ($<$100 AU) and internal photoevaporation may play a role in hastening the overall disk evolution. The presence of a disk around only the secondary star in a binary system may explain the observed stellar SEDs and low disk masses for some systems. ","An ALMA Survey of faint disks in the Chamaeleon I star-forming region:
Why are some Class II disks so faint?",1,"['Another paper today, lead by the great Feng Long, with @GregHerczeg @ilaria_pascucci @danielapai @GijsMulders myself and others: \nNew deeper observations of ChaI disks with @almaobs lead to new detections and to understanding of properties of fainter disks']",18,06,262
302,39,1364225144810573824,741547129,Britt Lundgren,"Excited to share this new paper on the arXiv today! ""The Geometry of Cold, Metal-Enriched Gas Around Galaxies at z∼1.2"" If you have 4 minutes, check out the excellent video summary of this paper made for the KITP Halo21 workshop by my co-author and former student, Samantha Creech! Thanks also to Co-I @gbrammer and amazing former @UncAvl students Matthew Peek & Nathan Kirse!",https://arxiv.org/abs/2102.10117,"We present the first results from a Hubble Space Telescope WFC3/IR program, which obtained direct imaging and grism observations of galaxies near quasar sightlines with a high frequency of uncorrelated foreground Mg II absorption. These highly efficient observations targeted 54 Mg II absorbers along the line of sight to nine quasars at $z_{qso}\sim2$. We find that 89% of the absorbers in the range $0.64< z < 1.6$ can be spectroscopically matched to at least one galaxy with an impact parameter less than 200 kpc and $|\Delta z|/(1+z)<0.006$. We have estimated the star formation rates and measured structural parameters for all detected galaxies with impact parameters in the range 7-200 kpc and star formation rates greater than 1.3 M$_{\odot}$ yr$^{-1}$. We find that galaxies associated with Mg II absorption have significantly higher mean star formation rates and marginally higher mean star formation rate surface densities compared to galaxies with no detected Mg II. Nearly half of the Mg II absorbers match to more than one galaxy, and the mean equivalent width of the Mg II absorption is found to be greater for groups, compared to isolated galaxies. Additionally, we observe a significant redshift evolution in the physical extent of Mg II-absorbing gas around galaxies and evidence of an enhancement of Mg II within 50 degrees of the minor axis, characteristic of outflows, which persists to 80 kpc around the galaxies, in agreement with recent predictions from simulations. ","The Geometry of Cold, Metal-Enriched Gas Around Galaxies at $z\sim1.2$",3,"['Excited to share this new paper on the arXiv today! ""The Geometry of Cold, Metal-Enriched Gas Around Galaxies at z∼1.2"" ', 'If you have 4 minutes, check out the excellent video summary of this paper made for the KITP Halo21 workshop by my co-author and former student, Samantha Creech! https://t.co/jyMmE9evLh', 'Thanks also to Co-I @gbrammer and amazing former @UncAvl students Matthew Peek & Nathan Kirse!']",21,02,390
303,20,1143916572924190721,452384386,Sebastien Bubeck,"New paper on black-box complexity of *parallel convex optimization*. We uncover yet again a quadratic acceleration phenomenon: while serial gradient descent is optimal up to depth D (=dim of the problem), for parallel it's true only up to depth sqrt(D)!!! Prescient work by Nemirovski from 1994 (rediscovered recently by Balkanski-Singer ) essentially showed optimality of GD up to //-depth D^{1/3}: a key contribution of our work is to improve this to the optimal D^{1/2} by a new ``Wall function"" construction. From the algo side, Duchi-Bartlett-Wainwright already showed that after D^{1/2} depth one can improve the 1/eps^2 rate of GD to D^{1/4}/eps. We also improve this regime, and leverage our highly smooth acceleration paper to obtain rate D^{1/3}/eps^{2/3}. Conjecture: D^{1/3}/eps^{2/3} is optimal in depth range [D^{1/2}, D]. Seems like fun & difficult convex geometry question! Both highly smooth and parallel acceleration papers are joint work with Qijia Jiang (Stanford grad student), Yin Tat Lee, Yuanzhi Li, and Aaron Sidford. @roydanroy @aminkarbasi I disagree so strongly with that last comment I don't even know where to begin... @aminkarbasi @roydanroy Dan's. I happen to midly (much much more midly) disagree with yours too, just in the sense that I think there *is* a quite significant gain for small accuracies. To be concrete, say you are in d=10^9, and you want to reach accuracy eps=10^-6, then our // alg is 100x improvement.",https://arxiv.org/abs/1906.10655,"A landmark result of non-smooth convex optimization is that gradient descent is an optimal algorithm whenever the number of computed gradients is smaller than the dimension $d$. In this paper we study the extension of this result to the parallel optimization setting. Namely we consider optimization algorithms interacting with a highly parallel gradient oracle, that is one that can answer $\mathrm{poly}(d)$ gradient queries in parallel. We show that in this case gradient descent is optimal only up to $\tilde{O}(\sqrt{d})$ rounds of interactions with the oracle. The lower bound improves upon a decades old construction by Nemirovski which proves optimality only up to $d^{1/3}$ rounds (as recently observed by Balkanski and Singer), and the suboptimality of gradient descent after $\sqrt{d}$ rounds was already observed by Duchi, Bartlett and Wainwright. In the latter regime we propose a new method with improved complexity, which we conjecture to be optimal. The analysis of this new method is based upon a generalized version of the recent results on optimal acceleration for highly smooth convex optimization. ",Complexity of Highly Parallel Non-Smooth Convex Optimization,6,"[""New paper on black-box complexity of *parallel convex optimization*. We uncover yet again a quadratic acceleration phenomenon: while serial gradient descent is optimal up to depth D (=dim of the problem), for parallel it's true only up to depth sqrt(D)!!! "", 'Prescient work by Nemirovski from 1994 (rediscovered recently by Balkanski-Singer https://t.co/yq1VxQScqK) essentially showed optimality of GD up to //-depth D^{1/3}: a key contribution of our work is to improve this to the optimal D^{1/2} by a new ``Wall function"" construction.', 'From the algo side, Duchi-Bartlett-Wainwright already showed that after D^{1/2} depth one can improve the 1/eps^2 rate of GD to D^{1/4}/eps. 
We also improve this regime, and leverage our highly smooth acceleration paper https://t.co/lU18Ee0d6h to obtain rate D^{1/3}/eps^{2/3}.', 'Conjecture: D^{1/3}/eps^{2/3} is optimal in depth range [D^{1/2}, D]. Seems like fun & difficult convex geometry question! \n\nBoth highly smooth and parallel acceleration papers are joint work with Qijia Jiang (Stanford grad student), Yin Tat Lee, Yuanzhi Li, and Aaron Sidford.', ""@roydanroy @aminkarbasi I disagree so strongly with that last comment I don't even know where to begin..."", ""@aminkarbasi @roydanroy Dan's.\n\nI happen to midly (much much more midly) disagree with yours too, just in the sense that I think there *is* a quite significant gain for small accuracies. To be concrete, say you are in d=10^9, and you want to reach accuracy eps=10^-6, then our // alg is 100x improvement.""]",19,06,1480
304,17,1222805051699318784,130791303,Daniel Nüst,"New preprint just out đđđŁ ""The Rockerverse: Packages and Applications for Containerization with R"" @Docker #RockerProject #Containers #rstats #ReproducibleResearch Just in time for #rstudioconf2020 đ Feedback welcome! Big thanks go out to all 20 coauthors: @eddelbuettel @dominicjbennett @rcannood @davclark @daroczig @HoloMarkeD @_ColinFay @ellis_hughes @lopp_sean @benmarwick @skyetetra @heatherklus Hong Ooi @_inundata @noamross Lori Shepherd @niteshturaga Craig Willis @nanxstats C. Van Petegem",https://arxiv.org/abs/2001.10641,"The Rocker Project provides widely used Docker images for R across different application scenarios. This article surveys downstream projects that build upon the Rocker Project images and presents the current state of R packages for managing Docker images and controlling containers. These use cases cover diverse topics such as package development, reproducible research, collaborative work, cloud-based data processing, and production deployment of services. The variety of applications demonstrates the power of the Rocker Project specifically and containerisation in general. Across the diverse ways to use containers, we identified common themes: reproducible environments, scalability and efficiency, and portability across clouds. We conclude that the current growth and diversification of use cases is likely to continue its positive impact, but see the need for consolidating the Rockerverse ecosystem of packages, developing common practices for applications, and exploring alternative containerisation software. ",The Rockerverse: Packages and Applications for Containerization with R,2,"['New preprint just out đđđŁ ""The Rockerverse: Packages and Applications for Containerization with R""\n\n\n\n@Docker #RockerProject #Containers #rstats #ReproducibleResearch\n\nJust in time for #rstudioconf2020 đ\n\nFeedback welcome! ', 'Big thanks go out to all 20 coauthors: @eddelbuettel @dominicjbennett @rcannood @davclark @daroczig @HoloMarkeD @_ColinFay @ellis_hughes @lopp_sean @benmarwick @skyetetra @heatherklus Hong Ooi @_inundata @noamross Lori Shepherd @niteshturaga Craig Willis @nanxstats C. Van Petegem']",20,01,519
305,71,1025179493399318528,1558538456,Rodrigo Fernández,"Converged mass ejection (and jet) from NS merger accretion disk around (promptly-formed) BH remnant, modeled in 3D GRMHD New paper in collaboration with Sasha Tchekhovskoy and Quataert, Foucart, & Kasen: Sasha's code is publicly available: ",https://arxiv.org/abs/1808.00461,"We investigate the long-term evolution of black hole accretion disks formed in neutron star mergers. These disks expel matter that contributes to an $r$-process kilonova, and can produce relativistic jets powering short gamma-ray bursts. Here we report the results of a three-dimensional, general-relativistic magnetohydrodynamic (GRMHD) simulation of such a disk which is evolved for long enough ($\sim 9$s, or $\sim 6\times 10^5 r_{\rm g}/c$) to achieve completion of mass ejection far from the disk. Our model starts with a poloidal field, and fully resolves the most unstable mode of the magnetorotational instability. We parameterize the dominant microphysics and neutrino cooling effects, and compare with axisymmetric hydrodynamic models with shear viscosity. The GRMHD model ejects mass in two ways: a prompt MHD-mediated outflow and a late-time, thermally-driven wind once the disk becomes advective. The total amount of unbound mass ejected ($0.013M_\odot$, or $\simeq 40\%$ of the initial torus mass) is twice as much as in hydrodynamic models, with higher average velocity ($0.1c$) and a broad electron fraction distribution with a lower average value ($0.16$). Scaling the ejected fractions to a disk mass of $\sim 0.1M_\odot$ can account for the red kilonova from GW170817 but underpredicts the blue component. About $\sim 10^{-3}M_\odot$ of material should undergo neutron freezout and could produce a bright kilonova precursor in the first few hours after the merger. With our idealized initial magnetic field configuration, we obtain a robust jet and sufficient ejecta with Lorentz factor $\sim 1-10$ to (over)produce the non-thermal emission from GW1708107. ","Long-term GRMHD Simulations of Neutron Star Merger Accretion Disks:
Implications for Electromagnetic Counterparts",2,"['Converged mass ejection (and jet) from NS merger accretion disk around (promptly-formed) BH remnant, modeled in 3D GRMHD\n\nNew paper in collaboration with Sasha Tchekhovskoy and Quataert, Foucart, & Kasen:\n\n ', ""Sasha's code is publicly available:\n\nhttps://t.co/rBpJohYan5""]",18,08,260
306,256,1367038919116787713,1100602684317536256,Alvaro M. Alhambra,"A little bit of self-promotion today: We study some definitions of Renyi MI, and show some nice properties like a thermal area law. Potentially some of them can be approximated with TN or other variational methods! Also apologies to all the Shannon theory folks who will realize we (still) did not define the Renyi MI in the""right"" way (with the optimization over the marginal). đ
@markwilde Thanks, that is good to hear! Yes, there is a lot of physics literature with that definition, and you really don't have to look for long to find situations where its for instance negative. @markwilde By the way, the result in your book showing that the ""geometric Renyi MI"" is H_0 for pure states really makes me think that it should be related to TN approximations for mixed states, but we were not able to make this precise. @lukyluke_t Thanks a lot Luca, I definitely missed these! As I understand it, in CFT the analytic continuation to n->1 means you can always calculate the vN entropy right? I'm still baffled that this works so well.",https://arxiv.org/abs/2103.01709,"The mutual information is a measure of classical and quantum correlations of great interest in quantum information. It is also relevant in quantum many-body physics, by virtue of satisfying an area law for thermal states and bounding all correlation functions. However, calculating it exactly or approximately is often challenging in practice. Here, we consider alternative definitions based on R\'enyi divergences. Their main advantage over their von Neumann counterpart is that they can be expressed as a variational problem whose cost function can be efficiently evaluated for families of states like matrix product operators while preserving all desirable properties of a measure of correlations. In particular, we show that they obey a thermal area law in great generality, and that they upper bound all correlation functions. We also investigate their behavior on certain tensor network states and on classical thermal distributions. ",Computable R\'enyi mutual information: Area laws and correlations,5,"['A little bit of self-promotion today: \n\nWe study some definitions of Renyi MI, and show some nice properties like a thermal area law. Potentially some of them can be approximated with TN or other variational methods!', 'Also apologies to all the Shannon theory folks who will realize we (still) did not define the Renyi MI in the""right"" way (with the optimization over the marginal). đ
', ""@markwilde Thanks, that is good to hear! Yes, there is a lot of physics literature with that definition, and you really don't have to look for long to find situations where its for instance negative."", '@markwilde By the way, the result in your book showing that the ""geometric Renyi MI"" is H_0 for pure states really makes me think that it should be related to TN approximations for mixed states, but we were not able to make this precise.', ""@lukyluke_t Thanks a lot Luca, I definitely missed these! As I understand it, in CFT the analytic continuation to n->1 means you can always calculate the vN entropy right? I'm still baffled that this works so well.""]",21,03,1044
307,78,1349267854072442885,1224997525725335552,Dom Emery,New research with @fu_yibin (and my first paper accepted!) We show that soft slender tubes under axial loading and surface tension can develop circumferential buckling modes which compete with the well-known beading instability @KeeleMaths ,https://arxiv.org/abs/2101.04165,"We provide an extension to previous analysis of the localised beading instability of soft slender tubes under surface tension and axial stretching. The primary questions pondered here are: under what loading conditions, if any, can bifurcation into circumferential buckling modes occur, and do such solutions dominate localisation and periodic axial modes? Three distinct boundary conditions are considered; in case 1 the tube's curved surfaces are traction free and under surface tension, whilst in cases 2 and 3 the inner and outer surfaces (respectively) are fixed to prevent radial displacement and surface tension. A linear bifurcation analysis is conducted to determine numerically the existence of circumferential mode solutions. In case 1 we focus on the tensile stress regime given the preference of slender compressed tubes towards Euler buckling over axial wrinkling. We show that tubes under several loading paths are highly sensitive to circumferential modes; in contrast, localised and periodic axial modes are absent, suggesting that the circumferential buckling is dominant by default. In case 2, circumferential mode solutions are associated with negative surface tension values and thus are physically implausible. Circumferential buckling solutions are shown to exist in case 3 for tensile and compressive axial loads, and we demonstrate for multiple loading scenarios their dominance over localisation and periodic axial modes within specific parameter regimes. ","Elasto-capillary circumferential buckling of soft tubes under axial
loading: existence and competition with localised beading and periodic axial
modes",1,['New research with @fu_yibin (and my first paper accepted!) We show that soft slender tubes under axial loading and surface tension can develop circumferential buckling modes which compete with the well-known beading instability @KeeleMaths \n\n '],21,01,254
308,138,1004357171734294528,318288924,Salim Arslan,See our latest preprint to find out how we use graph convolutional networks and class activations to identify brain regions (ROIs) with application to functional connectivity driven sex classification. Now online at w/ @s0f1ra @GlockerBen @DanielRueckert ,https://arxiv.org/abs/1806.01764,"Graph convolutional networks (GCNs) allow to apply traditional convolution operations in non-Euclidean domains, where data are commonly modelled as irregular graphs. Medical imaging and, in particular, neuroscience studies often rely on such graph representations, with brain connectivity networks being a characteristic example, while ultimately seeking the locus of phenotypic or disease-related differences in the brain. These regions of interest (ROIs) are, then, considered to be closely associated with function and/or behaviour. Driven by this, we explore GCNs for the task of ROI identification and propose a visual attribution method based on class activation mapping. By undertaking a sex classification task as proof of concept, we show that this method can be used to identify salient nodes (brain regions) without prior node labels. Based on experiments conducted on neuroimaging data of more than 5000 participants from UK Biobank, we demonstrate the robustness of the proposed method in highlighting reproducible regions across individuals. We further evaluate the neurobiological relevance of the identified regions based on evidence from large-scale UK Biobank studies. ","Graph Saliency Maps through Spectral Convolutional Networks: Application
to Sex Classification with Brain Connectivity",1,['See our latest preprint to find out how we use graph convolutional networks and class activations to identify brain regions (ROIs) with application to functional connectivity driven sex classification. Now online at w/ @s0f1ra @GlockerBen @DanielRueckert '],18,06,268
309,53,1339640488400400385,53836928,Tarun Chitra,"đšâ ïž Paper Alert â ïžđš Q: Have you wondered about math for the following? a) Optimal token qty to emit for yield farming incentives b) Hedging impermanent loss w/ options c) When do LPs not get rekt? A: New paper from moi, @alexhevans, @GuilleAngeris We highlight how to relate hedging costs and optimal yield farming in our final blog post 1. Greeks (â, Î, not @gakonst) are bounded by the curvature of the impact function 2. Curvature of CFMM controls how much you need to pay to avoid đ§ attacks This gives a quantitative answer to ""how much does @SushiSwap need to pay in $SUSHI to convince @UniswapProtocol liquidity for a particular pool to migrate?"" This cost varies from pair to pair and effectively decays to zero when you have a really gigantic pool Exercise for the reader (really @MartinTassy and @_Dave__White_): We generalize the profit conditions for Uniswap to arbitrary CFMMs. Find the proper renormalization (e.g. variance rescaling) to get a similar Kelly-style, continuous limit result for @CurveFinance This fact lets you adjust a CFMM's curvature in order to change the interval for LP profitability Almost all ""limit order"" approximations in CFMMs that I've seen effectively end up being curvature adjustments to adjust the conditions in this tweet đđŸ Which brings me to my final challenge: The @danrobinson (CFMMs) v. @SBF_Alameda (CLOBs) fight would be a infinitely more compelling if you compared đŁ continuous time, discrete price LOB 𧚠discrete time, continuous price CFMMs For LOBs: Market makers adjust liquidity by adding/removing *discrete* price/quantity orders For CFMMs: MMs adjust liquidity by adjusting curvature (e.g. bounds for where they want liquidity to be used, adding/removing liquidity) All to replicate the famous Glosten drawing đđŸ đđŸđđŸđđŸ for feedback/comments over the last 6mo: @ciamac @theyisun @adamlerer @ChiangRei @teo_leibowitz + some folks not on Twitter @CryptoCobain @alexhevans @GuilleAngeris Cobie, your brain is nuts, you never sleep and suck in heaps of information to form some dank, deep tweets I salute you ",https://arxiv.org/abs/2012.08040,"Liquidity and trading activity on constant function market makers (CFMMs) such as Uniswap, Curve, and Balancer has grown significantly in the second half of 2020. Much of the growth of these protocols has been driven by incentivized pools or 'yield farming', which reward participants in crypto assets for providing liquidity to CFMMs. As a result, CFMMs and associated protocols, which were historically very small markets, now constitute the most liquid trading venues for a large number of crypto assets. But what does it mean for a CFMM to be the most liquid market? In this paper, we propose a basic definition of price sensitivity and liquidity. We show that this definition is tightly related to the curvature of a CFMM's trading function and can be used to explain a number of heuristic results. For example, we show that low-curvature markets are good for coins whose market value is approximately fixed and that high-curvature markets are better for liquidity providers when traders have an informational edge. Additionally, the results can also be used to model interacting markets and explain the rise of incentivized liquidity provision, also known as 'yield farming.' ",When does the tail wag the dog? 
Curvature and market making,9,"['đšâ ïž Paper Alert â ïžđš\n\nQ: Have you wondered about math for the following?\n\na) Optimal token qty to emit for yield farming incentives\nb) Hedging impermanent loss w/ options\nc) When do LPs not get rekt?\n\nA: New paper from moi, @alexhevans, @GuilleAngeris \n\n', 'We highlight how to relate hedging costs and optimal yield farming in our final blog post\n\n1. Greeks (â, Î, not @gakonst) are bounded by the curvature of the impact function\n\n2. Curvature of CFMM controls how much you need to pay to avoid đ§ attacks\n\nhttps://t.co/WI9fbvuARv', 'This gives a quantitative answer to ""how much does @SushiSwap need to pay in $SUSHI to convince @UniswapProtocol liquidity for a particular pool to migrate?""\n\nThis cost varies from pair to pair and effectively decays to zero when you have a really gigantic pool https://t.co/ePovkIMqwD', 'Exercise for the reader (really @MartinTassy and @_Dave__White_): \n\nWe generalize the profit conditions for Uniswap to arbitrary CFMMs. Find the proper renormalization (e.g. variance rescaling) to get a similar Kelly-style, continuous limit result for @CurveFinance https://t.co/MNbrk3hPMJ', 'This fact lets you adjust a CFMM\'s curvature in order to change the interval for LP profitability\n\nAlmost all ""limit order"" approximations in CFMMs that I\'ve seen effectively end up being curvature adjustments to adjust the conditions in this tweet đđŸ\n\nhttps://t.co/AQpS3JRLRZ', 'Which brings me to my final challenge:\n\nThe @danrobinson (CFMMs) v. @SBF_Alameda (CLOBs) fight would be a infinitely more compelling if you compared\n\nđŁ continuous time, discrete price LOB \n𧚠discrete time, continuous price CFMMs https://t.co/LMFujC3jkY', 'For LOBs: Market makers adjust liquidity by adding/removing *discrete* price/quantity orders\n\nFor CFMMs: MMs adjust liquidity by adjusting curvature (e.g. bounds for where they want liquidity to be used, adding/removing liquidity)\n\nAll to replicate the famous Glosten drawing đđŸ https://t.co/wVRHwkhqxj', 'đđŸđđŸđđŸ for feedback/comments over the last 6mo:\n@ciamac @theyisun @adamlerer @ChiangRei @teo_leibowitz + some folks not on Twitter', '@CryptoCobain @alexhevans @GuilleAngeris Cobie, your brain is nuts, you never sleep and suck in heaps of information to form some dank, deep tweets \n\nI salute you https://t.co/GotXV6NQVE']",20,12,2124
310,190,1359517751212138501,1117927926152990720,Ramon Astudillo,Are implicit word node-alignments useful for AMR parsing? They do help in multi-lingual settings!. We propose a multi-lingual AMR alignment method which applied to the stack-Transformer yields best results till date for multi-lingual AMR (EACL2021) Work led by @JanakiSheth while interning at IBM!,https://arxiv.org/abs/2102.02189,"We develop high performance multilingual Abstract Meaning Representation (AMR) systems by projecting English AMR annotations to other languages with weak supervision. We achieve this goal by bootstrapping transformer-based multilingual word embeddings, in particular those from cross-lingual RoBERTa (XLM-R large). We develop a novel technique for foreign-text-to-English AMR alignment, using the contextual word alignment between English and foreign language tokens. This word alignment is weakly supervised and relies on the contextualized XLM-R word embeddings. We achieve a highly competitive performance that surpasses the best published results for German, Italian, Spanish and Chinese. ",Bootstrapping Multilingual AMR with Contextual Word Alignments,2,"['Are implicit word node-alignments useful for AMR parsing? They do help in multi-lingual settings!. We propose a multi-lingual AMR alignment method which applied to the stack-Transformer yields best results till date for multi-lingual AMR (EACL2021) ', 'Work led by @JanakiSheth while interning at IBM!']",21,02,304
311,47,801025701109436417,14699604,James Hensman,"New paper ""Variational Fourier Features for Gaussian Processes"": . 10^6 data in seconds. With @arnosolin & N.Durrande @vallens @arnosolin limitation is input dimensionality, so will work very well for decoding, but perhaps need something else to encode. @geospacedman @arnosolin working on it. Code is here: @geospacedman @arnosolin Here we go: ",https://arxiv.org/abs/1611.06740,"This work brings together two powerful concepts in Gaussian processes: the variational approach to sparse approximation and the spectral representation of Gaussian processes. This gives rise to an approximation that inherits the benefits of the variational approach but with the representational power and computational scalability of spectral representations. The work hinges on a key result that there exist spectral features related to a finite domain of the Gaussian process which exhibit almost-independent covariances. We derive these expressions for Matern kernels in one dimension, and generalize to more dimensions using kernels with specific structures. Under the assumption of additive Gaussian noise, our method requires only a single pass through the dataset, making for very fast and accurate computation. We fit a model to 4 million training points in just a few minutes on a standard laptop. With non-conjugate likelihoods, our MCMC scheme reduces the cost of computation from O(NM2) (for a sparse Gaussian process) to O(NM) per iteration, where N is the number of data and M is the number of features. ",Variational Fourier features for Gaussian processes,4,"['New paper ""Variational Fourier Features for Gaussian Processes"": . 10^6 data in seconds. With @arnosolin & N.Durrande', '@vallens @arnosolin limitation is input dimensionality, so will work very well for decoding, but perhaps need something else to encode.', '@geospacedman @arnosolin working on it. Code is here: https://t.co/mP0Jsl0JHD', '@geospacedman @arnosolin Here we go: https://t.co/zZLs8kp7Oo']",16,11,364
312,184,1376798959793008641,1194794814690086912,Andrea Skolik,"Can we teach a quantum computer to balance a pole? Find out which architectural choices are crucial to making quantum agents succeed at deep Q-learning in our new paper: We specifically show how the choice of observables will make or break Q-learning with PQCs, and how to make an informed choice by what we know about the optimal Q-values.",https://arxiv.org/abs/2103.15084,"Quantum machine learning (QML) has been identified as one of the key fields that could reap advantages from near-term quantum devices, next to optimization and quantum chemistry. Research in this area has focused primarily on variational quantum algorithms (VQAs), and several proposals to enhance supervised, unsupervised and reinforcement learning (RL) algorithms with VQAs have been put forward. Out of the three, RL is the least studied and it is still an open question whether VQAs can be competitive with state-of-the-art classical algorithms based on neural networks (NNs) even on simple benchmark tasks. In this work, we introduce a training method for parametrized quantum circuits (PQCs) that can be used to solve RL tasks for discrete and continuous state spaces based on the deep Q-learning algorithm. We investigate which architectural choices for quantum Q-learning agents are most important for successfully solving certain types of environments by performing ablation studies for a number of different data encoding and readout strategies. We provide insight into why the performance of a VQA-based Q-learning algorithm crucially depends on the observables of the quantum model and show how to choose suitable observables based on the learning task at hand. To compare our model against the classical DQN algorithm, we perform an extensive hyperparameter search of PQCs and NNs with varying numbers of parameters. We confirm that similar to results in classical literature, the architectural choices and hyperparameters contribute more to the agents' success in a RL setting than the number of parameters used in the model. Finally, we show when recent separation results between classical and quantum agents for policy gradient RL can be extended to inferring optimal Q-values in restricted families of environments. ","Quantum agents in the Gym: a variational quantum algorithm for deep
Q-learning",2,"['Can we teach a quantum computer to balance a pole? Find out which architectural choices are crucial to making quantum agents succeed at deep Q-learning in our new paper: ', 'We specifically show how the choice of observables will make or break Q-learning with PQCs, and how to make an informed choice by what we know about the optimal Q-values.']",21,03,354
313,246,1369596736054976514,471165766,Ion Errea,"Today it is an important day in my research career: we release the SSCHA code. You can find the info about the code here (), read the paper related to the code in , and follow it on twitter @SSCHA_code All this started when in my PhD Bruno Rousseau proposed to use the Self-Consistent Harmonic Approximation to study the high-pressure simple cubic phase of Ca. We implemented the idea, but the implementation was too primordial for any other material As a postdoc I moved to Paris to work Francesco Mauri's group. We developed there the SSCHA and started its first applications on hydrogen-based superconductors Later Raffaello Bianco showed how the SSCHA theory could be expanded to predict second-order phase transitions easily and describe phonon spectral properties These improvements have been crucial to study CDW phase transitions and perform thermal transport calculations on thermoelectric materials Now @mesonepi has taken the lead in the development of the code and brought to it an improved efficiency and the possibility of making structural relaxations in the quantum energy landscape The latter development was crucial to show how quantum effects make stable the high-temperature superconducting LaH10 Now the code is distributed as open source, available to everybody. Your contributions will be welcome and surely will improve the capacity of the code.",https://arxiv.org/abs/2103.03973,"The efficient and accurate calculation of how ionic quantum and thermal fluctuations impact the free energy of a crystal, its atomic structure, and phonon spectrum is one of the main challenges of solid state physics, especially when strong anharmonicy invalidates any perturbative approach. To tackle this problem, we present the implementation on a modular Python code of the stochastic self-consistent harmonic approximation method. This technique rigorously describes the full thermodyamics of crystals accounting for nuclear quantum and thermal anharmonic fluctuations. The approach requires the evaluation of the Born-Oppenheimer energy, as well as its derivatives with respect to ionic positions (forces) and cell parameters (stress tensor) in supercells, which can be provided, for instance, by first principles density-functional-theory codes. The method performs crystal geometry relaxation on the quantum free energy landscape, optimizing the free energy with respect to all degrees of freedom of the crystal structure. It can be used to determine the phase diagram of any crystal at finite temperature. It enables the calculation of phase boundaries for both first-order and second-order phase transitions from the Hessian of the free energy. Finally, the code can also compute the anharmonic phonon spectra, including the phonon linewidths, as well as phonon spectral functions. We review the theoretical framework of the stochastic self-consistent harmonic approximation and its dynamical extension, making particular emphasis on the physical interpretation of the variables present in the theory that can enlighten the comparison with any other anharmonic theory. A modular and flexible Python environment is used for the implementation, which allows for a clean interaction with other packages. We briefly present a toy-model calculation to illustrate the potential of the code. ","The Stochastic Self-Consistent Harmonic Approximation: Calculating
Vibrational Properties of Materials with Full Quantum and Anharmonic Effects",8,"['Today it is an important day in my research career: we release the SSCHA code. You can find the info about the code here (), read the paper related to the code in , and follow it on twitter @SSCHA_code', 'All this started when in my PhD Bruno Rousseau proposed to use the Self-Consistent Harmonic Approximation to study the high-pressure simple cubic phase of Ca. We implemented the idea, but the implementation was too primordial for any other material https://t.co/BNDNge7JkJ', 'As a postdoc I moved to Paris to work Francesco Mauri's group. We developed there the SSCHA and started its first applications on hydrogen-based superconductors https://t.co/QLPzgta2Ou https://t.co/WbznRxy3vy', 'Later Raffaello Bianco showed how the SSCHA theory could be expanded to predict second-order phase transitions easily and describe phonon spectral properties https://t.co/rXROrJKbqH', 'These improvements have been crucial to study CDW phase transitions and perform thermal transport calculations on thermoelectric materials https://t.co/REWg8LO0Ff https://t.co/nrouwbpX5h', 'Now @mesonepi has taken the lead in the development of the code and brought to it an improved efficiency and the possibility of making structural relaxations in the quantum energy landscape https://t.co/5qfmpNhzMF', 'The latter development was crucial to show how quantum effects make stable the high-temperature superconducting LaH10 https://t.co/aIpRNxa95S', 'Now the code is distributed as open source, available to everybody. Your contributions will be welcome and surely will improve the capacity of the code.']",21,03,1437
314,87,1285297480385650689,193887081,Marc van Zee,"New paper! ""Compositional Generalization in Semantic Parsing: Pre-training vs. Specialized Architectures"". We assess techniques/architectures's effectiveness in improving compositional generalization based on the SCAN and CFQ datasets. () Highlights [1/5]: (1) We provide the most comprehensive summary so far of architectures and techniques that have been applied to SCAN or CFQ [2/5] (2) Pre-training (using T5) helps for compositional generalization, but does not solve it. By combining pre-training with an intermediate representation, we obtain a new SOTA score for CFQ of 42.1% on the MCD-mean split, beating the previous best results of 18.9%. [3/5] (3) For the specialized architectures we evaluated (e.g., Neural Shuffle Exchange Network and CGPS), improvements obtained on one compositional generalization benchmark do not transfer to others. [4/5] (4) Improvements to general-purpose architectures (e.g., LSTM -> Transformer) generally lead to corresponding incremental improvements in compositional settings. [5/5] Cells with a grey background are results we add in our paper; those with a white background are existing results.",https://arxiv.org/abs/2007.08970,"While mainstream machine learning methods are known to have limited ability to compositionally generalize, new architectures and techniques continue to be proposed to address this limitation. We investigate state-of-the-art techniques and architectures in order to assess their effectiveness in improving compositional generalization in semantic parsing tasks based on the SCAN and CFQ datasets. We show that masked language model (MLM) pre-training rivals SCAN-inspired architectures on primitive holdout splits. On a more complex compositional task, we show that pre-training leads to significant improvements in performance vs. comparable non-pre-trained models, whereas architectures proposed to encourage compositional generalization on SCAN or in the area of algorithm learning fail to lead to significant improvements. We establish a new state of the art on the CFQ compositional generalization benchmark using MLM pre-training together with an intermediate representation. ","Compositional Generalization in Semantic Parsing: Pre-training vs.
Specialized Architectures",6,"['New paper! ""Compositional Generalization in Semantic Parsing: Pre-training vs. Specialized Architectures"". We assess techniques/architectures\'s effectiveness in improving compositional generalization based on the SCAN and CFQ datasets. () Highlights [1/5]:', '(1) We provide the most comprehensive summary so far of architectures and techniques that have been applied to SCAN or CFQ [2/5] https://t.co/VfURdjki4C', '(2) Pre-training (using T5) helps for compositional generalization, but does not solve it. By combining pre-training with an intermediate representation, we obtain a new SOTA score for CFQ of 42.1% on the MCD-mean split, beating the previous best results of 18.9%. [3/5]', '(3) For the specialized architectures we evaluated (e.g., Neural Shuffle Exchange Network and CGPS), improvements obtained on one compositional generalization benchmark do not transfer to others. [4/5]', '(4) Improvements to general-purpose architectures (e.g., LSTM -> Transformer) generally lead to corresponding incremental improvements in compositional settings. [5/5]', 'Cells with a grey background are results we add in our paper; those with a white background are existing results.']",20,07,1156
315,92,1103779809430249475,866221331184050176,Maximilian N. GĂŒnther,"New paper on @arxiv today! Led by @ZhuchangZ (MIT), we found the weirdest and fastest rotating M-dwarfs ever seen. Thanks to @NASA_TESS / @TESSatMIT! The paper is in the stellar (not planetary) arxiv, so don't miss it: :) I mean, just look at these weird phase-folded lightcurves... ...and those crazy flares that completely change the pattern for a few rotations... ...and this is what we think might cause these profiles: spots and gas/dust rings (follow-up paper in prep.) ",https://arxiv.org/abs/1903.02061,"We have searched for short periodicities in the light curves of stars with $T_{\rm eff}$ cooler than 4000 K made from 2-minute cadence data obtained in TESS sectors 1 and 2. Herein we report the discovery of 10 rapidly rotating M-dwarfs with highly structured rotational modulation patterns among 10 M dwarfs found to have rotation periods less than 1 day. Star-spot models cannot explain the highly structured periodic variations which typically exhibit between 10 and 40 Fourier harmonics. A similar set of objects was previously reported following K2 observations of the Upper Scorpius association (Stauffer et al. 2017). We examine the possibility that the unusual structured light-curves could stem from absorption by charged dust particles that are trapped in or near the stellar magnetosphere. We also briefly explore the possibilities that the sharp structured features in the lightcurves are produced by extinction by coronal gas, by beaming of the radiation emitted from the stellar surface, or by occultations of spots by a dusty ring that surrounds the star. The latter is perhaps the most promising of these scenarios. Most of the structured rotators display flaring activity, and we investigate changes in the modulation pattern following the largest flares. As part of this study, we also report the discovery of 371 rapidly rotating M-dwarfs with rotational periods below 4 hr, of which the shortest period is 1.63 hr. ","Complex Rotational Modulation of Rapidly Rotating M-Stars Observed with
TESS",4,"[""New paper on @arxiv today! Led by @ZhuchangZ (MIT), we found the weirdest and fastest rotating M-dwarfs ever seen. Thanks to @NASA_TESS / @TESSatMIT! The paper is in the stellar (not planetary) arxiv, so don't miss it: :)"", 'I mean, just look at these weird phase-folded lightcurves... https://t.co/HkFEAqFrm1', '...and those crazy flares that completely change the pattern for a few rotations... https://t.co/fbo5e26mav', '...and this is what we think might cause these profiles: spots and gas/dust rings (follow-up paper in prep.) https://t.co/yuKJlkP56d']",19,03,503
316,248,1369295578849415168,1261860960895004672,Jorge Moreno,"It's paper day. We find that bursty episodes of star formation at high redshift promote the formation of the thick disc in simulated Milky Ways (FIRE zooms). Congrats to the lead author, @AstroBananna, for this fantastic paper! And kudos to all the coauthors! @jbprime @astro_klein @AndrewWetzel @alexbgurvich @PFHopkins_Astro",https://arxiv.org/abs/2103.03888,"We investigate thin and thick stellar disc formation in Milky-Way-mass galaxies using twelve FIRE-2 cosmological zoom-in simulations. All simulated galaxies experience an early period of bursty star formation that transitions to a late-time steady phase of near-constant star formation. Stars formed during the late-time steady phase have more circular orbits and thin-disc-like morphology at $z=0$, whilst stars born during the bursty phase have more radial orbits and thick-disc structure. The median age of thick-disc stars at $z=0$ correlates strongly with this transition time. We also find that galaxies with an earlier transition from bursty to steady star formation have a higher thin-disc fractions at $z=0$. Three of our systems have minor mergers with LMC-size satellites during the thin-disc phase. These mergers trigger short starbursts but do not destroy the thin disc nor alter broad trends between the star formation transition time and thin/thick disc properties. If our simulations are representative of the Universe, then stellar archaeological studies of the Milky Way (or M31) provide a window into past star-formation modes in the Galaxy. Current age estimates of the Galactic thick disc would suggest that the Milky Way transitioned from bursty to steady phase $\sim$6.5 Gyr ago; prior to that time the Milky Way likely lacked a recognisable thin disc. ",The bursty origin of the Milky Way thick disc,2,"[""It's paper day. We find that bursty episodes of star formation at high redshift promote the formation of the thick disc in simulated Milky Ways (FIRE zooms). Congrats to the lead author, @AstroBananna, for this fantastic paper! "", 'And kudos to all the coauthors! @jbprime @astro_klein @AndrewWetzel @alexbgurvich @PFHopkins_Astro']",21,03,340
317,201,1506222618243915779,60607937,F. Güney,"as a follow-up to SLAMP, we propose to disentangle structure and motion for future prediction. we model the stochasticity in the underlying factors generating the pixels, i.e. predict future ego-motion, then conditioned on that, predict the object motion. our method is significantly (40x) faster than VRNN and can handle interesting diverse examples like the one below (yes, turning is an interesting case where the majority of the data is going straight. ",https://arxiv.org/abs/2203.10528,"While stochastic video prediction models enable future prediction under uncertainty, they mostly fail to model the complex dynamics of real-world scenes. For example, they cannot provide reliable predictions for scenes with a moving camera and independently moving foreground objects in driving scenarios. The existing methods fail to fully capture the dynamics of the structured world by only focusing on changes in pixels. In this paper, we assume that there is an underlying process creating observations in a video and propose to factorize it into static and dynamic components. We model the static part based on the scene structure and the ego-motion of the vehicle, and the dynamic part based on the remaining motion of the dynamic objects. By learning separate distributions of changes in foreground and background, we can decompose the scene into static and dynamic parts and separately model the change in each. Our experiments demonstrate that disentangling structure and motion helps stochastic video prediction, leading to better future predictions in complex driving scenarios on two real-world driving datasets, KITTI and Cityscapes. ",Stochastic Video Prediction with Structure and Motion,2,"['as a follow-up to SLAMP, we propose to disentangle structure and motion for future prediction. we model the stochasticity in the underlying factors generating the pixels, i.e. predict future ego-motion, then conditioned on that, predict the object motion.\n ', 'our method is significantly (40x) faster than VRNN and can handle interesting diverse examples like the one below (yes, turning is an interesting case where the majority of the data is going straight. https://t.co/VGF3K7HBU4']",22,03,477
318,19,1354837390188109831,1515424688,Armen Aghajanyan,"I'm happy to present our new paper MUPPET (), arguing for an additional stage between pre-training and fine-tuning, called pre-finetuning which uses massively multi-task learning (~50 tasks) to further refine representations. Recent work has shown gains from MTL/multi-stage fine-tuning, but it can be hard to know which intermediate tasks will best transfer. We show that multi-task supervised tuning is effective if done at scale (# tasks), removing the need to pre-select the best intermediate tasks. Our multi-task set up consists of 46 datasets across 4 task types with close to 5 million supervised samples, using a classification head for each classification dataset, and unified heads for MRC and CommonSense tasks. After solving practical problems (loss scaling, heterogeneous batches, etc) we pre-finetune RoBERTa variants and BART. We first look at fine-tuning MUPPET over datasets available in the pre-finetuning regime. MUPPET unanimously improves over it's base model for GLUE/SQuAD. MUPPET also improves over it's base model for sentence prediction, commonsense, and summarization tasks. To show that MUPPET representations are more generalizable, we also measure MUPPET performance over datasets not available in the pre-finetuning regime. MUPPET once again improves consistently over it's only pre-trained counterparts. MTL historically has given inconclusive results. So why does MUPPET work? Turns out scale is fundamental for MTL. There exists a critical # of tasks (~15) under which pre-finetuning degrades representations. But over this critical point linearly improves representations. As another side-effect, MUPPET variants of pre-trained models show much better data-efficiency for downstream fine-tuning. This was joint work with great authors Anchit Gupta, @AkshatS07, Xilun Chen, @LukeZettlemoyer, @sonalsgupta",https://arxiv.org/abs/2101.11038,"We propose pre-finetuning, an additional large-scale learning stage between language model pre-training and fine-tuning. Pre-finetuning is massively multi-task learning (around 50 datasets, over 4.8 million total labeled examples), and is designed to encourage learning of representations that generalize better to many different tasks. We show that pre-finetuning consistently improves performance for pretrained discriminators (e.g.~RoBERTa) and generation models (e.g.~BART) on a wide range of tasks (sentence prediction, commonsense reasoning, MRC, etc.), while also significantly improving sample efficiency during fine-tuning. We also show that large-scale multi-tasking is crucial; pre-finetuning can hurt performance when few tasks are used up until a critical point (usually above 15) after which performance improves linearly in the number of tasks. ",Muppet: Massive Multi-task Representations with Pre-Finetuning,9,"[""I'm happy to present our new paper MUPPET (), arguing for an additional stage between pre-training and fine-tuning, called pre-finetuning which uses massively multi-task learning (~50 tasks) to further refine representations."", 'Recent work has shown gains from MTL/multi-stage fine-tuning, but it can be hard to know which intermediate tasks will best transfer. 
We show that multi-task supervised tuning is effective if done at scale (# tasks), removing the need to pre-select the best intermediate tasks.', 'Our multi-task set up consists of 46 datasets across 4 task types with close to 5 million supervised samples, using a classification head for each classification dataset, and unified heads for MRC and CommonSense tasks. https://t.co/4Nw4oQlbRo', 'After solving practical problems (loss scaling, heterogeneous batches, etc) we pre-finetune RoBERTa variants and BART. We first look at fine-tuning MUPPET over datasets available in the pre-finetuning regime. MUPPET unanimously improves over it's base model for GLUE/SQuAD. https://t.co/UtrMe18vnW', 'MUPPET also improves over it's base model for sentence prediction, commonsense, and summarization tasks. https://t.co/wTNwum3BB5', 'To show that MUPPET representations are more generalizable, we also measure MUPPET performance over datasets not available in the pre-finetuning regime. MUPPET once again improves consistently over it's only pre-trained counterparts. https://t.co/lrYvGnZtBD', 'MTL historically has given inconclusive results. So why does MUPPET work? Turns out scale is fundamental for MTL. There exists a critical # of tasks (~15) under which pre-finetuning degrades representations. But over this critical point linearly improves representations. https://t.co/AJmjtfHvxp', 'As another side-effect, MUPPET variants of pre-trained models show much better data-efficiency for downstream fine-tuning.', 'This was joint work with great authors Anchit Gupta, @AkshatS07, Xilun Chen, @LukeZettlemoyer, @sonalsgupta']",21,01,1880
319,177,1366621774738124801,129155606,Shahroz Tariq,"Happy to share our new #preprint on #arxiv "Am I a Real or Fake Celebrity? Measuring Commercial Face Recognition Web APIs under Deepfake Impersonation Attack" \nPaper: \n\nIt is an extensive #measurement and #evaluation of Commercial Face Recognition #WebAPIs against #Deepfakes. We also propose a preliminary defense mechanism against the #DeepfakeImpersonationAttack and compare several defense strategies. Finally, we are also releasing two new #DeepfakeDataset for evaluation. #impersonationattack #facerecognition",http://arxiv.org/abs/2103.00847,"Recently, significant advancements have been made in face recognition technologies using Deep Neural Networks. As a result, companies such as Microsoft, Amazon, and Naver offer highly accurate commercial face recognition web services for diverse applications to meet the end-user needs. Naturally, however, such technologies are threatened persistently, as virtually any individual can quickly implement impersonation attacks. In particular, these attacks can be a significant threat for authentication and identification services, which heavily rely on their underlying face recognition technologies' accuracy and robustness. Despite its gravity, the issue regarding deepfake abuse using commercial web APIs and their robustness has not yet been thoroughly investigated. This work provides a measurement study on the robustness of black-box commercial face recognition APIs against Deepfake Impersonation (DI) attacks using celebrity recognition APIs as an example case study. We use five deepfake datasets, two of which are created by us and planned to be released. More specifically, we measure attack performance based on two scenarios (targeted and non-targeted) and further analyze the differing system behaviors using fidelity, confidence, and similarity metrics. Accordingly, we demonstrate how vulnerable face recognition technologies from popular companies are to DI attack, achieving maximum success rates of 78.0% and 99.9% for targeted (i.e., precise match) and non-targeted (i.e., match with any celebrity) attacks, respectively. Moreover, we propose practical defense strategies to mitigate DI attacks, reducing the attack success rates to as low as 0% and 0.02% for targeted and non-targeted attacks, respectively. ","Am I a Real or Fake Celebrity? Measuring Commercial Face Recognition Web
APIs under Deepfake Impersonation Attack",2,"['Happy to share our new #preprint on #arxiv "Am I a Real or Fake Celebrity? Measuring Commercial Face Recognition Web APIs under Deepfake Impersonation Attack" \nPaper: \n\nIt is an extensive #measurement and #evaluation of Commercial Face Recognition #WebAPIs', 'against #Deepfakes. We also propose a preliminary defense mechanism against the #DeepfakeImpersonationAttack and compare several defense strategies. Finally, we are also releasing two new #DeepfakeDataset for evaluation. \n\n#impersonationattack #facerecognition']",21,03,520
320,72,1470674171545927680,1381477369903513601,Ilker Birbil @ UvA,"We've just released our new meta-learning algorithm: LESS - LEarning with Subset Stacking LESS is flexible, fast, and accurate. If you get a chance to try it, please let us know how it works for you. Paper: Code & Tutorials: The current version of LESS works for regression, and we will hopefully add classification by early 2022. @danial_esm We have tried the straightforward parallelisation and reported our results. The idea about using "local agents" sounds interesting. ",https://arxiv.org/abs/2112.06251,"We propose a new algorithm that learns from a set of input-output pairs. Our algorithm is designed for populations where the relation between the input variables and the output variable exhibits a heterogeneous behavior across the predictor space. The algorithm starts with generating subsets that are concentrated around random points in the input space. This is followed by training a local predictor for each subset. Those predictors are then combined in a novel way to yield an overall predictor. We call this algorithm ""LEarning with Subset Stacking"" or LESS, due to its resemblance to method of stacking regressors. We compare the testing performance of LESS with the state-of-the-art methods on several datasets. Our comparison shows that LESS is a competitive supervised learning method. Moreover, we observe that LESS is also efficient in terms of computation time and it allows a straightforward parallel implementation. ",Learning with Subset Stacking,3,"['We've just released our new meta-learning algorithm:\n\nLESS - LEarning with Subset Stacking\n\nLESS is flexible, fast, and accurate. If you get a chance to try it, please let us know how it works for you. \n\nPaper: \nCode & Tutorials: ', 'The current version of LESS works for regression, and we will hopefully add classification by early 2022.', '@danial_esm We have tried the straightforward parallelisation and reported our results. The idea about using "local agents" sounds interesting. https://t.co/f4XrVydE7J']",21,12,503
321,20,1420910060721512448,746440524052082688,Nicolas Delfosse,"Check out our new paper on using soft information to improve quantum error correction with Chris Pattison, Michael Beverland, @themarcusps. That was great to have Chris as an intern in our group last Fall! Why should you read this paper? 1/ You want better quantum error correction. 2/ Your measurement outcomes look continuous and not only +1 or -1 like in most quantum error correction papers. 3/ You want to understand how to to optimize the measurement time of your physical qubits. I am especially excited about our study of the optimal measurement time. Have a look at Figure 12.",https://arxiv.org/abs/2107.13589,"The typical model for measurement noise in quantum error correction is to randomly flip the binary measurement outcome. In experiments, measurements yield much richer information - e.g., continuous current values, discrete photon counts - which is then mapped into binary outcomes by discarding some of this information. In this work, we consider methods to incorporate all of this richer information, typically called soft information, into the decoding of quantum error correction codes, and in particular the surface code. We describe how to modify both the Minimum Weight Perfect Matching and Union-Find decoders to leverage soft information, and demonstrate these soft decoders outperform the standard (hard) decoders that can only access the binary measurement outcomes. Moreover, we observe that the soft decoder achieves a threshold 25\% higher than any hard decoder for phenomenological noise with Gaussian soft measurement outcomes. We also introduce a soft measurement error model with amplitude damping, in which measurement time leads to a trade-off between measurement resolution and additional disturbance of the qubits. Under this model we observe that the performance of the surface code is very sensitive to the choice of the measurement time - for a distance-19 surface code, a five-fold increase in measurement time can lead to a thousand-fold increase in logical error rate. Moreover, the measurement time that minimizes the physical error rate is distinct from the one that minimizes the logical performance, pointing to the benefits of jointly optimizing the physical and quantum error correction layers. ",Improved quantum error correction using soft information,3,"['Check out our new paper on using soft information to improve quantum error correction with Chris Pattison, Michael Beverland, @themarcusps.\nThat was great to have Chris as an intern in our group last Fall!\n\n', 'Why should you read this paper?\n1/ You want better quantum error correction.\n2/ Your measurement outcomes look continuous and not only +1 or -1 like in most quantum error correction papers.\n3/ You want to understand how to to optimize the measurement time of your physical qubits.', 'I am especially excited about our study of the optimal measurement time. Have a look at Figure 12.']",21,07,592
322,167,1414375806885842948,1001049754787368960,Dr. Yu-Dai Tsai,"Our new paper is out. ""Asteroid g-2 Experiments!"" This is, to our knowledge, the first time asteroid astronomy was used to study new physics. Comments and suggestions are more than welcome. This is the beginning of a new research direction! @lucavisinelli @SunnyVagnozzi",https://arxiv.org/abs/2107.04038,"We study for the first time the possibility of probing long-range fifth forces utilizing asteroid astrometric data, via the fifth force-induced orbital precession. We examine nine Near-Earth Object (NEO) asteroids whose orbital trajectories are accurately determined via optical and radar astrometry. Focusing on a Yukawa-type potential mediated by a new gauge field (dark photon) or a baryon-coupled scalar, we estimate the sensitivity reach for the fifth-force coupling strength and mediator mass in the mass range $m \simeq 10^{-21}-10^{-15}\,{\rm eV}$. Our estimated sensitivity is comparable to leading limits from torsion balance experiments, potentially exceeding these in a specific mass range. The fifth forced-induced precession increases with the orbital semi-major axis in the small $m$ limit, motivating the study of objects further away from the Sun. We discuss future exciting prospects for extending our study to more than a million asteroids (including NEOs, main-belt asteroids, Hildas, and Jupiter Trojans), as well as trans-Neptunian objects and exoplanets. ",Asteroid astrometry as a fifth-force and ultralight dark sector probe,2,"['Our new paper is out. ""Asteroid g-2 Experiments!""\n\nThis is, to our knowledge, the first time asteroid astronomy was used to study new physics. Comments and suggestions are more than welcome. This is the beginning of a new research direction!', '@lucavisinelli @SunnyVagnozzi']",21,07,277
323,78,1306690324215996416,1008944276431036416,Boris Ivanovic,"New paper up on arXiv with Amine Elhafsi, Guy Rosman, @adnothing, @MarcoPavoneSU!! In it, we propose a new multi-agent trajectory forecasting output representation that is much more amenable to downstream planning and control algorithms. Check it out at ! ",https://arxiv.org/abs/2009.07517,"Reasoning about human motion is a core component of modern human-robot interactive systems. In particular, one of the main uses of behavior prediction in autonomous systems is to inform robot motion planning and control. However, a majority of planning and control algorithms reason about system dynamics rather than the predicted agent tracklets (i.e., ordered sets of waypoints) that are commonly output by trajectory forecasting methods, which can hinder their integration. Towards this end, we propose Mixtures of Affine Time-varying Systems (MATS) as an output representation for trajectory forecasting that is more amenable to downstream planning and control use. Our approach leverages successful ideas from probabilistic trajectory forecasting works to learn dynamical system representations that are well-studied in the planning and control literature. We integrate our predictions with a proposed multimodal planning methodology and demonstrate significant computational efficiency improvements on a large-scale autonomous driving dataset. ","MATS: An Interpretable Trajectory Forecasting Representation for
Planning and Control",1,"['New paper up on arXiv with Amine Elhafsi, Guy Rosman, @adnothing, @MarcoPavoneSU!! In it, we propose a new multi-agent trajectory forecasting output representation that is much more amenable to downstream planning and control algorithms. Check it out at ! ']",20,09,268
324,206,1385077064995270656,844013017830379522,Nayeon Lee,đšHappy to share 2 papers accepted in #NAACL2021 to tackle misinformation đđ©ââïžââ
We propose an UnifiedM2 model that unifies multiple domains of misinfo to learn a richer representation that aids effective few-shot learning of unseen misinfo tasks.1/n Many thanks to all the co-authors đ @belindazli @sinongwang @pascalefung @gabema @scottyih @MadianKhabsa,https://arxiv.org/abs/2104.05243,"In this paper, we introduce UnifiedM2, a general-purpose misinformation model that jointly models multiple domains of misinformation with a single, unified setup. The model is trained to handle four tasks: detecting news bias, clickbait, fake news, and verifying rumors. By grouping these tasks together, UnifiedM2 learns a richer representation of misinformation, which leads to state-of-the-art or comparable performance across all tasks. Furthermore, we demonstrate that UnifiedM2's learned representation is helpful for few-shot learning of unseen misinformation tasks/datasets and model's generalizability to unseen events. ",On Unifying Misinformation Detection,2,"['đšHappy to share 2 papers accepted in #NAACL2021 to tackle misinformation đđ©\u200dâïžââ
\n\nWe propose an UnifiedM2 model that unifies multiple domains of misinfo to learn a richer representation that aids effective few-shot learning of unseen misinfo tasks.1/n\n\n ', 'Many thanks to all the co-authors đ @belindazli @sinongwang @pascalefung @gabema @scottyih @MadianKhabsa']",21,04,377
325,35,875287104288411648,1069568448,Steve Crawford,New paper by @solohery_astro on the arXiv today looking at the stellar to dynamical mass for LCBGs in clusters @solohery_astro These star-bursting galaxies at intermediate redshift show smaller dynamical to stellar mass ratio in galaxy clusters than the field ,https://arxiv.org/abs/1706.04534,"We investigate the stellar masses of the class of star-forming objects known as Luminous Compact Blue Galaxies (LCBGs) by studying a sample of galaxies in the distant cluster MS$~$0451.6-0305 at $z\approx0.54$ with ground-based multicolor imaging and spectroscopy. For a sample of 16 spectroscopically-confirmed cluster LCBGs (colour $B-V < 0.5$, surface brightness $\mu_B < 21$ mag arcsec$^{-2}$, and magnitude $M_B < -18.5$), we measure stellar masses by fitting spectral energy distribution (SED) models to multiband photometry, and compare with dynamical masses (determined from velocity dispersion between 10 $<$ $\sigma_v (\rm km~ s^{-1})$ $<$ 80), we previously obtained from their emission-line spectra. We compare two different stellar population models that measure stellar mass in star-bursting galaxies, indicating correlations between the stellar age, extinction, and stellar mass derived from the two different SED models. The stellar masses of cluster LCBGs are distributed similarly to those of field LCBGs, but the cluster LCBGs show lower dynamical-to-stellar mass ratios ($\rm M_{dyn}/M_{\ast} = 2.6$) than their field LCBG counterparts ($\rm M_{dyn}/M_{\ast}=4.8$), echoing trends noted previously in low-redshift dwarf elliptical galaxies. Within this limited sample, the specific star formation rate declines steeply with increasing mass, suggesting that these cluster LCBGs have undergone vigorous star formation. ","Star-forming Galaxies in Intermediate Redshift Clusters: Stellar vs.
Dynamical Masses of Luminous Compact Blue Galaxies",2,"['New paper by @solohery_astro on the arXiv today looking at the stellar to dynamical mass for LCBGs in clusters ', '@solohery_astro These star-bursting galaxies at intermediate redshift show smaller dynamical to stellar mass ratio in galaxy clusters than the field https://t.co/ZIKVp08kXM']",17,06,273
326,130,1466707506072334339,561899047,Aki Vehtari,"New review paper on prior elicitation: Prior knowledge elicitation: The past, present, and future with @MikkolaPetrus, @aloctavodia, @suyoghc, M Hartmann, @OriolAbril, @somewhaton, @henri_pesonen, J Corander, I, @samikaski, @paulbuerkner, A Klami; @FCAI_fi @JessicaHullman @MikkolaPetrus @aloctavodia @suyoghc @OriolAbril @somewhaton @henri_pesonen @samikaski @paulbuerkner @FCAI_fi @YeaseulKim Thanks! I was aware of your great work on visualization (and we're citing one), but clearly we missed/forgot the connection to prior elicitation. We'll add these, too",https://arxiv.org/abs/2112.01380,"Specification of the prior distribution for a Bayesian model is a central part of the Bayesian workflow for data analysis, but it is often difficult even for statistical experts. Prior elicitation transforms domain knowledge of various kinds into well-defined prior distributions, and offers a solution to the prior specification problem, in principle. In practice, however, we are still fairly far from having usable prior elicitation tools that could significantly influence the way we build probabilistic models in academia and industry. We lack elicitation methods that integrate well into the Bayesian workflow and perform elicitation efficiently in terms of costs of time and effort. We even lack a comprehensive theoretical framework for understanding different facets of the prior elicitation problem. Why are we not widely using prior elicitation? We analyze the state of the art by identifying a range of key aspects of prior knowledge elicitation, from properties of the modelling task and the nature of the priors to the form of interaction with the expert. The existing prior elicitation literature is reviewed and categorized in these terms. This allows recognizing under-studied directions in prior elicitation research, finally leading to a proposal of several new avenues to improve prior elicitation methodology. ","Prior knowledge elicitation: The past, present, and future",2,"['New review paper on prior elicitation: Prior knowledge elicitation: The past, present, and future with @MikkolaPetrus, @aloctavodia, @suyoghc, M Hartmann, @OriolAbril, @somewhaton, @henri_pesonen, J Corander, I, @samikaski, @paulbuerkner, A Klami; @FCAI_fi ', ""@JessicaHullman @MikkolaPetrus @aloctavodia @suyoghc @OriolAbril @somewhaton @henri_pesonen @samikaski @paulbuerkner @FCAI_fi @YeaseulKim Thanks! I was aware of your great work on visualization (and we're citing one), but clearly we missed/forgot the connection to prior elicitation. We'll add these, too""]",21,12,575
327,72,1373984522820194304,15612654,Alan Stern,"#PI_Daily Even w/all PI’s have to do, they find time to publish new scientific results from their missions! My just accepted @NewHorizons2015 paper on KBO Arrokoth’s bright neck (the ring-like feature at the joint between it’s 2 lobes) is now available at ",https://arxiv.org/abs/2103.10780,"One of the most striking and curious features of the small Kuiper Belt Object (KB), Arrokoth, explored by New Horizons, is the bright, annular neck it exhibits at the junction between its two lobes. Here we summarize past reported findings regarding the properties of this feature and report new findings regarding its dimensions, reflectivity and color, shape profile, and its lack of identifiable craters. We conclude by enumerating possible origin scenarios for this unusual feature. New results include a new measurement of the observed neck area of 8+/-1.5 km2, a total neck surface area of 32 km2, a 12.5:1 ratio of neck circumference to height, a normal reflectance histogram of the observed neck, and the fact that no significant (i.e., >2 sigma) color units were identified, meaning the neck's color is generally spatially uniform at the 1.5 km/pixel scale of the best color images. Although several origin hypotheses for the bright material in the neck are briefly discussed, none can be conclusively demonstrated to be the actual origin mechanism at this time; some future tests are identified. ","Some New Results and Perspectives Regarding the Kuiper Belt Object
Arrokoth's Remarkable, Bright Neck",1,"['#PI_Daily Even w/all PI’s have to do, they find time to publish new scientific results from their missions! My just accepted @NewHorizons2015 paper on KBO Arrokoth’s bright neck (the ring-like feature at the joint between it’s 2 lobes) is now available at ']",21,03,269
328,8,1112737652602601477,70874545,Josh Lothringer,"New paper on the arXiv today: ! We explore the wacky world of ultra-hot Jupiters further by looking at how they change when around different type stars. This should be important since ultra-hot Jupiters are the most highly irradiated gaseous planets and are often around earlier-type host stars (e.g., KELT-9, the host star of the hottest Jovian exoplanet, is an A0, emitting ~50% of its flux in the UV). We find that planets of the same Teq around earlier-type host stars will have stronger, steeper inversions b/c of more short-wavelength (<0.5 mu) absorption. Additionally, treating the star in NLTE also affects the modeled planet b/c of metal line depths. This trend should be readily detectable by future observations. Such observations will see more muted transit spectra in planets around earlier-type host stars (b/c more H-) and larger brightness temperature variations in emission spectra. Lastly, we took a closer look at what is making up the absorption at short-wavelengths, responsible for heating the atmosphere. Here we show that Fe, Fe+, Mg, and C are responsible for absorbing all the irradiation. Na, Ca, and K are important deeper. If you're in Tucson, you can hear me talk about this *today* at the Theoretical Astrophysics Program graduate research prize colloquium at 3:30 in Kuiper 312!",https://arxiv.org/abs/1903.12183,"Ultra-hot Jupiters are the most highly irradiated gas giant planets, with equilibrium temperatures from 2000 to over 4000 K. Ultra-hot Jupiters are amenable to characterization due to their high temperatures, inflated radii, and short periods, but their atmospheres are atypical for planets in that the photosphere possesses large concentrations of atoms and ions relative to molecules. Here we evaluate how the atmospheres of these planets respond to irradiation by stars of different spectral type. We find that ultra-hot Jupiters exhibit temperature inversions that are sensitive to the spectral type of the host star. The slope and temperature range across the inversion both increase as the host star effective temperature increases due to enhanced absorption at short wavelengths and low pressures. The steep temperature inversions in ultra-hot Jupiters around hot stars result in increased thermal dissociation and ionization compared to similar planets around cooler stars. The resulting increase in H$^{-}$ opacity leads to a transit spectrum that has muted absorption features. The emission spectrum, however, exhibits a large contrast in brightness temperature, a signature that will be detectable with both secondary eclipse observations and high-dispersion spectroscopy. We also find that the departures from local thermodynamic equilibrium in the stellar atmosphere can affect the degree of heating caused by atomic metals in the planet's upper atmosphere. Additionally, we further quantify the significance of heating by different opacity sources in ultra-hot Jupiter atmospheres. ","The Influence of Host Star Spectral Type on Ultra-Hot Jupiter
Atmospheres",6,"['New paper on the arXiv today: ! We explore the wacky world of ultra-hot Jupiters further by looking at how they change when around different type stars.', 'This should be important since ultra-hot Jupiters are the most highly irradiated gaseous planets and are often around earlier-type host stars (e.g., KELT-9, the host star of the hottest Jovian exoplanet, is an A0, emitting ~50% of its flux in the UV).', 'We find that planets of the same Teq around earlier-type host stars will have stronger, steeper inversions b/c of more short-wavelength (<0.5 mu) absorption. Additionally, treating the star in NLTE also affects the modeled planet b/c of metal line depths. https://t.co/TXYtKd9OXb', 'This trend should be readily detectable by future observations. Such observations will see more muted transit spectra in planets around earlier-type host stars (b/c more H-) and larger brightness temperature variations in emission spectra. https://t.co/JPQYtRc8Jd', 'Lastly, we took a closer look at what is making up the absorption at short-wavelengths, responsible for heating the atmosphere. Here we show that Fe, Fe+, Mg, and C are responsible for absorbing all the irradiation. Na, Ca, and K are important deeper. https://t.co/iBb0VveBKw', ""If you're in Tucson, you can hear me talk about this *today* at the Theoretical Astrophysics Program graduate research prize colloquium at 3:30 in Kuiper 312!""]",19,03,1341
329,228,1510929114986909696,261865146,Dr Sofia Qvarfort,New article on the @arxiv! We show how time-dependent modulations of the trapping potential can lead to cooling and propose a measurement protocol for cooling down arbitrary quantum states to near the ground-state: Great collab with Sreenath Manikandan! ,https://arxiv.org/abs/2204.00476,We propose a cooling protocol that uses phase-preserving quantum measurements and phase-dependent modulations of the trapping potential at parametric resonance to cool a quantum oscillator to near its quantum-mechanical ground-state. The sequential measurements and feedback provide a definite phase reference and stabilize the oscillator in the long-time limit. The protocol is robust against moderate amounts of dissipation and phase errors in the feedback loop. Our work has implications for the cooling of mechanical resonators and the integration of quantum refrigerators into quantum circuits. ,"Cooling through parametric modulations and phase-preserving quantum
measurements",1,['New article on the @arxiv! We show how time-dependent modulations of the trapping potential can lead to cooling and propose a measurement protocol for cooling down arbitrary quantum states to near the ground-state:\n\nGreat collab with Sreenath Manikandan! '],22,04,267
330,142,1237327422044979200,2656744506,Kaspar Märtens đđ,"Excited to share our #AISTATS2020 paper with @cwcyau, where we propose the BasisVAE: a VAE framework for clustering features (allowing for scale and/or translation invariance). Paper: PyTorch implementation: Our goal is to have a joint model for 1) dimensionality reduction (inferring latent z) and 2) clustering features. Specifically, based on the mappings z -> data, we want to find features that have a similar shape (scale invariance) and those that are shifted (translation inv). We propose to achieve this within the VAE framework. Specifically, we modify the decoder network by embedding a probabilistic clustering prior within the last layer of the decoder. This lets us learn a one-hot basis function representation as part of the decoder network. Training illustrated on a toy example: (A) scale-invariant BasisVAE on the left, and (B) scale-and-translation-invariant on the right. Due to the scalability of VAEs, we can use this on large-scale data sets, e.g. single-cell gene expression. On a mouse spermatogenesis data set (Ernst et al 2019) we show how BasisVAE lets us jointly learn a one-dimensional representation and cluster genes with similar shapes. ",https://arxiv.org/abs/2003.03462,"Variational Autoencoders (VAEs) provide a flexible and scalable framework for non-linear dimensionality reduction. However, in application domains such as genomics where data sets are typically tabular and high-dimensional, a black-box approach to dimensionality reduction does not provide sufficient insights. Common data analysis workflows additionally use clustering techniques to identify groups of similar features. This usually leads to a two-stage process, however, it would be desirable to construct a joint modelling framework for simultaneous dimensionality reduction and clustering of features. In this paper, we propose to achieve this through the BasisVAE: a combination of the VAE and a probabilistic clustering prior, which lets us learn a one-hot basis function representation as part of the decoder network. Furthermore, for scenarios where not all features are aligned, we develop an extension to handle translation-invariant basis functions. We show how a collapsed variational inference scheme leads to scalable and efficient inference for BasisVAE, demonstrated on various toy examples as well as on single-cell gene expression data. ","BasisVAE: Translation-invariant feature-level clustering with
Variational Autoencoders",5,"['Excited to share our #AISTATS2020 paper with @cwcyau, where we propose the BasisVAE: a VAE framework for\xa0clustering features (allowing for scale and/or translation invariance).\xa0\n\nPaper: \n\nPyTorch implementation: ', 'Our goal is to have a joint model for \n1) dimensionality reduction (inferring latent z) and \n2) clustering features. \nSpecifically, based on the mappings z -> data, we want to find features that have a similar shape (scale invariance) and those that are shifted (translation inv). https://t.co/zrHD7O49T4', 'We propose to achieve this within the VAE framework. Specifically, we modify the decoder network by embedding a probabilistic clustering prior within the last layer of the decoder. This lets us learn a one-hot basis function representation as part of the decoder network. https://t.co/p7Juamjh4S', 'Training illustrated on a toy example: (A) scale-invariant BasisVAE on the left, and (B) scale-and-translation-invariant on the right. https://t.co/U8hUtJIf2c', 'Due to the scalability of VAEs, we can use this on large-scale data sets, e.g. single-cell gene expression. On a mouse spermatogenesis data set (Ernst et al 2019) we show how BasisVAE lets us jointly learn a one-dimensional representation and cluster genes with similar shapes. https://t.co/75hUQVOqwk']",20,03,1223
331,208,1412662575435636736,217771939,Michael Szell,"đšNew Preprint! Growing Urban Bicycle Networks Explore at We study the limitations of growing đČđžïž Main finding: Cities must invest 1) with right growth strategy and 2) *persistently*, to overcome a critical mass. 𧔠Most cities on the planet have no infrastructure for safe cycling. Therefore we grow bike networks from scratch, using only 1) the street network, 2) arbitrary points of interest. Minimal data needed so this works everywhere. We find a ""phase transition"" at a critical threshold: During growth, some quality metrics *decrease* until a critical point is reached. This point depends on the growth strategy. Unfortunately, cities follow the *worst* growth strategy: random. It wastes investments with bad connectedness. Also invites objections: ""We already built many bike tracks but nobody is using them, so why build more?"" It's not a network's length that matters but how you grow it. Comparing our synthetic growth with the well developed Copenhagen network reveals 80% overlap - we recreate reality! This could find ""missing links"" (in future work..) However, as of now, this is NOT concrete recommendation for new infrastructure - only after refinements. How does this bike network growth affect the car network? Mostly by decreasing its directness. Is this good or bad? -How we decide such questions will determine our quality of life in cities, and generally the fate of the planet... (img by @TUMInitiative) Research with S. Mimar, T. Perlman, @Ghoshal_G, @robysinatra. Paper: Code: developed by my awesome MSc students Cecilia L. Kolding Andersen and Morten Lynghede. Explore 62 cities & Download 1000+ videos! happy@feedback! @StreetsblogUSA @gboeing @CyclingEmbassy @giulio_mattioli @EuCyclistsFed @TheWarOnCars @robinlovelace @martikagv @andershartmann @cyklistforbund @BaldwinMatthew_ @npalomin @CyclingScience1 @Sust_Mobility @VCOE_AT @NACTO @openstreetmap @LitmanVTPI @altafieldnotes",https://arxiv.org/abs/2107.02185,"Cycling is a promising solution to unsustainable urban transport systems. However, prevailing bicycle network development follows a slow and piecewise process, without taking into account the structural complexity of transportation networks. Here we explore systematically the topological limitations of urban bicycle network development. For 62 cities we study different variations of growing a synthetic bicycle network between an arbitrary set of points routed on the urban street network. We find initially decreasing returns on investment until a critical threshold, posing fundamental consequences to sustainable urban planning: Cities must invest into bicycle networks with the right growth strategy, and persistently, to surpass a critical mass. We also find pronounced overlaps of synthetically grown networks in cities with well-developed existing bicycle networks, showing that our model reflects reality. Growing networks from scratch makes our approach a generally applicable starting point for sustainable urban bicycle network planning with minimal data requirements. ",Growing Urban Bicycle Networks,8,"['đšNew Preprint! Growing Urban Bicycle Networks\n\nExplore at \n\nWe study the limitations of growing đČđžïž \nMain finding: Cities must invest 1) with right growth strategy and 2) *persistently*, to overcome a critical mass. 𧔠', 'Most cities on the planet have no infrastructure for safe cycling. Therefore we grow bike networks from scratch, using only 1) the street network, 2) arbitrary points of interest. 
Minimal data needed so this works everywhere. https://t.co/EhhokHlunS', 'We find a ""phase transition"" at a critical threshold: During growth, some quality metrics *decrease* until a critical point is reached. This point depends on the growth strategy. https://t.co/xNPCI1221E', 'Unfortunately, cities follow the *worst* growth strategy: random. It wastes investments with bad connectedness. Also invites objections: ""We already built many bike tracks but nobody is using them, so why build more?"" \nIt\'s not a network\'s length that matters but how you grow it. https://t.co/TDrRRMMj5G', 'Comparing our synthetic growth with the well developed Copenhagen network reveals 80% overlap - we recreate reality! This could find ""missing links"" (in future work..) However, as of now, this is NOT concrete recommendation for new infrastructure - only after refinements. https://t.co/0wZisFHKmA', 'How does this bike network growth affect the car network? Mostly by decreasing its directness. \nIs this good or bad? -How we decide such questions will determine our quality of life in cities, and generally the fate of the planet...\n\n(img by @TUMInitiative) https://t.co/k12nCn3wBk', 'Research with S. Mimar, T. Perlman, @Ghoshal_G, @robysinatra. \nPaper: https://t.co/GYXk4CrXyZ\nCode: https://t.co/x1TLn1lgud\n\nhttps://t.co/xOIplgCcUO developed by my awesome MSc students Cecilia L. Kolding Andersen and Morten Lynghede. Explore 62 cities & Download 1000+ videos! https://t.co/6EbOZvRuMu', 'happy@feedback! @StreetsblogUSA @gboeing @CyclingEmbassy @giulio_mattioli @EuCyclistsFed @TheWarOnCars @robinlovelace @martikagv @andershartmann @cyklistforbund @BaldwinMatthew_ @npalomin \n@CyclingScience1 @Sust_Mobility @VCOE_AT @NACTO @openstreetmap @LitmanVTPI @altafieldnotes']",21,07,1994
332,170,1268906731003957248,237918251,Wei Xu,"Our new work on automatically extracting COVID-19 events from Twitter. We annotated 7,500 tweets annotated with event QA/slot-filling information. (e.g., Who tested positive? Who is promoting a cure for coronavirus?) #nlproc Paper: @srchvrs Yes, these are simple baselines. BERT is not trained on Twitter data. So a lot of room for people to explore. Exact span match is also tricky. @srchvrs There are a lot of nominals and nested spans in the dataset. I would expect larger training data will improve F1 quite a bit, having better Twitter-based BERT may help some too. @alvations We probably can’t share the user data. We are only going to release the Tweet IDs and event annotations. @yanaiela @yoavgo We tried BERTweet. It doesn’t help much, but it was trained on tweets preprocessed by NLTK TweeTokenizer before BPE. We also tried , which was trained on smaller amount of tweets. @srchvrs We tried this one. It doesn’t help much, but it was trained on tweets preprocessed by NLTK TweeTokenizer before BPE. We also tried , which was trained on smaller amount of tweets. I don’t think we can rule out Twitter-specific BERT would not help just yet. @srchvrs The F-scores reported in the paper are on extract span matches. So it’s kind of an overly strict metric for this task. @yoavgo Missed ""close_contact"" for tested positive: ""Jazz star Donovan Mitchell has tested positive for coronavirus, league sources tell ESPN. Jazz players privately say that <E> Rudy Gobert </E> had been careless and borderline caviler in the locker room touching other players ..."" @yoavgo Over-prediction of the ""employer"" for tested positive: ""<E> Lee Health </E> confirmed Friday night that a patient who tested positive for Covid-19 after arriving at Gulf Coast Medical Center later died."" @yoavgo Mismatched span for the ""employer"" for tested positive: ""Brandon from Crown the <E> Empire </E> tested positive for Coronavirus"" (Crown the Empire is a band.) @yoavgo sounds interesting. haven't tried this.",https://arxiv.org/abs/2006.02567,"In this paper, we present a manually annotated corpus of 10,000 tweets containing public reports of five COVID-19 events, including positive and negative tests, deaths, denied access to testing, claimed cures and preventions. We designed slot-filling questions for each event type and annotated a total of 31 fine-grained slots, such as the location of events, recent travel, and close contacts. We show that our corpus can support fine-tuning BERT-based classifiers to automatically extract publicly reported events and help track the spread of a new disease. We also demonstrate that, by aggregating events extracted from millions of tweets, we achieve surprisingly high precision when answering complex queries, such as ""Which organizations have employees that tested positive in Philadelphia?"" We will release our corpus (with user-information removed), automatic extraction models, and the corresponding knowledge base to the research community. ",Extracting a Knowledge Base of COVID-19 Events from Social Media,11,"['Our new work on automatically extracting COVID-19 events from Twitter. We annotated 7,500 tweets annotated with event QA/slot-filling information. (e.g., Who tested positive? Who is promoting a cure for coronavirus?) #nlproc\n\nPaper: ', '@srchvrs Yes, these are simple baselines. BERT is not trained on Twitter data. So a lot of room for people to explore. 
Exact span match is also tricky.', '@srchvrs There are a lot of nominals and nested spans in the dataset. I would expect larger training data will improve F1 quite a bit, having better Twitter-based BERT may help some too. https://t.co/MzL7PV2ksL', '@alvations We probably can’t share the user data. We are only going to release the Tweet IDs and event annotations.', '@yanaiela @yoavgo We tried BERTweet. It doesn’t help much, but it was trained on tweets preprocessed by NLTK TweeTokenizer before BPE. We also tried https://t.co/Iw7kBdxUJo, which was trained on smaller amount of tweets.', '@srchvrs We tried this one. It doesn’t help much, but it was trained on tweets preprocessed by NLTK TweeTokenizer before BPE. We also tried https://t.co/Iw7kBdxUJo, which was trained on smaller amount of tweets. I don’t think we can rule out Twitter-specific BERT would not help just yet.', '@srchvrs The F-scores reported in the paper are on extract span matches. So it’s kind of an overly strict metric for this task.', '@yoavgo Missed ""close_contact"" for tested positive:\n\n""Jazz star Donovan Mitchell has tested positive for coronavirus, league sources tell ESPN. Jazz players privately say that <E> Rudy Gobert </E> had been careless and borderline caviler in the locker room touching other players ...""', '@yoavgo Over-prediction of the ""employer"" for tested positive:\n\n""<E> Lee Health </E> confirmed Friday night that a patient who tested positive for Covid-19 after arriving at Gulf Coast Medical Center later died.""', '@yoavgo Mismatched span for the ""employer"" for tested positive:\n\n""Brandon from Crown the <E> Empire </E> tested positive for Coronavirus""\n\n(Crown the Empire is a band.)', ""@yoavgo sounds interesting. haven't tried this.""]",20,06,2058
333,17,765451802213027840,17055506,Martin Kleppmann,"New paper by @arberesford and me: “A Conflict-Free Replicated JSON Datatype” – we figured out a JSON CRDT @bensummers @arberesford It assumes that every edit operation is recorded, so you can’t quite diff two arbitrary documents. @drsm79 If I read CouchDB docs right, you get ‘latest’ version by default, have to specifically ask for multi-version if you want to merge? @samstokes @arberesford Yes! Actually I first tried to make Avro work, but the schema checking made it even more complex. Working on it. @tsantero @arberesford Work in progress, will post an update once the code is a bit more respectable. @hpgrahsl @arberesford Has some similarity. Riak’s CRDTs are state-based and don’t support ordered lists, only maps. Ours is op-based. @rymohr @arberesford Fair point, it’s not ideal, but it satisfies the conventional definition of ‘conflict-free’ as in CRDTs. @rymohr @arberesford OT suffers from exactly the same issue – with concurrent assignment it must either preserve both values or use LWW @rymohr @arberesford With regard to individual field assignment, I believe so. They resolve conflicts on maps, lists, and collaborative str.",http://arxiv.org/abs/1608.03960,"Many applications model their data in a general-purpose storage format such as JSON. This data structure is modified by the application as a result of user input. Such modifications are well understood if performed sequentially on a single copy of the data, but if the data is replicated and modified concurrently on multiple devices, it is unclear what the semantics should be. In this paper we present an algorithm and formal semantics for a JSON data structure that automatically resolves concurrent modifications such that no updates are lost, and such that all replicas converge towards the same state (a conflict-free replicated datatype or CRDT). It supports arbitrarily nested list and map types, which can be modified by insertion, deletion and assignment. The algorithm performs all merging client-side and does not depend on ordering guarantees from the network, making it suitable for deployment on mobile devices with poor network connectivity, in peer-to-peer networks, and in messaging systems with end-to-end encryption. ",A Conflict-Free Replicated JSON Datatype,9,"['New paper by @arberesford and me: “A Conflict-Free Replicated JSON Datatype” – we figured out a JSON CRDT ', '@bensummers @arberesford It assumes that every edit operation is recorded, so you can’t quite diff two arbitrary documents.', '@drsm79 If I read CouchDB docs right, you get ‘latest’ version by default, have to specifically ask for multi-version if you want to merge?', '@samstokes @arberesford Yes! Actually I first tried to make Avro work, but the schema checking made it even more complex. Working on it.', '@tsantero @arberesford Work in progress, will post an update once the code is a bit more respectable.', '@hpgrahsl @arberesford Has some similarity. Riak’s CRDTs are state-based and don’t support ordered lists, only maps. Ours is op-based.', '@rymohr @arberesford Fair point, it’s not ideal, but it satisfies the conventional definition of ‘conflict-free’ as in CRDTs.', '@rymohr @arberesford OT suffers from exactly the same issue – with concurrent assignment it must either preserve both values or use LWW', '@rymohr @arberesford With regard to individual field assignment, I believe so. They resolve conflicts on maps, lists, and collaborative str.']",16,08,1153
334,305,1318925781901324299,88806960,Dr. Vivienne Baldassare,"So excited to be part of the Young Supernova Experiment- a new time domain survey using the Pan-STARRS telescopes. While I'm excited for all the AGN science, we'll also find lots of young supernovae (duh), TDEs and other transients. Stay tuned đ„ł You can also learn more on our website: ",https://arxiv.org/abs/2010.09724,"Time domain science has undergone a revolution over the past decade, with tens of thousands of new supernovae (SNe) discovered each year. However, several observational domains, including SNe within days or hours of explosion and faint, red transients, are just beginning to be explored. Here, we present the Young Supernova Experiment (YSE), a novel optical time-domain survey on the Pan-STARRS telescopes. Our survey is designed to obtain well-sampled $griz$ light curves for thousands of transient events up to $z \approx 0.2$. This large sample of transients with 4-band light curves will lay the foundation for the Vera C. Rubin Observatory and the Nancy Grace Roman Space Telescope, providing a critical training set in similar filters and a well-calibrated low-redshift anchor of cosmologically useful SNe Ia to benefit dark energy science. As the name suggests, YSE complements and extends other ongoing time-domain surveys by discovering fast-rising SNe within a few hours to days of explosion. YSE is the only current four-band time-domain survey and is able to discover transients as faint $\sim$21.5 mag in $gri$ and $\sim$20.5 mag in $z$, depths that allow us to probe the earliest epochs of stellar explosions. YSE is currently observing approximately 750 square degrees of sky every three days and we plan to increase the area to 1500 square degrees in the near future. When operating at full capacity, survey simulations show that YSE will find $\sim$5000 new SNe per year and at least two SNe within three days of explosion per month. To date, YSE has discovered or observed 8.3% of the transient candidates reported to the International Astronomical Union in 2020. We present an overview of YSE, including science goals, survey characteristics and a summary of our transient discoveries to date. ","The Young Supernova Experiment: Survey Goals, Overview, and Operations",2,"[""So excited to be part of the Young Supernova Experiment- a new time domain survey using the Pan-STARRS telescopes. While I'm excited for all the AGN science, we'll also find lots of young supernovae (duh), TDEs and other transients. Stay tuned đ„ł "", 'You can also learn more on our website: https://t.co/W7H4yFWoDI']",20,10,299
335,64,1217009522046357504,80569756,Andreas Brunthaler,"A new paper about the proper motion of the supermassive black hole at the center for our Galaxy: The observations span 18 years now and limit the motion to < 1 km/s. This corresponds to a lower limit of the mass of 1 Million Solar masses (Msol) (1/2) Combined with the size from EHT observations, this gives a density that is within a factor of 3 of the of the General Relativity limit for a black hole. Furthermore, intermediate mass black holes more massive than 30,000 Msol between 0.003 and 0.1 pc are excluded. (2/2)",http://arxiv.org/abs/2001.04386,"We report measurements with the Very Long Baseline Array of the proper motion of Sgr A* relative to two extragalactic radio sources spanning 18 years. The apparent motion of Sgr A* is -6.411 +/- 0.008 mas/yr along the Galactic plane and -0.219 +/- 0.007 mas/yr toward the North Galactic Pole. This apparent motion can almost entirely be attributed to the effects of the Sun's orbit about the Galactic center. Removing these effects yields residuals of -0.58 +/- 2.23 km/s in the direction of Galactic rotation and -0.85 +/- 0.75 km/s toward the North Galactic Pole. A maximum-likelihood analysis of the motion, both in the Galactic plane and perpendicular to it, expected for a massive object within the Galactic center stellar cluster indicates that the radiative source, Sgr A*, contains more than about 25% of the gravitational mass of 4 x 10^6 Msun deduced from stellar orbits. The intrinsic size of Sgr A* is comparable to its Schwarzschild radius, and the implied mass density of >4 x 10^23 Msun/pc^-3 very close to that expected for a black hole, providing overwhelming evidence that it is indeed a super-massive black hole. Finally, the existence of ""intermediate-mass"" black holes more massive than 3 x 10^4 Msun between approximately 0.003 and 0.1 pc from Sgr A*are excluded. ","The Proper Motion of Sagittarius A*: III. The Case for a Supermassive
Black Hole",2,"['A new paper about the proper motion of the supermassive black hole at the center for our Galaxy: \n\n\n\nThe observations span 18 years now and limit the motion to < 1 km/s. This corresponds to a lower limit of the mass of 1 Million Solar masses (Msol) (1/2)', 'Combined with the size from EHT observations, this gives a density that is within a factor of 3 of the of the General Relativity limit for a black hole. \nFurthermore, intermediate mass black holes more massive than 30,000 Msol between 0.003 and 0.1 pc are excluded. (2/2)']",20,01,532
336,31,1309522438032486402,768092862,Thomas G. Dietterich,"New paper: @lukasruff led our team in this attempt to unify various perspectives on deep anomaly detection within a probabilistic framework. Jacob Kauffmann, @robvdm , Grégoire Montavon, @WojciechSamek , @KloftMarius , and Klaus-Robert Müller. ""A Unifying Review of Deep and Shallow Anomaly Detection"" ",https://arxiv.org/abs/2009.11732,"Deep learning approaches to anomaly detection have recently improved the state of the art in detection performance on complex datasets such as large collections of images or text. These results have sparked a renewed interest in the anomaly detection problem and led to the introduction of a great variety of new methods. With the emergence of numerous such methods, including approaches based on generative models, one-class classification, and reconstruction, there is a growing need to bring methods of this field into a systematic and unified perspective. In this review we aim to identify the common underlying principles as well as the assumptions that are often made implicitly by various methods. In particular, we draw connections between classic 'shallow' and novel deep approaches and show how this relation might cross-fertilize or extend both directions. We further provide an empirical assessment of major existing methods that is enriched by the use of recent explainability techniques, and present specific worked-through examples together with practical advice. Finally, we outline critical open challenges and identify specific paths for future research in anomaly detection. ",A Unifying Review of Deep and Shallow Anomaly Detection,2,"['New paper: \n@lukasruff led our team in this attempt to unify various perspectives on deep anomaly detection within a probabilistic framework. Jacob Kauffmann, @robvdm\n, Grégoire Montavon, @WojciechSamek\n, @KloftMarius\n, and Klaus-Robert Müller.', '""A Unifying Review of Deep and Shallow Anomaly Detection"" https://t.co/BO6xu6C1uT https://t.co/dpTL77UPIG']",20,09,322
337,72,1115604309993857026,7773042,Yasser Souri,"""Weakly Supervised Action Segmentation Using Mutual Consistency"" A new paper by me, @MohsenFyz and our advisor, Juergen Gall. We propose a new approach for action segmentation using transcripts as the weak supervision. Our network produces two redundant representations for action segmentation (frame-level and segment-level representations) and during training requires them to match each other using a new loss function that we term MuCon. We also have a transcript prediction loss (as in seq2seq) The approach is fast during training: doesn't require Viterbi or pseudo ground truth generation like many of the previous works. It is also differentiable, so no heuristic updates of parameters, only SGD. At test time we predict both representations and fuse them for smoothness. Unlike current state-of-the-art methods that are truly only able to perform action alignment at test time and mimic action segmentation by choosing one of the training transcripts, we are able to perform action segmentation at test time. Our work is currently under review. Code is ""coming soon""! We would really appreciate your feedback. souri@iai.uni-bonn.de or tweet at me.",https://arxiv.org/abs/1904.03116,"Action segmentation is the task of predicting the actions for each frame of a video. As obtaining the full annotation of videos for action segmentation is expensive, weakly supervised approaches that can learn only from transcripts are appealing. In this paper, we propose a novel end-to-end approach for weakly supervised action segmentation based on a two-branch neural network. The two branches of our network predict two redundant but different representations for action segmentation and we propose a novel mutual consistency (MuCon) loss that enforces the consistency of the two redundant representations. Using the MuCon loss together with a loss for transcript prediction, our proposed approach achieves the accuracy of state-of-the-art approaches while being $14$ times faster to train and $20$ times faster during inference. The MuCon loss proves beneficial even in the fully supervised setting. ",Fast Weakly Supervised Action Segmentation Using Mutual Consistency,5,"['""Weakly Supervised Action Segmentation Using Mutual Consistency""\nA new paper by me, @MohsenFyz and our advisor, Juergen Gall.\n\nWe propose a new approach for action segmentation using transcripts as the weak supervision. ', 'Our network produces two redundant representations for action segmentation (frame-level and segment-level representations) and during training requires them to match each other using a new loss function that we term MuCon.\nWe also have a transcript prediction loss (as in seq2seq)', ""The approach is fast during training: doesn't require Viterbi or pseudo ground truth generation like many of the previous works.\nIt is also differentiable, so no heuristic updates of parameters, only SGD.\nAt test time we predict both representations and fuse them for smoothness."", 'Unlike current state-of-the-art methods that are truly only able to perform action alignment at test time and mimic action segmentation by choosing one of the training transcripts, we are able to perform action segmentation at test time.', 'Our work is currently under review. Code is ""coming soon""!\nWe would really appreciate your feedback.\nsouri@iai.uni-bonn.de or tweet at me.']",19,04,1170
338,23,1057077610549821440,3301643341,Roger Grosse,"Think you understand weight decay? Think again. We found three distinct mechanisms by which it achieves a regularization effect, depending on the architecture and optimization algorithm. New paper w/ @Guodzh, Chaoqi Wang, and Bowen Xu. It's tempting in deep learning to just do what works, but it's important to sometimes dig deeper to track down what's really happening. In this case, the answer often had to do with the nuts and bolts of the scales of parameters and how this interacts with optimization hyperparameters. Not the stuff we usually write papers about, but it seems to matter a lot. @ogrisel @Guodzh Each K-FAC update is about 50% more expensive than SGD or Adam. So you can mentally adjust the plots based on that. But note that our hyperparameters were chosen based on final validation accuracy, so they might not be optimal for earlier in training. @AmirRosenfeld @Guodzh Yeah, I guess one way to think about it is that learning rate schedules have a huge impact on final performance, and with BN, the WD parameter gives you a knob with which to tune the effective learning rate decay schedule.",https://arxiv.org/abs/1810.12281,"Weight decay is one of the standard tricks in the neural network toolbox, but the reasons for its regularization effect are poorly understood, and recent results have cast doubt on the traditional interpretation in terms of $L_2$ regularization. Literal weight decay has been shown to outperform $L_2$ regularization for optimizers for which they differ. We empirically investigate weight decay for three optimization algorithms (SGD, Adam, and K-FAC) and a variety of network architectures. We identify three distinct mechanisms by which weight decay exerts a regularization effect, depending on the particular optimization algorithm and architecture: (1) increasing the effective learning rate, (2) approximately regularizing the input-output Jacobian norm, and (3) reducing the effective damping coefficient for second-order optimization. Our results provide insight into how to improve the regularization of neural networks. ",Three Mechanisms of Weight Decay Regularization,5,"['Think you understand weight decay? Think again. We found three distinct mechanisms by which it achieves a regularization effect, depending on the architecture and optimization algorithm. New paper w/ @Guodzh, Chaoqi Wang, and Bowen Xu.\n\n', ""It's tempting in deep learning to just do what works, but it's important to sometimes dig deeper to track down what's really happening."", 'In this case, the answer often had to do with the nuts and bolts of the scales of parameters and how this interacts with optimization hyperparameters. Not the stuff we usually write papers about, but it seems to matter a lot.', '@ogrisel @Guodzh Each K-FAC update is about 50% more expensive than SGD or Adam. So you can mentally adjust the plots based on that. But note that our hyperparameters were chosen based on final validation accuracy, so they might not be optimal for earlier in training.', '@AmirRosenfeld @Guodzh Yeah, I guess one way to think about it is that learning rate schedules have a huge impact on final performance, and with BN, the WD parameter gives you a knob with which to tune the effective learning rate decay schedule.']",18,10,1119
339,100,1201454333180698624,1200003750162829312,Aarynn Carter,"Hey Twitter! Check out arXiv () for my first paper which shows off the latest @NASA_TESS, @NasaHubble and @ESO VLT data for the hot Jupiter exoplanet WASP-6b. We find Na, K, and H2O in its atmosphere and account for some pesky stellar activity effects. @AstroJake Thanks Jake!",https://arxiv.org/abs/1911.12628,"We present new observations of the transmission spectrum of the hot Jupiter WASP-6b both from the ground with the Very Large Telescope (VLT) FOcal Reducer and Spectrograph (FORS2) from 0.45-0.83 $\mu$m, and space with the Transiting Exoplanet Survey Satellite (TESS) from 0.6-1.0 $\mu$m and the Hubble Space Telescope (HST) Wide Field Camera 3 from 1.12-1.65 $\mu$m. Archival data from the HST Space Telescope Imaging Spectrograph (STIS) and Spitzer is also reanalysed on a common Gaussian process framework, of which the STIS data show a good overall agreement with the overlapping FORS2 data. We also explore the effects of stellar heterogeneity on our observations and its resulting implications towards determining the atmospheric characteristics of WASP-6b. Independent of our assumptions for the level of stellar heterogeneity we detect Na I, K I and H$_2$O absorption features and constrain the elemental oxygen abundance to a value of [O/H] $\simeq -0.9\pm0.3$ relative to solar. In contrast, we find that the stellar heterogeneity correction can have significant effects on the retrieved distributions of the [Na/H] and [K/H] abundances, primarily through its degeneracy with the sloping optical opacity of scattering haze species within the atmosphere. Our results also show that despite this presence of haze, WASP-6b remains a favourable object for future atmospheric characterisation with upcoming missions such as the James Webb Space Telescope. ","Detection of Na, K and H$_2$O in the hazy atmosphere of WASP-6b",2,"['Hey Twitter! Check out arXiv () for my first paper which shows off the latest @NASA_TESS, @NasaHubble and @ESO VLT data for the hot Jupiter exoplanet WASP-6b. We find Na, K, and H2O in its atmosphere and account for some pesky stellar activity effects. ', '@AstroJake Thanks Jake!']",19,11,289
340,0,1161661512156684289,1158385581476515840,Allyson Ettinger,New paper up! “What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models” . Tests that have elicited informative patterns in the brain prove quite useful for clarifying strengths and limitations of BERT pre-training.,https://arxiv.org/abs/1907.13528,"Pre-training by language modeling has become a popular and successful approach to NLP tasks, but we have yet to understand exactly what linguistic capacities these pre-training processes confer upon models. In this paper we introduce a suite of diagnostics drawn from human language experiments, which allow us to ask targeted questions about the information used by language models for generating predictions in context. As a case study, we apply these diagnostics to the popular BERT model, finding that it can generally distinguish good from bad completions involving shared category or role reversal, albeit with less sensitivity than humans, and it robustly retrieves noun hypernyms, but it struggles with challenging inferences and role-based event prediction -- and in particular, it shows clear insensitivity to the contextual impacts of negation. ","What BERT is not: Lessons from a new suite of psycholinguistic
diagnostics for language models",1,['New paper up! “What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models” . Tests that have elicited informative patterns in the brain prove quite useful for clarifying strengths and limitations of BERT pre-training.'],19,07,259
341,12,1399879536947331074,363058714,Niladri Chatterji,"New paper with @aldopacchiano, Peter Bartlett and Michael Jordan called ""On the Theory of Reinforcement Learning with Once-per-Episode Feedback"": . It was very interesting for us to think about non-Markovian reward models for reinforcement learning (1/2.) Very eager to hear any comments and criticism about our setting and results! (2/2.)",http://arxiv.org/abs/2105.14363,"We study a theory of reinforcement learning (RL) in which the learner receives binary feedback only once at the end of an episode. While this is an extreme test case for theory, it is also arguably more representative of real-world applications than the traditional requirement in RL practice that the learner receive feedback at every time step. Indeed, in many real-world applications of reinforcement learning, such as self-driving cars and robotics, it is easier to evaluate whether a learner's complete trajectory was either ""good"" or ""bad,"" but harder to provide a reward signal at each step. To show that learning is possible in this more challenging setting, we study the case where trajectory labels are generated by an unknown parametric model, and provide a statistically and computationally efficient algorithm that achieves sub-linear regret. ",On the Theory of Reinforcement Learning with Once-per-Episode Feedback,2,"['New paper with @aldopacchiano, Peter Bartlett and Michael Jordan called ""On the Theory of Reinforcement Learning with Once-per-Episode Feedback"": . It was very interesting for us to think about non-Markovian reward models for reinforcement learning (1/2.)', 'Very eager to hear any comments and criticism about our setting and results! (2/2.)']",21,05,345
342,131,1201908198350811136,3119778197,Hanie Sedghi,"Excited to share our latest work on generalization in DL w/ Niladri Chatterji & @bneyshabur We study the phenomenon that some modules of DNNs are more critical than others: rewinding their values back to initialization, strongly harms performance.(1/3) Our analysis reveals interesting properties of the loss landscape which leads us to propose a complexity measure, called module criticality, based on the shape of the valleys that connects the initial and final values of the module parameters. (2/3) We show that module criticality is able to explain the superior generalization performance of some architectures over others, whereas earlier measures fail to do so. (3/3)",https://arxiv.org/abs/1912.00528,"We study the phenomenon that some modules of deep neural networks (DNNs) are more critical than others. Meaning that rewinding their parameter values back to initialization, while keeping other modules fixed at the trained parameters, results in a large drop in the network's performance. Our analysis reveals interesting properties of the loss landscape which leads us to propose a complexity measure, called module criticality, based on the shape of the valleys that connects the initial and final values of the module parameters. We formulate how generalization relates to the module criticality, and show that this measure is able to explain the superior generalization performance of some architectures over others, whereas earlier measures fail to do so. ","The intriguing role of module criticality in the generalization of deep
networks",3,"['Excited to share our latest work on generalization in DL w/ Niladri Chatterji & @bneyshabur\nWe study the phenomenon that some modules of DNNs are more critical than others: rewinding their values back to initialization, strongly harms performance.(1/3)', 'Our analysis reveals interesting properties of the loss landscape which leads us to propose a complexity measure, called module criticality, based on the shape of the valleys that connects the initial and final values of the module parameters. (2/3)', 'We show that module criticality is able to explain the superior generalization performance of some architectures over others, whereas earlier measures fail to do so. (3/3)']",19,12,681
343,171,1400516024437411843,1188224435364327424,Belinda Li,"Do neural language models (trained on text alone!) construct representations of meaning? In a new #ACL2021NLP paper, we find that LM representations implicitly model *entities and situations* as they evolve through a discourse. 1/ These representations roughly correspond to linguistic models of dynamic semantics---they update as new sentences are added to the discourse, to reflect changes to the underlying state. 2/ Specifically, we train BART and T5 models on text transcripts in two datasets (Alchemy and Textworld). We find a linear probe can map encoder representations of text to the truth values of propositions about world that the text describes. 3/ Furthermore, the representations are local: in Alchemy, the probe can tell the final state of an entity by looking at the first appearance of an entity in a discourse. 4/ Finally, the representations can be manipulated to affect downstream generation: by replacing a small set of LM encodings, we can induce the decoder to generate text consistent with the *modified* state representation. 5/ Of course, LMs still make lots of semantic errors, and our experiments only look at a tiny slice of semantics. The datasets we test on are far simpler than, e.g., those featured in the thought experiments of Bender&Koller. Even here, probe accuracies are far from perfect. 6/ However, these experiments suggest that it’s possible to learn noisy, approximate representations of some aspects of meaning from text alone. 7/ This work would not have been possible without my amazing co-authors @Maxwell_Nye and @jacobandreas! Code is available at: 8/8",https://arxiv.org/abs/2106.00737,"Does the effectiveness of neural language models derive entirely from accurate modeling of surface word co-occurrence statistics, or do these models represent and reason about the world they describe? In BART and T5 transformer language models, we identify contextual word representations that function as models of entities and situations as they evolve throughout a discourse. These neural representations have functional similarities to linguistic models of dynamic semantics: they support a linear readout of each entity's current properties and relations, and can be manipulated with predictable effects on language generation. Our results indicate that prediction in pretrained neural language models is supported, at least in part, by dynamic representations of meaning and implicit simulation of entity state, and that this behavior can be learned with only text as training data. Code and data are available at this https URL . ",Implicit Representations of Meaning in Neural Language Models,8,"['Do neural language models (trained on text alone!) construct representations of meaning? In a new #ACL2021NLP paper, we find that LM representations implicitly model *entities and situations* as they evolve through a discourse. 1/\n ', 'These representations roughly correspond to linguistic models of dynamic semantics---they update as new sentences are added to the discourse, to reflect changes to the underlying state. 2/ https://t.co/4AcxdqHN3m', 'Specifically, we train BART and T5 models on text transcripts in two datasets (Alchemy and Textworld). We find a linear probe can map encoder representations of text to the truth values of propositions about world that the text describes. 
3/ https://t.co/JNlutxCjuJ', 'Furthermore, the representations are local: in Alchemy, the probe can tell the final state of an entity by looking at the first appearance of an entity in a discourse. 4/ https://t.co/8RkpTF1Npq', 'Finally, the representations can be manipulated to affect downstream generation: by replacing a small set of LM encodings, we can induce the decoder to generate text consistent with the *modified* state representation. 5/ https://t.co/IbXRWcssdv', 'Of course, LMs still make lots of semantic errors, and our experiments only look at a tiny slice of semantics. The datasets we test on are far simpler than, e.g., those featured in the thought experiments of Bender&Koller. Even here, probe accuracies are far from perfect. 6/', 'However, these experiments suggest that itâs possible to learn noisy, approximate representations of some aspects of meaning from text alone. 7/', 'This work would not have been possible without my amazing co-authors @Maxwell_Nye and @jacobandreas!\n\nCode is available at: https://t.co/TW2vcC2Igp\n\n8/8']",21,06,1651
344,144,1326847093886115841,38941661,Daniel Stilck França,"New paper out with the great @ChristophHirche and Cambyse Rouzé: On contraction coefficients, partial orders and approximation of capacities for quantum channels (). See 👇 for a summary! as indicated by the title, we talk about three intertwined subjects: contraction coefficients, partial orders for channels and capacities. Roughly speaking, contraction coefficients quantify by how much a channel makes states indistinguishable. Partial orders try to capture the notion that one channel is noisier than the other, while capacities quantify how useful a channel is for a task in the limit of many uses. Let us now discuss what we add to the discussion: As usual, tensorization is a huge issue for all these themes. We define some new orders for channels that tensorize and show new relations amongst existing ones. This figure summarizes all relations: We then put epsilons around all these orderings. This leads to new capacity bounds. Also, using our new orderings, we get simple, transparent proofs of the bounds based on approximate degradability by Sutter et al. Most of these orderings, like less noisy, are based on a bound on the mutual information of the output of the channels. We then proceed to show that one can equivalently formulate them in terms of a strong data processing inequality. This leads us to contraction coefficients. Here we study new ways to obtain strict contractions, with a focus on the relative entropy. We discuss a hypercontractive approach combined with the Petz recovery map. We also get new results for Weyl-covariant channels and compute the contraction coefficients for important classes of Gaussian channels. Finally, we extend our results to f-divergences and discuss how the contraction coefficients for various divergences are related. Now, some personal comments. I was really happy to finally write a paper with @ChristophHirche! We met at a spring school 7 years ago when we were still bachelor students, so it was time for this to happen! And I am also happy to continue my fruitful collaboration with Cambyse, as after this paper he is now my most frequent collaborator!",https://arxiv.org/abs/2011.05949,"The data processing inequality is the most basic requirement for any meaningful measure of information. It essentially states that distinguishability measures between states decrease if we apply a quantum channel and is the centerpiece of many results in information theory. Moreover, it justifies the operational interpretation of most entropic quantities. In this work, we revisit the notion of contraction coefficients of quantum channels, which provide sharper and specialized versions of the data processing inequality. A concept closely related to data processing is partial orders on quantum channels. First, we discuss several quantum extensions of the well-known less noisy ordering and relate them to contraction coefficients. We further define approximate versions of the partial orders and show how they can give strengthened and conceptually simple proofs of several results on approximating capacities. Moreover, we investigate the relation to other partial orders in the literature and their properties, particularly with regards to tensorization. We then examine the relation between contraction coefficients with other properties of quantum channels such as hypercontractivity. Next, we extend the framework of contraction coefficients to general f-divergences and prove several structural results. 
Finally, we consider two important classes of quantum channels, namely Weyl-covariant and bosonic Gaussian channels. For those, we determine new contraction coefficients and relations for various partial orders. ","On contraction coefficients, partial orders and approximation of
capacities for quantum channels",10,"['New paper out with the great @ChristophHirche and Cambyse Rouzé: On contraction coefficients, partial orders and approximation of capacities for quantum channels (). See 👇 for a summary!', 'as indicated by the title, we talk about three intertwined subjects: contraction coefficients, partial orders for channels and capacities. Roughly speaking, contraction coefficients quantify by how much a channel makes states indistinguishable.', 'Partial orders try to capture the notion that one channel is noisier than the other, while capacities quantify how useful a channel is for a task in the limit of many uses. Let us now discuss what we add to the discussion:', 'As usual, tensorization is a huge issue for all these themes. We define some new orders for channels that tensorize and show new relations amongst existing ones. This figure summarizes all relations: https://t.co/lmkWENU5wZ', 'We then put epsilons around all these orderings. This leads to new capacity bounds. Also, using our new orderings, we get simple, transparent proofs of the bounds based on approximate degradability by Sutter et al.', 'Most of these orderings, like less noisy, are based on a bound on the mutual information of the output of the channels. We then proceed to show that one can equivalently formulate them in terms of a strong data processing inequality. This leads us to contraction coefficients.', 'Here we study new ways to obtain strict contractions, with a focus on the relative entropy. We discuss a hypercontractive approach combined with the Petz recovery map.', 'We also get new results for Weyl-covariant channels and compute the contraction coefficients for important classes of Gaussian channels. Finally, we extend our results to f-divergences and discuss how the contraction coefficients for various divergences are related.', 'Now, some personal comments. I was really happy to finally write a paper with @ChristophHirche! We met at a spring school 7 years ago when we were still bachelor students, so it was time for this to happen!', 'And I am also happy to continue my fruitful collaboration with Cambyse, as after this paper he is now my most frequent collaborator!']",20,11,2134
345,22,953800657567444992,232294292,Gary Marcus 🇺🇦,"Innateness, AlphaZero, and AI: AlphaGo & its successors have been presented as an argument that a ""even in the most challenging of domains: it is possible to train to superhuman level, without human ... guidance"". I evaluate this claim. new paper on arXiv ",https://arxiv.org/abs/1801.05667,"The concept of innateness is rarely discussed in the context of artificial intelligence. When it is discussed, or hinted at, it is often the context of trying to reduce the amount of innate machinery in a given system. In this paper, I consider as a test case a recent series of papers by Silver et al (Silver et al., 2017a) on AlphaGo and its successors that have been presented as an argument that a ""even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance"", ""starting tabula rasa."" I argue that these claims are overstated, for multiple reasons. I close by arguing that artificial intelligence needs greater attention to innateness, and I point to some proposals about what that innateness might look like. ","Innateness, AlphaZero, and Artificial Intelligence",1,"['Innateness, AlphaZero, and AI: AlphaGo & its successors have been presented as an argument that a ""even in the most challenging of domains: it is possible to train to superhuman level, without human ... guidance"". I evaluate this claim. new paper on arXiv ']",18,01,262
346,30,1432260914095460352,1326269340493246467,Rico Landman,"Happy to announce the publication of my first master project in @SPIEtweets JATIS on a new approach towards data-driven predictive control for high-contrast imaging. Find the paper on or . Our approach for predictive control is based on Reinforcement Learning, a subfield of Machine Learning that takes inspiration from the way humans and animals learn. The general idea is that the adaptive optics system learns to optimize a “reward” through trial and error. This can be framed into a mathematical optimization problem. We can then try to estimate the solution using numerical optimization methods, such as gradient descent. We show that our approach can lead to >order of magnitude improvement close to the star in a simulated adaptive optics system, even under non-stationary wind speeds and directions. We also validate our control algorithm in the lab for tip-tilt control and effectively show that this can remove vibrations from the system. Our approach allows for a flexible framework to optimize nonlinear data-driven controllers. In principle, it is also possible to feed it with information from multiple sensors and directly optimize for nonlinear objectives, e.g. the contrast in the focal plane.",https://arxiv.org/abs/2108.11332,"Current and future high-contrast imaging instruments require extreme adaptive optics (XAO) systems to reach contrasts necessary to directly image exoplanets. Telescope vibrations and the temporal error induced by the latency of the control loop limit the performance of these systems. One way to reduce these effects is to use predictive control. We describe how model-free Reinforcement Learning can be used to optimize a Recurrent Neural Network controller for closed-loop predictive control. First, we verify our proposed approach for tip-tilt control in simulations and a lab setup. The results show that this algorithm can effectively learn to mitigate vibrations and reduce the residuals for power-law input turbulence as compared to an optimal gain integrator. We also show that the controller can learn to minimize random vibrations without requiring online updating of the control law. Next, we show in simulations that our algorithm can also be applied to the control of a high-order deformable mirror. We demonstrate that our controller can provide two orders of magnitude improvement in contrast at small separations under stationary turbulence. Furthermore, we show more than an order of magnitude improvement in contrast for different wind velocities and directions without requiring online updating of the control law. ","Self-optimizing adaptive optics control with Reinforcement Learning for
high-contrast imaging",6,"['Happy to announce the publication of my first master project in @SPIEtweets JATIS on a new approach towards data-driven predictive control for high-contrast imaging. Find the paper on or . ', 'Our approach for predictive control is based on Reinforcement Learning, a subfield of Machine Learning that takes inspiration from the way humans and animals learn. The general idea is that the adaptive optics system learns to optimize a “reward” through trial and error.', 'This can be framed into a mathematical optimization problem. We can then try to estimate the solution using numerical optimization methods, such as gradient descent. https://t.co/PpTQbD4jH4', 'We show that our approach can lead to >order of magnitude improvement close to the star in a simulated adaptive optics system, even under non-stationary wind speeds and directions. https://t.co/rtMXovY5T8', 'We also validate our control algorithm in the lab for tip-tilt control and effectively show that this can remove vibrations from the system. https://t.co/uAAX9s6euC', 'Our approach allows for a flexible framework to optimize nonlinear data-driven controllers. In principle, it is also possible to feed it with information from multiple sensors and directly optimize for nonlinear objectives, e.g. the contrast in the focal plane.']",21,08,1254
347,42,1119652750453874688,84871401,Marco Peressotti,"Taking Linear Logic Apart, a new paper with @wenkokke and @famontesi is out We introduce a reduction semantics for Hypersequent Classical Processes the first process calculus with a logical characterisation of parallel composition! ",https://arxiv.org/abs/1904.06848v1,"Process calculi based on logic, such as $\pi$DILL and CP, provide a foundation for deadlock-free concurrent programming. However, in previous work, there is a mismatch between the rules for constructing proofs and the term constructors of the $\pi$-calculus: the fundamental operator for parallel composition does not correspond to any rule of linear logic. Kokke et al. (2019) introduced Hypersequent Classical Processes (HCP), which addresses this mismatch using hypersequents (collections of sequents) to register parallelism in the typing judgements. However, the step from CP to HCP is a big one. As of yet, HCP does not have reduction semantics, and the addition of delayed actions means that CP processes interpreted as HCP processes do not behave as they would in CP. We introduce HCP-, a variant of HCP with reduction semantics and without delayed actions. We prove progress, preservation, and termination, and show that HCP- supports the same communication protocols as CP. ",Taking Linear Logic Apart,1,"['Taking Linear Logic Apart, a new paper with @wenkokke and @famontesi is out \nWe introduce a reduction semantics for Hypersequent Classical Processes the first process calculus with a logical characterisation of parallel composition! ']",19,04,252
348,99,1480358744512999434,11346882,Larry Lee,"New paper day! Getting it out after years of exploring this space – Tackling combinatorial problems at particle colliders with Machine Learning™. It’s full of action and intrigue, and I hope you take a look! (Also it’s pretty short and pretty pretty) Work largely performed by Anthony Badea of @harvardphysics. Also w/ @JohnHuth1, @CodeMonkey8, @RiccardoPoggi, Will Fawcett, and myself from @UTKPhysAstro",https://arxiv.org/abs/2201.02205,"High-multiplicity signatures at particle colliders can arise in Standard Model processes and beyond. With such signatures, difficulties often arise from the large dimensionality of the kinematic space. For final states containing a single type of particle signature, this results in a combinatorial problem that hides underlying kinematic information. We explore using a neural network that includes a Lorentz Layer to extract high-dimensional correlations. We use the case of squark decays in $R$-Parity-violating Supersymmetry as a benchmark, comparing the performance to that of classical methods. ","Solving Combinatorial Problems at Particle Colliders Using Machine
Learning",2,"['New paper day! Getting it out after years of exploring this space – Tackling combinatorial problems at particle colliders with Machine Learning™. It’s full of action and intrigue, and I hope you take a look! (Also it’s pretty short and pretty pretty)\n\n ', 'Work largely performed by Anthony Badea of @harvardphysics. Also w/ @JohnHuth1, @CodeMonkey8, @RiccardoPoggi, Will Fawcett, and myself from @UTKPhysAstro']",22,01,418
349,291,1321085882825379843,1219928743944343553,Neil Zeghidour,"Large scale training of EEG classifiers requires exploiting many, small datasets. Problem: they are collected with different headsets (# and place of electrodes). We propose CHARM, a module that remaps arbitrary EEG inputs to a canonical placement 1/3 CHARM is trainable and compatible with archs (e.g. CNNs) that expect consistent channels (same number, same order). Across different noising scenarios we show its robustness. Moreover, we successfully perform transfer learning between datasets collected w/ different headsets! 2/3 Work done by @GoogleAI intern @aaqib_saeed, with @GrangierDavid and Olivier Pietquin. 3/3",https://arxiv.org/abs/2010.13694,"We propose CHARM, a method for training a single neural network across inconsistent input channels. Our work is motivated by Electroencephalography (EEG), where data collection protocols from different headsets result in varying channel ordering and number, which limits the feasibility of transferring trained systems across datasets. Our approach builds upon attention mechanisms to estimate a latent reordering matrix from each input signal and map input channels to a canonical order. CHARM is differentiable and can be composed further with architectures expecting a consistent channel ordering to build end-to-end trainable classifiers. We perform experiments on four EEG classification datasets and demonstrate the efficacy of CHARM via simulated shuffling and masking of input channels. Moreover, our method improves the transfer of pre-trained representations between datasets collected with different protocols. ","Learning from Heterogeneous EEG Signals with Differentiable Channel
Reordering",3,"['Large scale training of EEG classifiers requires exploiting many, small datasets. Problem: they are collected with different headsets (# and place of electrodes). We propose CHARM, a module that remaps arbitrary EEG inputs to a canonical placement 1/3\n ', 'CHARM is trainable and compatible with archs (e.g. CNNs) that expect consistent channels (same number, same order). Across different noising scenarios we show its robustness. Moreover, we successfully perform transfer learning between datasets collected w/ different headsets! 2/3 https://t.co/iJPQAnBOXe', 'Work done by @GoogleAI intern @aaqib_saeed, with @GrangierDavid and Olivier Pietquin. 3/3']",20,10,643
350,56,1351623115923619840,3327515352,M.P. Ross,New paper from our group describing our 2.8 mHz torsion balance whose angle is measured with a modified Michelson interferometer with ~10 picorad sensitivity. This apparatus has a range of applications including geophysics and gravitational wave astronomy. ,https://arxiv.org/abs/2101.01243,"We describe a torsion pendulum with a large mass-quadrupole moment and a resonant frequency of 2.8 mHz, whose angle is measured using a modified Michelson interferometer. The system achieved noise levels of $\sim200\ \text{prad}/\sqrt{\text{Hz}}$ between 0.2-30 Hz and $\sim10\ \text{prad}/\sqrt{\text{Hz}}$ above 100 Hz. Such a system can be applied to a broad range of fields from the study of rotational seismic motion and elastogravity signals to gravitational wave observation and tests of gravity. ",A Low-Frequency Torsion Pendulum with Interferometric Readout,1,['New paper from our group describing our 2.8 mHz torsion balance whose angle is measured with a modified Michelson interferometer with ~10 picorad sensitivity. This apparatus has a range of applications including geophysics and gravitational wave astronomy.\n'],21,01,263
351,146,1471425398860877829,426509606,Yamir Moreno,"New paper out ""Epidemic spreading in populations of mobile agents with adaptive behavioral response"" (). Here, we present and analyze a model for disease spreading in temporal networks of mobile agents that incorporates local behavioral responses. (1/3) We show that avoiding contacts with infectees considerably decreases the stationary prevalence when the spatial density of agents is low. However, for higher densities, the mechanism causes an abrupt phase transition, where a new bistable phase appears. (2/3) We develop a semi-analytic approach for the case when the mobility is fast compared to the disease dynamics, and use it to argue that the bistability is caused by the emergence of spatial clusters of susceptible agents. Kudos to co-authors @paulocv92 @SrAleta @FranciscoICMC 3/3. @thilogross @paulocv92 @SrAleta @FranciscoICMC @BIFI_Instituto @ISI_Fondazione Thanks, Thilo. And I fully agree. @dques2766 @paulocv92 @SrAleta @FranciscoICMC @BIFI_Instituto @ISI_Fondazione This is a nice idea indeed. We haven't explored such a scenario. @dques2766 @paulocv92 @SrAleta @FranciscoICMC @BIFI_Instituto @ISI_Fondazione Thanks, David",https://arxiv.org/abs/2112.07829,"Despite the advanced stage of epidemic modeling, there is a major demand for methods to incorporate behavioral responses to the spread of a disease such as social distancing and adoption of prevention methods. Mobility plays an important role in epidemic dynamics and is also affected by behavioral changes, but there are many situations in which real mobility data is incomplete or inaccessible. We present a model for epidemic spreading in temporal networks of mobile agents that incorporates local behavioral responses. Susceptible agents are allowed to move towards the opposite direction of infected agents in their neighborhood. We show that this mechanism considerably decreases the stationary prevalence when the spatial density of agents is low. However, for higher densities, the mechanism causes an abrupt phase transition, where a new bistable phase appears. We develop a semi-analytic approach for the case when the mobility is fast compared to the disease dynamics, and use it to argue that the bistability is caused by the emergence of spatial clusters of susceptible agents. Finally, we characterize the temporal networks formed in the fast mobility regime, showing how the degree distributions and other metrics are affected by the behavioral mechanism. Our work incorporates results previously known from adaptive networks into the population of mobile agents, which can be further developed to be used in mobility-driven models. ","Epidemic spreading in populations of mobile agents with adaptive
behavioral response",6,"['New paper out ""Epidemic spreading in populations of mobile agents with adaptive behavioral response"" (). Here, we present and analyze a model for disease spreading in temporal networks of mobile agents that incorporates local behavioral responses. (1/3) ', 'We show that avoiding contacts with infectees considerably decreases the stationary prevalence when the spatial density of agents is low. However, for higher densities, the mechanism causes an abrupt phase transition, where a new bistable phase appears. (2/3) https://t.co/gozVFgsXYL', 'We develop a semi-analytic approach for the case when the mobility is fast compared to the disease dynamics, and use it to argue that the bistability is caused by the emergence of spatial clusters of susceptible agents. Kudos to co-authors @paulocv92 @SrAleta @FranciscoICMC 3/3. https://t.co/r7HpTgM7vN', '@thilogross @paulocv92 @SrAleta @FranciscoICMC @BIFI_Instituto @ISI_Fondazione Thanks, Thilo. And I fully agree.', ""@dques2766 @paulocv92 @SrAleta @FranciscoICMC @BIFI_Instituto @ISI_Fondazione This is a nice idea indeed. We haven't explored such a scenario."", '@dques2766 @paulocv92 @SrAleta @FranciscoICMC @BIFI_Instituto @ISI_Fondazione Thanks, David']",21,12,1168
352,163,1316448818867699714,2878031881,Yangsibo Huang,"How to tackle data privacy for language understanding tasks in distributed learning (without slowing down training or reducing accuracy)? Happy to share our new #emnlp2020 findings paper w/ @realZhaoSong, @danqi_chen, Prof. Kai Li, @prfsanjeevarora paper: ",https://arxiv.org/abs/2010.06053,"An unsolved challenge in distributed or federated learning is to effectively mitigate privacy risks without slowing down training or reducing accuracy. In this paper, we propose TextHide aiming at addressing this challenge for natural language understanding tasks. It requires all participants to add a simple encryption step to prevent an eavesdropping attacker from recovering private text data. Such an encryption step is efficient and only affects the task performance slightly. In addition, TextHide fits well with the popular framework of fine-tuning pre-trained language models (e.g., BERT) for any sentence or sentence-pair task. We evaluate TextHide on the GLUE benchmark, and our experiments show that TextHide can effectively defend attacks on shared gradients or representations and the averaged accuracy reduction is only $1.9\%$. We also present an analysis of the security of TextHide using a conjecture about the computational intractability of a mathematical problem. Our code is available at this https URL ",TextHide: Tackling Data Privacy in Language Understanding Tasks,1,"['How to tackle data privacy for language understanding tasks in distributed learning (without slowing down training or reducing accuracy)? Happy to share our new #emnlp2020 findings paper\n\nw/ @realZhaoSong, @danqi_chen, Prof. Kai Li, @prfsanjeevarora\npaper: ']",20,10,269
353,132,1257354626912980992,46165258,Shane Steinert-Threlkeld,"New paper for #acl2020nlp! ""On the Spontaneous Emergence of Discrete and Compositional Signals"" with @nurikolan and Emmanuel Chemla Paper: Code: We look at two of Hockett's design features for language---discreteness and displacement---in a general setting for emergent communication, where our message space is continuous instead of discrete. This allows us to (i) explore when discrete signals emerge and (ii) avoid using tools like reinforcement learning, which are difficult in practice. We reliably find discreteness, in both production and perception. Two probes for compositional structure (vector analogies, and a composition prediction network) both fail. There are interesting and very robust patterns during training though (see original tweet's gif), so stay tuned! We also show how the rudiments of Categorical Perception---where discriminability in a continuous space depends on discrete labels, not merely perceptual distance---can be found in our message space. More ""psycholinguistic"" work in emergent communication would be great! ",https://arxiv.org/abs/2005.00110,"We propose a general framework to study language emergence through signaling games with neural agents. Using a continuous latent space, we are able to (i) train using backpropagation, (ii) show that discrete messages nonetheless naturally emerge. We explore whether categorical perception effects follow and show that the messages are not compositional. ",On the Spontaneous Emergence of Discrete and Compositional Signals,5,"['New paper for #acl2020nlp!\n\n""On the Spontaneous Emergence of Discrete and Compositional Signals"" \nwith @nurikolan and Emmanuel Chemla\n\nPaper: \nCode: ', ""We look at two of Hockett's design features---discreteness and displacement---in a general setting for emergent communication, where our message space is continuous instead of discrete. https://t.co/aHlwHAVLuQ"", 'This allows us to (i) explore when discrete signals emerge and (ii) avoid using tools like reinforcement learning, which are difficult in practice. We reliably find discreteness, in both production and perception.', ""Two probes for compositional structure (vector analogies, and a composition prediction network) both fail. There are interesting and very robust patterns during training though (see original tweet's gif), so stay tuned!"", 'We also show how the rudiments of Categorical Perception---where discriminability in a continuous space depends on discrete labels, not merely perceptual distance---can be found in our message space. More ""psycholinguistic"" work in emergent communication would be great! https://t.co/tioN51gsVE']",20,05,1083
354,71,1320753970256744451,3377714115,Karl Pertsch,"How can we use large offline datasets for accelerating the learning of new tasks? We can transfer skills! Check out our #CoRL2020 paper on efficient skill transfer with learned skill priors! Paper: Website & Code: Thread (1/8) Learning reusable skills is great: it does not need rewards (vs offline RL) and can be done fully offline without a training task distribution (vs meta-RL). But: with larger datasets, the number of learned skills grows. Exploring all skills on the target task is infeasible! (2/8) Intuitively, not all skills should be explored with equal probability: when holding onto the handle of a kettle, it is more promising to attempt a pickup than a sweeping skill. We can learn such _skill priors_ from our experience solving other tasks! (3/8) We introduce SPiRL (Skill-Prior RL), an approach for jointly learning an embedding space of skills and a prior over skills from offline data. For learning downstream tasks, we modify the SAC objective to use the learned prior as guidance for a hierarchical policy. (4/8) We test SPiRL on three environments: maze navigation, block stacking and kitchen manipulation. For each we collect a large offline dataset and then test transfer to new tasks. (5/8) On all environments, we show that SPiRL’s skill prior leads to better target task performance than using a “flat” prior over primitive actions or training a hierarchical policy without prior guidance. (6/8) Only SPiRL is able to learn the most challenging target tasks – a hierarchical SAC baseline without prior guidance can solve some subtasks, but struggles to effectively explore the large set of learned skills. (7/8) For more details, code and videos, check out our website: joint work w/ @YoungwoonLee and @JosephLim_AI (8/8) Shoutout to @abhishekunique7 for publishing the kitchen environment with his RPL paper, and to Aviral Kumar and Justin Fu for packaging env+data nicely in the D4RL benchmark! :)",https://arxiv.org/abs/2010.11944,"Intelligent agents rely heavily on prior experience when learning a new task, yet most modern reinforcement learning (RL) approaches learn every task from scratch. One approach for leveraging prior knowledge is to transfer skills learned on prior tasks to the new task. However, as the amount of prior experience increases, the number of transferable skills grows too, making it challenging to explore the full set of available skills during downstream learning. Yet, intuitively, not all skills should be explored with equal probability; for example information about the current state can hint which skills are promising to explore. In this work, we propose to implement this intuition by learning a prior over skills. We propose a deep latent variable model that jointly learns an embedding space of skills and the skill prior from offline agent experience. We then extend common maximum-entropy RL approaches to use skill priors to guide downstream learning. We validate our approach, SPiRL (Skill-Prior RL), on complex navigation and robotic manipulation tasks and show that learned skill priors are essential for effective skill transfer from rich datasets. Videos and code are available at this https URL ",Accelerating Reinforcement Learning with Learned Skill Priors,9,"['How can we use large offline datasets for accelerating the learning of new tasks? 
We can transfer skills!\nCheck out our #CoRL2020 paper on efficient skill transfer with learned skill priors!\nPaper: \nWebsite & Code: \n\nThread (1/8) ', 'Learning reusable skills is great: it does not need rewards (vs offline RL) and can be done fully offline without a training task distribution (vs meta-RL).\nBut: with larger datasets, the number of learned skills grows. Exploring all skills on the target task is infeasible!\n(2/8) https://t.co/XjecYmfYf1', 'Intuitively, not all skills should be explored with equal probability: when holding onto the handle of a kettle, it is more promising to attempt a pickup than a sweeping skill.\n\nWe can learn such _skill priors_ from our experience solving other tasks!\n(3/8) https://t.co/7rMtmlnP5x', 'We introduce SPiRL (Skill-Prior RL), an approach for jointly learning an embedding space of skills and a prior over skills from offline data. For learning downstream tasks, we modify the SAC objective to use the learned prior as guidance for a hierarchical policy.\n(4/8) https://t.co/eHlPwYbCGd', 'We test SPiRL on three environments: maze navigation, block stacking and kitchen manipulation. For each we collect a large offline dataset and then test transfer to new tasks.\n(5/8) https://t.co/eREcaEsTvi', 'On all environments, we show that SPiRL’s skill prior leads to better target task performance than using a “flat” prior over primitive actions or training a hierarchical policy without prior guidance.\n(6/8) https://t.co/IxxugJCbY0', 'Only SPiRL is able to learn the most challenging target tasks – a hierarchical SAC baseline without prior guidance can solve some subtasks, but struggles to effectively explore the large set of learned skills.\n(7/8) https://t.co/3Dhb2HkAXG', 'For more details, code and videos, check out our website: https://t.co/9fyZ1NfP9g\n\njoint work w/ @YoungwoonLee and @JosephLim_AI\n\n(8/8)', 'Shoutout to @abhishekunique7 for publishing the kitchen environment with his RPL paper, and to Aviral Kumar and Justin Fu for packaging env+data nicely in the D4RL benchmark! :)']",20,10,2000
355,168,1316291247254974466,90131577,Noam Slonim,The workweek is the best time to start a family! What happens when we finetune #GPT2 to _argue_ about controversial topics? A new paper from our #ProjectDebater team @IBMResearch to appear in Findings of #emnlp2020 #NLP #ComputationalArgumentation ,https://arxiv.org/abs/2010.06185,"Argument generation is a challenging task whose research is timely considering its potential impact on social media and the dissemination of information. Here we suggest a pipeline based on GPT-2 for generating coherent claims, and explore the types of claims that it produces, and their veracity, using an array of manual and automatic assessments. In addition, we explore the interplay between this task and the task of Claim Retrieval, showing how they can complement one another. ","The workweek is the best time to start a family -- A Study of GPT-2
Based Claim Generation",1,['The workweek is the best time to start a family! What happens when we finetune #GPT2 to _argue_ about controversial topics? A new paper from our #ProjectDebater team @IBMResearch to appear in Findings of #emnlp2020 #NLP #ComputationalArgumentation\n'],20,10,254
356,66,1295636132013768704,129492292,Yudai Suwa / 諏訪雄大,"A new paper has appeared on arXiv. We derive analytic formulae for the long-time evolution of supernova neutrinos. The most important ones are Eqs 47-49, which also reproduce the numerical models very well. Useful for data analysis are Eqs 54 and 55, which give how the neutrino detection rate and the positron's average energy are related to the NS mass and radius. Changing M_det applies to different detectors. ",https://arxiv.org/abs/2008.07070,"Neutrinos are a guaranteed signal from supernova explosions in the Milky Way, and a most valuable messenger that can provide us with information about the deepest parts of supernovae. In particular, neutrinos will provide us with physical quantities, such as the radius and mass of protoneutron stars (PNS), which are the central engine of supernovae. This requires a theoretical model that connects observables such as neutrino luminosity and average energy with physical quantities. Here, we show analytic solutions for the neutrino-light curve derived from the neutrino radiation transport equation by employing the diffusion approximation and the analytic density solution of the hydrostatic equation for a PNS. The neutrino luminosity and the average energy as functions of time are explicitly presented, with dependence on PNS mass, radius, the total energy of neutrinos, surface density, and opacity. The analytic solutions provide good representations of the numerical models from a few seconds after the explosion and allow a rough estimate of these physical quantities to be made from observational data. ",Analytic solutions for neutrino-light curves of core-collapse supernovae,2,"['A new paper has appeared on arXiv. \n\nWe derive analytic formulae for the long-time evolution of supernova neutrinos. The most important ones are Eqs 47-49, which also reproduce the numerical models very well. ', ""Useful for data analysis are Eqs 54 and 55, which give how the neutrino detection rate and the positron's average energy are related to the NS mass and radius. Changing M_det applies to different detectors. https://t.co/cyO0kg5KuX""]",20,08,434
357,157,1265443043848552454,1264414190887817216,Behnam Hedayatnia,"Excited to share new work from my team at Alexa ""Policy-Driven Neural Response Generation for Knowledge-Grounded Dialogue Systems"" We propose using a dialogue policy to plan the content and style of our responses for open-domain response generation ",https://arxiv.org/abs/2005.12529?context=cs.CL,"Open-domain dialogue systems aim to generate relevant, informative and engaging responses. Seq2seq neural response generation approaches do not have explicit mechanisms to control the content or style of the generated response, and frequently result in uninformative utterances. In this paper, we propose using a dialogue policy to plan the content and style of target responses in the form of an action plan, which includes knowledge sentences related to the dialogue context, targeted dialogue acts, topic information, etc. The attributes within the action plan are obtained by automatically annotating the publicly released Topical-Chat dataset. We condition neural response generators on the action plan which is then realized as target utterances at the turn and sentence levels. We also investigate different dialogue policy models to predict an action plan given the dialogue context. Through automated and human evaluation, we measure the appropriateness of the generated responses and check if the generation models indeed learn to realize the given action plans. We demonstrate that a basic dialogue policy that operates at the sentence level generates better responses in comparison to turn level generation as well as baseline models with no action plan. Additionally the basic dialogue policy has the added effect of controllability. ","Policy-Driven Neural Response Generation for Knowledge-Grounded Dialogue
Systems",1,"['Excited to share new work from my team at Alexa \n\n""Policy-Driven Neural Response Generation for Knowledge-Grounded Dialogue Systems""\n\n\nWe propose using a dialogue policy to plan the content and style of our responses for open-domain response generation ']",20,05,263
358,148,1134124150812041216,1575689167,Jonathan Vacher,Check out our new preprint with @CoenCagli_Lab ! We propose a regularization of mixture models (MM). We combine MM for image segmentation and show that MM can achieve SOTA scores on boundaries detection. Fitted MM can be used to perform image synthesis ! ,https://arxiv.org/abs/1905.10629,"Probabilistic finite mixture models are widely used for unsupervised clustering. These models can often be improved by adapting them to the topology of the data. For instance, in order to classify spatially adjacent data points similarly, it is common to introduce a Laplacian constraint on the posterior probability that each data point belongs to a class. Alternatively, the mixing probabilities can be treated as free parameters, while assuming Gauss-Markov or more complex priors to regularize those mixing probabilities. However, these approaches are constrained by the shape of the prior and often lead to complicated or intractable inference. Here, we propose a new parametrization of the Dirichlet distribution to flexibly regularize the mixing probabilities of over-parametrized mixture distributions. Using the Expectation-Maximization algorithm, we show that our approach allows us to define any linear update rule for the mixing probabilities, including spatial smoothing regularization as a special case. We then show that this flexible design can be extended to share class information between multiple mixture models. We apply our algorithm to artificial and natural image segmentation tasks, and we provide quantitative and qualitative comparison of the performance of Gaussian and Student-t mixtures on the Berkeley Segmentation Dataset. We also demonstrate how to propagate class information across the layers of deep convolutional neural networks in a probabilistically optimal way, suggesting a new interpretation for feedback signals in biological visual systems. Our flexible approach can be easily generalized to adapt probabilistic mixture models to arbitrary data topologies. ","Flexibly Regularized Mixture Models and Application to Image
Segmentation",1,['Check out our new preprint with @CoenCagli_Lab ! We propose a regularization of mixture models (MM). We combine MM for image segmentation and show that MM can achieve SOTA scores on boundaries detection. Fitted MM can be used to perform image synthesis ! '],19,05,261
359,106,1225785674487517185,4249537197,Christian Wolf,"New paper: we imbue Deep-RL agents for 3D environments with inductive bias using projective geometry, and we show that the algorithm automatically discovers object affordances and places objects on a map. Work by @edwardbeeching + J. Dibangoye, O. Simonin. ",https://arxiv.org/abs/2002.02286,"Tasks involving localization, memorization and planning in partially observable 3D environments are an ongoing challenge in Deep Reinforcement Learning. We present EgoMap, a spatially structured neural memory architecture. EgoMap augments a deep reinforcement learning agent's performance in 3D environments on challenging tasks with multi-step objectives. The EgoMap architecture incorporates several inductive biases including a differentiable inverse projection of CNN feature vectors onto a top-down spatially structured map. The map is updated with ego-motion measurements through a differentiable affine transform. We show this architecture outperforms both standard recurrent agents and state of the art agents with structured memory. We demonstrate that incorporating these inductive biases into an agent's architecture allows for stable training with reward alone, circumventing the expense of acquiring and labelling expert trajectories. A detailed ablation study demonstrates the impact of key aspects of the architecture and through extensive qualitative analysis, we show how the agent exploits its structured internal memory to achieve higher performance. ",EgoMap: Projective mapping and structured egocentric memory for Deep RL,1,"['New paper: we imbue Deep-RL agents for 3D environments with inductive bias using projective geometry, and we show that the algorithm automatically discovers object affordances and places objects on a map. Work by @edwardbeeching + J. Dibangoye, O. Simonin. ']",20,02,269
360,58,1230407626862858241,281711973,Dr. Emily Rickman,"Today I have a new paper up on arXiv of a new benchmark brown dwarf. We followed up some previously detected giant planets & low-mass brown dwarfs from CORALIE () with VLT/SPHERE @SPHERE_outreach & LOOK HOW BEAUTIFUL IT IS @SPHERE_outreach But what is even cooler is that we can look at its atmosphere and we see indicators of methane and water. Detections like these are vital for calibrating atmospheric models of ultracool objects & act as analogues towards understanding exoplanetary atmospheres @ClaireEsau @SPHERE_outreach This!! I'm just writing up my thesis now & someone said to me the other day you have to think of it as a snapshot in time of where your research currently is rather than a complete A-Z of every problem solved. I like this way of thinking.",https://arxiv.org/abs/2002.08319,"Context. HD13724 is a nearby solar-type star at 43.48 $\pm$ 0.06 pc hosting a long-period low-mass brown dwarf detected with the CORALIE echelle spectrograph as part of the historical CORALIE radial-velocity search for extra-solar planets. The companion has a minimum mass of $26.77^{+4.4}_{-2.2} M_{\mathrm{Jup}}$ and an expected semi-major axis of $\sim$ 240 mas making it a suitable target for further characterisation with high-contrast imaging, in particular to measure its inclination, mass, and spectrum and thus establish its substellar nature. Aims. Using high-contrast imaging with the SPHERE instrument on the Very Large Telescope (VLT), we are able to directly image a brown dwarf companion to HD13724 and obtain a low-resolution spectrum. Methods. We combine the radial-velocity measurements of CORALIE and HARPS taken over two decades and high contrast imaging from SPHERE to obtain a dynamical mass estimate. From the SPHERE data we obtain a low resolution spectrum of the companion from Y to J band, as well as photometric measurements from IRDIS in the J, H and K bands. Results. Using high-contrast imaging with the SPHERE instrument at the VLT, we report the first images of a brown dwarf companion to the host star HD13724. It has an angular separation of 175.6 $\pm$ 4.5 mas and H-band contrast of $10.61\pm0.16$ mag and, using the age estimate of the star to be $\sim$1 Gyr, gives an isochronal mass estimate of $\sim$44 $M_{\mathrm{Jup}}$. By combining radial-velocity and imaging data we also obtain a dynamical mass of $50.5^{+3.3}_{-3.5} M_{\mathrm{Jup}}$. Through fitting an atmospheric model, we estimate a surface gravity of $\log g = 5.5$ and an effective temperature of 1000K. A comparison of its spectrum with observed T dwarfs estimates a spectral type of T4 or T4.5, with a T4 object providing the best fit. ","Spectral and atmospheric characterisation of a new benchmark brown dwarf
HD13724B",3,"['Today I have a new paper up on arXiv of a new benchmark brown dwarf\n\n\n\nWe followed up some previously detected giant planets & low-mass brown dwarfs from CORALIE () with VLT/SPHERE @SPHERE_outreach & LOOK HOW BEAUTIFUL IT IS ', '@SPHERE_outreach But what is even cooler is that we can look at its atmosphere and we see indicators of methane and water\n\nDetections like these are vital for calibrating atmospheric models of ultracool objects & act as analogues towards understanding exoplanetary atmospheres https://t.co/jDw6RprYXK', ""@ClaireEsau @SPHERE_outreach This!! I'm just writing up my thesis now & someone said to me the other day you have to think of it as a snapshot in time of where your research currently is rather than a complete A-Z of every problem solved. I like this way of thinking.""]",20,02,809
361,247,1369377672082690048,66173851,Jason Ramapuram,"Kanerva++: extending The Kanerva Machine with differentiable, locally block allocated latent memory : our recent ICLR 2021 paper in collaboration with Yan Wu & Alexandros Kalousis. We propose a novel memory model inspired by traditional heap allocation We extend the Kanerva Machine ( ) by simplifying the process of memory writing by treating it as a fully feed forward deterministic process, relying on the stochasticity of the read key distribution to disperse information within the memory. Sample generation uses stochasticity in a low dimensional key distribution (R^3) which is then used to parameterize a spatial transformer () to read contiguous memory sub-regions. Local key perturbations of a trained DMLab maze K++ model induces resultant generations that provide a natural traversal of the maze as observed by scanning the figure below row by row, from left to right. The model was trained with IID inputs! A similar behavior is observed for Omniglot generations. We also show that iterative inference via the K++ model is robust to various kinds of noise such as salt & pepper, speckle and Poisson noise. TLDR: stochasticity in a low dimensional distribution can be leveraged to generate samples by reading contiguous spatial latents from a differentiable memory model! @poolio We use the std Larochelle bin-MNIST with the std splits (code in ICLR supp material). Quick run attached. Lower value reasons: 1. Core stochasticity is in a low dim space (R^3) --> decoder can better reconstruct samples. 2. Memory houses contextual info about other samples. ",https://arxiv.org/abs/2103.03905,"Episodic and semantic memory are critical components of the human memory model. The theory of complementary learning systems (McClelland et al., 1995) suggests that the compressed representation produced by a serial event (episodic memory) is later restructured to build a more generalized form of reusable knowledge (semantic memory). In this work we develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory via a hierarchical latent variable model. We take inspiration from traditional heap allocation and extend the idea of locally contiguous memory to the Kanerva Machine, enabling a novel differentiable block allocated latent memory. In contrast to the Kanerva Machine, we simplify the process of memory writing by treating it as a fully feed forward deterministic process, relying on the stochasticity of the read key distribution to disperse information within the memory. We demonstrate that this allocation scheme improves performance in memory conditional image generation, resulting in new state-of-the-art conditional likelihood values on binarized MNIST (<=41.58 nats/image) , binarized Omniglot (<=66.24 nats/image), as well as presenting competitive performance on CIFAR10, DMLab Mazes, Celeb-A and ImageNet32x32. ","Kanerva++: extending The Kanerva Machine with differentiable, locally
block allocated latent memory",8,"['Kanerva++: extending The Kanerva Machine with differentiable, locally block allocated latent memory : our recent ICLR 2021 paper in collaboration with Yan Wu & Alexandros Kalousis.\n\n\n\nWe propose a novel memory model inspired by traditional heap allocation ', 'We extend the Kanerva Machine ( https://t.co/ZM8vnC9qow ) by simplifying the process of memory writing by treating it as a fully feed forward deterministic process, relying on the stochasticity of the read key distribution to disperse information within the memory. https://t.co/utE7eZtPIE', 'Sample generation uses stochasticity in a low dimensional key distribution (R^3) which is then used to parameterize a spatial transformer (https://t.co/L0SOw7C2rI) to read contiguous memory sub-regions. https://t.co/6J6Sg4RkLP', 'Local key perturbations of a trained DMLab maze K++ model induces resultant generations that provide a natural traversal of the maze as observed by scanning the figure below row by row, from left to right. The model was trained with IID inputs! https://t.co/KnrkH378Zi', 'A similar behavior is observed for Omniglot generations. https://t.co/YJcivStLvd', 'We also show that iterative inference via the K++ model is robust to various kinds of noise such as salt & pepper, speckle and Poisson noise. https://t.co/FklOLb2fMa', 'TLDR: stochasticity in a low dimensional distribution can be leveraged to generate samples by reading contiguous spatial latents from a differentiable memory model!', '@poolio We use the std Larochelle bin-MNIST with the std splits (code in ICLR supp material). Quick run attached. Lower value reasons:\n1. Core stochasticity is in a low dim space (R^3) --> decoder can better reconstruct samples. \n2. Memory houses contextual info about other samples. https://t.co/iQdD6wt0cd']",21,03,1638
362,6,1300856923277983747,1548884580,Ryan Urbanowicz,"Check out our new rigorous #MachineLearning analysis #pipeline for #biomedical binary #classification paper available on #arXiv #ArtificialIntelligence. Jupyter notebook at: with @DrShannonMLynch @moorejh @RobertZhang100 +others Other authors include Pranshu Suri, Yuchen Lu, Karen Ruth and Rachael Stolzenberg-Solomon",https://arxiv.org/abs/2008.12829,"Machine learning (ML) offers a collection of powerful approaches for detecting and modeling associations, often applied to data having a large number of features and/or complex associations. Currently, there are many tools to facilitate implementing custom ML analyses (e.g. scikit-learn). Interest is also increasing in automated ML packages, which can make it easier for non-experts to apply ML and have the potential to improve model performance. ML permeates most subfields of biomedical research with varying levels of rigor and correct usage. Tremendous opportunities offered by ML are frequently offset by the challenge of assembling comprehensive analysis pipelines, and the ease of ML misuse. In this work we have laid out and assembled a complete, rigorous ML analysis pipeline focused on binary classification (i.e. case/control prediction), and applied this pipeline to both simulated and real world data. At a high level, this 'automated' but customizable pipeline includes a) exploratory analysis, b) data cleaning and transformation, c) feature selection, d) model training with 9 established ML algorithms, each with hyperparameter optimization, and e) thorough evaluation, including appropriate metrics, statistical analyses, and novel visualizations. This pipeline organizes the many subtle complexities of ML pipeline assembly to illustrate best practices to avoid bias and ensure reproducibility. Additionally, this pipeline is the first to compare established ML algorithms to 'ExSTraCS', a rule-based ML algorithm with the unique capability of interpretably modeling heterogeneous patterns of association. While designed to be widely applicable we apply this pipeline to an epidemiological investigation of established and newly identified risk factors for pancreatic cancer to evaluate how different sources of bias might be handled by ML algorithms. ","A Rigorous Machine Learning Analysis Pipeline for Biomedical Binary
Classification: Application in Pancreatic Cancer Nested Case-control Studies
with Implications for Bias Assessments",2,"['Check out our new rigorous #MachineLearning analysis #pipeline for #biomedical binary #classification paper available on #arXiv #ArtificialIntelligence. Jupyter notebook at: with @DrShannonMLynch @moorejh @RobertZhang100 +others ', 'Other authors include Pranshu Suri, Yuchen Lu, Karen Ruth and Rachael Stolzenberg-Solomon']",20,08,339
363,85,1171385645920595968,180451291,Uri Shalit,"In a new paper with Shie Mannor and led by @guytenn, we work at the intersection of RL and causal inference. We give a new method to perform off-policy evaluation in the presence of dynamic unmeasured confounders, or a POMDP to use RL terminology. Imagine learning a policy offline from hospital data. We know the actions we learn from were conducted by doctors who have some information our RL agent can’t see: unmeasured confounders. Gottesman et al. showed how bad things can go in this scenario. In our paper, we give a way to evaluate agent policies in this difficult scenario. Main assumption: the unknown cond. prob. matrix between unmeasured and measured is invertible, as are the transition matrices btw unmeasured. Inspired by Miao et al. While we believe this is an important theoretical advance, the resulting method requires estimating inverses of big matrices. So we introduce a “decoupled POMDP” model which we consider sensible in its causal assumptions and leads to an easier estimation problem. In the figures: u_t are unobserved confounders/states, z_t and o_t are observations, a_t are actions, r_t rewards, π_b behavior policy (e.g. human doctor), π_e agent policy we want to evaluate",https://arxiv.org/abs/1909.03739,"This work studies the problem of batch off-policy evaluation for Reinforcement Learning in partially observable environments. Off-policy evaluation under partial observability is inherently prone to bias, with risk of arbitrarily large errors. We define the problem of off-policy evaluation for Partially Observable Markov Decision Processes (POMDPs) and establish what we believe is the first off-policy evaluation result for POMDPs. In addition, we formulate a model in which observed and unobserved variables are decoupled into two dynamic processes, called a Decoupled POMDP. We show how off-policy evaluation can be performed under this new model, mitigating estimation errors inherent to general POMDPs. We demonstrate the pitfalls of off-policy evaluation in POMDPs using a well-known off-policy method, Importance Sampling, and compare it with our result on synthetic medical data. ",Off-Policy Evaluation in Partially Observable Environments,5,"['In a new paper with Shie Mannor and led by @guytenn, we work at the intersection of RL and causal inference. We give a new method to perform off-policy evaluation in the presence of dynamic unmeasured confounders, or a POMDP to use RL terminology. \n ', 'Imagine learning a policy offline from hospital data. We know the actions we learn from were conducted by doctors who have some information our RL agent can’t see: unmeasured confounders. Gottesman et al. https://t.co/IIEH1xZrcd showed how bad things can go in this scenario.', 'In our paper, we give a way to evaluate agent policies in this difficult scenario. \nMain assumption: the unknown cond. prob. matrix between unmeasured and measured is invertible, as are the transition matrices btw unmeasured. Inspired by Miao et al. \nhttps://t.co/mgz65P9uxM', 'While we believe this is an important theoretical advance, the resulting method requires estimating inverses of big matrices. So we introduce a “decoupled POMDP” model which we consider sensible in its causal assumptions and leads to an easier estimation problem. https://t.co/52snAPT6h3', 'In the figures: u_t are unobserved confounders/states, z_t and o_t are observations, a_t are actions, r_t rewards, π_b behavior policy (e.g. 
human doctor), π_e agent policy we want to evaluate']",19,09,1241
364,136,1446136483589537796,349321075,Oncel Tuzel,Style equalization is a new paper from our research team @Apple to train controllable generative sequence models that can mimic speech or handwriting styles using a single reference example. Paper: Synthesis videos: R. Chang et al. ,https://arxiv.org/abs/2110.02891,"Controllable generative sequence models with the capability to extract and replicate the style of specific examples enable many applications, including narrating audiobooks in different voices, auto-completing and auto-correcting written handwriting, and generating missing training samples for downstream recognition tasks. However, typical training algorithms for these controllable sequence generative models suffer from the training-inference mismatch, where the same sample is used as content and style input during training but different samples are given during inference. In this paper, we tackle the training-inference mismatch encountered during unsupervised learning of controllable generative sequence models. By introducing a style transformation module that we call style equalization, we enable training using different content and style samples and thereby mitigate the training-inference mismatch. To demonstrate its generality, we applied style equalization to text-to-speech and text-to-handwriting synthesis on three datasets. Our models achieve state-of-the-art style replication with a similar mean style opinion score as the real data. Moreover, the proposed method enables style interpolation between sequences and generates novel styles. ","Style Equalization: Unsupervised Learning of Controllable Generative
Sequence Models",1,['Style equalization is a new paper from our research team @Apple to train controllable generative sequence models that can mimic speech or handwriting styles using a single reference example.\n\nPaper: \nSynthesis videos: \nR. Chang et al. '],21,10,252
365,190,1374366178966142980,1056626853652426752,Jonathan Herzig,"1/4 Much focus is given to dense retrieval of textual passages, but how should we design retrievers for tables in the context of open domain QA? New #NAACL2021 short paper: With @muelletm, Syrine Krichene and @eisenjulian 2/4 We tackle open-domain QA over tables, and show that a BERT based retriever can be improved by a retriever designed to handle tabular context based on TAPAS. We present an effective pre-training procedure for our retriever and improve its quality with mined hard negatives. 3/4 We extract an 11K subset of Natural Questions where answers reside in a table into a Table QA dataset (called NQ-tables), and find that our retriever improves both retrieval results and end-to-end QA results over BERT and BM25 baselines. 4/4 Our code for generating the NQ-tables dataset, and model checkpoints will be available soon in our repository: And here is a figure as well đ: ",https://arxiv.org/abs/2103.12011,"Recent advances in open-domain QA have led to strong models based on dense retrieval, but only focused on retrieving textual passages. In this work, we tackle open-domain QA over tables for the first time, and show that retrieval can be improved by a retriever designed to handle tabular context. We present an effective pre-training procedure for our retriever and improve retrieval quality with mined hard negatives. As relevant datasets are missing, we extract a subset of Natural Questions (Kwiatkowski et al., 2019) into a Table QA dataset. We find that our retriever improves retrieval results from 72.0 to 81.1 recall@10 and end-to-end QA results from 33.8 to 37.7 exact match, over a BERT based retriever. ",Open Domain Question Answering over Tables via Dense Retrieval,5,"['1/4 Much focus is given to dense retrieval of textual passages, but how should we design retrievers for tables in the context of open domain QA?\n\nNew #NAACL2021 short paper: \n\nWith @muelletm, Syrine Krichene and @eisenjulian', '2/4 We tackle open-domain QA over tables, and show that a BERT based retriever can be improved by a retriever designed to handle tabular context based on TAPAS. \n\nWe present an effective pre-training procedure for our retriever and improve its quality with mined hard negatives.', '3/4 We extract an 11K subset of Natural Questions where answers reside in a table into a Table QA dataset (called NQ-tables), and find that our retriever improves both retrieval results and end-to-end QA results over BERT and BM25 baselines.', '4/4 Our code for generating the NQ-tables dataset, and model checkpoints will be available soon in our repository:\n\nhttps://t.co/Nh6mWkaUiL', 'And here is a figure as well đ: https://t.co/5MB8kpEiy3']",21,03,909
366,90,1005001401041346560,933826478038544384,Mark Williams,Another *new* paper submission from the LHCb charm physics group. What is the lifetime of the recently discovered doubly-charmed-doubly-charged baryon? Should be long-lived as it decays weakly - results confirm this! @LHCbExperiment @LHCbPhysics ,https://arxiv.org/abs/1806.02744,"The first measurement of the lifetime of the doubly charmed baryon $\Xi_{cc}^{++}$ is presented, with the signal reconstructed in the final state $\Lambda_c^+ K^- \pi^+ \pi^+$. The data sample used corresponds to an integrated luminosity of $1.7\,\mathrm{fb}^{-1}$, collected by the LHCb experiment in proton-proton collisions at a centre-of-mass energy of $13\mathrm{\,Te\kern -0.1em V}$. The $\Xi_{cc}^{++}$ lifetime is measured to be $0.256\,^{+0.024}_{-0.022}{\,\rm (stat)\,} \pm 0.014 {\,\rm(syst)}\mathrm{\,ps}$. ",Measurement of the lifetime of the doubly charmed baryon $\Xi_{cc}^{++}$,1,['Another *new* paper submission from the LHCb charm physics group. What is the lifetime of the recently discovered doubly-charmed-doubly-charged baryon? Should be long-lived as it decays weakly - results confirm this! @LHCbExperiment @LHCbPhysics '],18,06,259
367,133,1190333497451450369,317422543,Ricard Solé,"Can parasites be central to the evolution of complexity? They are often seen as byproducts of evolutionary change, but they can actually be crucial to expand the computational landscape of complexity. Here's our new paper with @brigan_raman on @sfiscience ",https://arxiv.org/abs/1910.14339,"Why are living systems complex? Why does the biosphere contain living beings with complexity features beyond those of the simplest replicators? What kind of evolutionary pressures result in more complex life forms? These are key questions that pervade the problem of how complexity arises in evolution. One particular way of tackling this is grounded in an algorithmic description of life: living organisms can be seen as systems that extract and process information from their surroundings in order to reduce uncertainty. Here we take this computational approach using a simple bit string model of coevolving agents and their parasites. While agents try to predict their worlds, parasites do the same with their hosts. The result of this process is that, in order to escape their parasites, the host agents expand their computational complexity despite the cost of maintaining it. This, in turn, is followed by increasingly complex parasitic counterparts. Such arms races display several qualitative phases, from monotonous to punctuated evolution or even ecological collapse. Our minimal model illustrates the relevance of parasites in providing an active mechanism for expanding living complexity beyond simple replicators, suggesting that parasitic agents are likely to be a major evolutionary driver for biological complexity. ",How Turing parasites expand the computational landscape of digital life,1,"[""Can parasites be central to the evolution of complexity? They are often seen as byproducts of evolutionary change, but they can actually be crucial to expand the computational landscape of complexity. Here's our new paper with @brigan_raman on @sfiscience ""]",19,10,269
368,59,1239497859147456518,1066288106,Fabian Dablander,"New paper! We derive default Bayes factors for testing hypotheses on independent population variances: @donvdbergh @AlexanderLyNL @EJWagenmakers (1/6) We choose a prior so that the resulting Bayes factor fulfills a number of desiderata. This leads to a closed-form expression for the K = 1 and K = 2 group case. (2/6) It is arguably more intuitive to reason about standard deviations than about variances, and we provide practical examples from psychology and paleoanthropology (cool, right?). One may also elicit an informative prior rather than rely on defaults. We extend out Bayes factor to K > 2 groups and allow testing 'mixed' or 'informative' hypothesis (see left image). We provide practical examples from archeology (chupa pots!) and education. Our default Bayes factor generalizes a recently proposed automatic fractional Bayes factor (Böing-Messing & Mulder, 2018), yielding the exact same results when testing hypotheses about (in)equality when choosing a rather wide prior \alpha = 0.50. All of this is implemented in the R package 'bfvartest', which is available from Github (). Give it a go! Of course, this will also be implemented in @JASPStats for ease of use :-) ",https://arxiv.org/abs/2003.06278,"Testing the (in)equality of variances is an important problem in many statistical applications. We develop default Bayes factor tests to assess the (in)equality of two or more population variances, as well as a test for whether the population variance equals a specific value. The resulting test can be used to check assumptions for commonly used procedures such as the $t$-test or ANOVA, or test substantive hypotheses concerning variances directly. We further extend the Bayes factor to allow $\mathcal{H}_0$ to have a null-region. Researchers may have directed hypotheses such as $\sigma_1^2 > \sigma_2^2$, or want to combine hypotheses about equality with hypotheses about inequality, for example $\sigma_1^2 = \sigma_2^2 > (\sigma_3^2, \sigma_4^2)$. We generalize our Bayes factor to accommodate such hypotheses for $K > 2$ groups. We show that our Bayes factor fulfills a number of desiderata, provide practical examples illustrating the method, and compare it to a recently proposed fractional Bayes factor procedure by B\""oing-Messing & Mulder (2018). Our procedure is implemented in the R package $bfvartest$. ","Default Bayes Factors for Testing the (In)equality of Several Population
Variances",6,"['New paper! We derive default Bayes factors for testing hypotheses on independent population variances: \n\n@donvdbergh @AlexanderLyNL @EJWagenmakers (1/6) ', 'We choose a prior so that the resulting Bayes factor fulfills a number of desiderata. This leads to a closed-form expression for the K = 1 and K = 2 group case. (2/6) https://t.co/5wpxZ167k9', 'It is arguably more intuitive to reason about standard deviations than about variances, and we provide practical examples from psychology and paleoanthropology (cool, right?). One may also elicit an informative prior rather than rely on defaults. https://t.co/Ml5PwJAPyq', ""We extend out Bayes factor to K > 2 groups and allow testing 'mixed' or 'informative' hypothesis (see left image). We provide practical examples from archeology (chupa pots!) and education. https://t.co/ritexKthmG"", 'Our default Bayes factor generalizes a recently proposed automatic fractional Bayes factor (Böing-Messing & Mulder, 2018), yielding the exact same results when testing hypotheses about (in)equality when choosing a rather wide prior \\alpha = 0.50. https://t.co/b0PcNKYhxz', ""All of this is implemented in the R package 'bfvartest', which is available from Github (https://t.co/jHuVZdQPaE). Give it a go! Of course, this will also be implemented in @JASPStats for ease of use :-) https://t.co/8pk63VkbkW""]",20,03,1240
369,74,1229620661586235392,338922968,Jess Thorne,"New Paper Time! My supervisor Aaron Robotham @ICRAR wrote a new spectral energy distribution fitting tool called ProSpect and the paper has just been put on the #astroph #arXiv! see: (with @CDPLagos @SabineBellstedt @astrowelshluke) ProSpect combines stellar models, dust models, and an AGN model to allow for the generation of spectral energy distributions and the recovery of star formation and metallicity histories of galaxies. Here is a graphic I made for the paper showing how we model galaxy components! ProSpect allows the user to pick almost any reasonable star formation history to fit for including parametric and non-parametric options. Here are just a few of the options provided, but the user is also able to provide their own. ProSpect also includes the ability to model an evolving metallicity using a number of different models. This allows for more thorough modeling of the evolution of galaxies! What is really cool is that we can take star formation histories from semi-analytic models such as SHARK (@CDPLagos) and calculate their spectral energy distributions, then re-fit them using ProSpect to test whether we can re-create star formation histories. The answer is yes! Overall, ProSpect is able to extract similar stellar masses and star formation rates to those extracted from MAGPHYS! This means that we're not significantly biased in our recovery of stellar masses! There's heaps of exciting science to be done with ProSpect which will form the basis of the next three years of my PhD so watch this space! @astro_jje It is!",https://arxiv.org/abs/2002.06980,"We introduce ProSpect, a generative galaxy spectral energy distribution (SED) package that encapsulates the best practices for SED methodologies in a number of astrophysical domains. ProSpect comes with two popular families of stellar population libraries (BC03 and EMILES), and a large variety of methods to construct star formation and metallicity histories. It models dust through the use of a Charlot & Fall attenuation model, with re-emission using Dale far-infrared templates. It also has the ability to model AGN through the inclusion of a simple AGN and hot torus model. Finally, it makes use of MAPPINGS-III photoionisation tables to produce line emission features. We test the generative and inversion utility of ProSpect through application to the Shark galaxy formation semi-analytic code, and informed by these results produce fits to the final ultraviolet to far-infrared photometric catalogues produces by the Galaxy and Mass Assembly Survey (GAMA). As part of the testing of ProSpect, we also produce a range of simple photometric stellar mass approximations covering a range of filters for both observed frame and rest frame photometry. ","ProSpect: Generating Spectral Energy Distributions with Complex Star
Formation and Metallicity Histories",8,"['New Paper Time! My supervisor Aaron Robotham @ICRAR wrote a new spectral energy distribution fitting tool called ProSpect and the paper has just been put on the #astroph #arXiv! see: (with @CDPLagos @SabineBellstedt @astrowelshluke) ', 'ProSpect combines stellar models, dust models, and an AGN model to allow for the generation of spectral energy distributions and the recovery of star formation and metallicity histories of galaxies. Here is a graphic I made for the paper showing how we model galaxy components! https://t.co/HIXv3uTW7X', 'ProSpect allows the user to pick almost any reasonable star formation history to fit for including parametric and non-parametric options. Here are just a few of the options provided, but the user is also able to provide their own. https://t.co/uwsKeN4rgH', 'ProSpect also includes the ability to model an evolving metallicity using a number of different models. This allows for more thorough modeling of the evolution of galaxies! https://t.co/U6yfHguRPR', 'What is really cool is that we can take star formation histories from semi-analytic models such as SHARK (@CDPLagos) and calculate their spectral energy distributions, then re-fit them using ProSpect to test whether we can re-create star formation histories. The answer is yes! https://t.co/SJy1kSQ9Wb', ""Overall, ProSpect is able to extract similar stellar masses and star formation rates to those extracted from MAGPHYS! This means that we're not significantly biased in our recovery of stellar masses! https://t.co/WAV4RhKdNZ"", ""There's heaps of exciting science to be done with ProSpect which will form the basis of the next three years of my PhD so watch this space!"", '@astro_jje It is!']",20,02,1599
370,1,1491000370033745924,1230320689250553856,Christian Malacaria,I'm so proud of our new paper now accepted on #ApJ! A fleet of X-ray telescopes #NICER @NuSTAR_Science @NASASwift @AstroSat3 in collaboration w/ #XMAG colleagues @BIGfalke @pragati2707 et many al.! Come&see how it feels when it's #accretingontheedge: #XMAG: @ColleenWilsonH @LapaSokolova @oznerold,https://arxiv.org/abs/2201.11376,"Accreting X-ray pulsars (XRPs) undergo luminous X-ray outbursts during which the luminosity-dependent spectral and timing features of the neutron star's emission can be analyzed in detail, thus shedding light on the accretion regime at work. We took advantage of a monitoring campaign performed with NuSTAR, Swift/XRT, AstroSat and NICER, to follow the Be/X-ray Binary 2S 1553-542 along one of its rare outbursts and trace its spectral and timing evolution. We report the discovery of a luminosity-dependent cyclotron line energy for the first time in this source. The pulse profiles and pulsed fraction also show variability along the outburst, consistently with the interpretation that the source transitions from the sub-critical to the super-critical accretion regime, separated by a critical luminosity of L$_{crit}\approx4\times10^{37}$ erg/s. ","Accreting on the edge: a luminosity-dependent cyclotron line in the
Be/X-ray Binary 2S 1553-542 accompanied by accretion regimes transition",2,"[""I'm so proud of our new paper now accepted on #ApJ!\n\nA fleet of X-ray telescopes #NICER @NuSTAR_Science @NASASwift @AstroSat3 in collaboration w/ #XMAG colleagues @BIGfalke @pragati2707 et many al.!\n\nCome&see how it feels when it's #accretingontheedge:\n\n "", '#XMAG: @ColleenWilsonH @LapaSokolova @oznerold']",22,01,311
371,28,1196804810173042688,97707247,Gautam Kamath,"New paper on arXiv: Random Restrictions of High-Dimensional Distributions and Uniformity Testing with Subcube Conditioning. With Clément L. Canonne (@ccanonne_), Xi Chen, Amit Levi, and Erik Waingarten. Check out this thread for a summary 1/n Say you have a distribution over a discrete support, and you want to test if it's uniform. Birthday paradox says you need sqrt(domain size) samples, and this is achievable. This is sublinear (great), but in high dimensions, the domain is exponentially large (not great) 2/n Enter the conditional sampling model. You can ask for samples, conditioned on being from some subset of the domain. Canonne, Ron, and Servedio showed that uniformity can be tested with a number of samples independent of the domain size! But the subsets queried may be complex 3/n We study the problem in a weaker model, where you can only condition on ""subcubes"" of the domain. We show the a nearly optimal upper bound of ~sqrt(dimension). That is, sqrt(log(domain size)), and matching the complexity of testing product distributions with vanilla samples 4/n The main technical tool is a structural statement, lower bounding the mean distance after a random restriction by the total variation distance after a random projection. Proved via a ""robust"" (a la Khot-Minzer-Safra) Pisier inequality 5/n A useful subroutine of independent interest is for testing whether a distribution on the hypercube is uniform, or has mean far from zero. Can be done with same ~sqrt(dimension) complexity, generalizing previous results with the same complexity for product distributions. 6/6",https://arxiv.org/abs/1911.07357,"We give a nearly-optimal algorithm for testing uniformity of distributions supported on $\{-1,1\}^n$, which makes $\tilde O (\sqrt{n}/\varepsilon^2)$ queries to a subcube conditional sampling oracle (Bhattacharyya and Chakraborty (2018)). The key technical component is a natural notion of random restriction for distributions on $\{-1,1\}^n$, and a quantitative analysis of how such a restriction affects the mean vector of the distribution. Along the way, we consider the problem of mean testing with independent samples and provide a nearly-optimal algorithm. ","Random Restrictions of High-Dimensional Distributions and Uniformity
Testing with Subcube Conditioning",6,"['New paper on arXiv: Random Restrictions of High-Dimensional Distributions and Uniformity Testing with Subcube Conditioning. With \nClément L. Canonne (@ccanonne_),\xa0Xi Chen,\xa0Amit Levi, and\xa0Erik Waingarten. Check out this thread for a summary 1/n ', ""Say you have a distribution over a discrete support, and you want to test if it's uniform. Birthday paradox says you need sqrt(domain size) samples, and this is achievable. This is sublinear (great), but in high dimensions, the domain is exponentially large (not great) 2/n"", 'Enter the conditional sampling model. You can ask for samples, conditioned on being from some subset of the domain. Canonne, Ron, and Servedio showed that uniformity can be tested with a number of samples independent of the domain size! But the subsets queried may be complex 3/n', 'We study the problem in a weaker model, where you can only condition on ""subcubes"" of the domain. We show the a nearly optimal upper bound of ~sqrt(dimension). That is, sqrt(log(domain size)), and matching the complexity of testing product distributions with vanilla samples 4/n', 'The main technical tool is a structural statement, lower bounding the mean distance after a random restriction by the total variation distance after a random projection. Proved via a ""robust"" (a la Khot-Minzer-Safra) Pisier inequality 5/n', 'A useful subroutine of independent interest is for testing whether a distribution on the hypercube is uniform, or has mean far from zero. Can be done with same ~sqrt(dimension) complexity, generalizing previous results with the same complexity for product distributions. 6/6']",19,11,1603
372,140,1237644677828022274,3232627976,Khyati Malhan,"""Measuring the Matter Density of the Galactic Disk Using Stellar Streams"" today on arXiv by our ""Galaxy & Gaia"" group @TheOKC . With A. Widmark, @PFdeSalas and S. Sivertsson, we propose a new technique to infer disk surface density using star streams. ",https://arxiv.org/abs/2003.04318,"We present a novel method for determining the total matter surface density of the Galactic disk by analysing the kinematics of a dynamically cold stellar stream that passes through or close to the Galactic plane. The method relies on the fact that the vertical component of energy for such stream stars is approximately constant, such that their vertical positions and vertical velocities are interrelated via the matter density of the Galactic disk. By testing our method on mock data stellar streams, with realistic phase-space dispersions and Gaia uncertainties, we demonstrate that it is applicable to small streams out to a distance of a few kilo-parsec, and that the surface density of the disk can be determined to a precision of 6 %. This method is complementary to other mass measurements. In particular, it does not rely on any equilibrium assumption for stars in the Galactic disk, and also makes it possible to measure the surface density to good precision at large distances from the Sun. Such measurements would inform us of the matter composition of the Galactic disk and its spatial variation, place stronger constraints on dark disk sub-structure, and even diagnose possible non-equilibrium effects that bias other types of dynamical mass measurements. ",Measuring the Matter Density of the Galactic Disk Using Stellar Streams,1,"['""Measuring the Matter Density of the Galactic Disk Using Stellar Streams"" today on arXiv by our ""Galaxy & Gaia"" group @TheOKC . \nWith A. Widmark, @PFdeSalas and S. Sivertsson, we propose a new technique to infer disk surface density using star streams. ']",20,03,265
373,112,1490609016917172224,2969696397,Ion Nechita,"New paper with @MariaJivulescu and @TeikoHeinosaari about the post-processing order of quantum measurements. We analyze this fundamental partial order, which lies at the heart of incompatibility of quantum measurements @TeikoHeinosaari has written a very nice post about it ",https://arxiv.org/abs/2202.00725,"We study the partially ordered set of equivalence classes of quantum measurements endowed with the post-processing partial order. The post-processing order is fundamental as it enables to compare measurements by their intrinsic noise and it gives grounds to define the important concept of quantum incompatibility. Our approach is based on mapping this set into a simpler partially ordered set using an order preserving map and investigating the resulting image. The aim is to ignore unnecessary details while keeping the essential structure, thereby simplifying e.g. detection of incompatibility. One possible choice is the map based on Fisher information introduced by Huangjun Zhu, known to be an order morphism taking values in the cone of positive semidefinite matrices. We explore the properties of that construction and improve Zhu's incompatibility criterion by adding a constraint depending on the number of measurement outcomes. We generalize this type of construction to other ordered vector spaces and we show that this map is optimal among all quadratic maps. ",Order preserving maps on quantum measurements,2,"['New paper with @MariaJivulescu and @TeikoHeinosaari about the post-processing order of quantum measurements. We analyze this fundamental partial order, which lies at the heart of incompatibility of quantum measurements', '@TeikoHeinosaari has written a very nice post about it https://t.co/EYAGkY6hUn https://t.co/V2nkfjxb5m']",22,02,294
374,52,1418222057704263680,1199153585013055489,Katiana Kontolati,"Interested in constructing surrogates for very high-dimensional models? đCheck out our new paper on arXiv: #AcademicTwitter #AcademicChatter #JohnsHopkins We construct polynomial chaos expansion surrogates on subspace manifolds by utilizing manifold learning techniques and propose an encoder-decoder framework that leads to a significant acceleration of UQ tasks in complex systems. đGithub repo: @mistrigr Thank you! đâșïž @KaustavBera11 Thank you, Kaustav! đ",https://arxiv.org/abs/2107.09814,"In this work we introduce a manifold learning-based method for uncertainty quantification (UQ) in systems describing complex spatiotemporal processes. Our first objective is to identify the embedding of a set of high-dimensional data representing quantities of interest of the computational or analytical model. For this purpose, we employ Grassmannian diffusion maps, a two-step nonlinear dimension reduction technique which allows us to reduce the dimensionality of the data and identify meaningful geometric descriptions in a parsimonious and inexpensive manner. Polynomial chaos expansion is then used to construct a mapping between the stochastic input parameters and the diffusion coordinates of the reduced space. An adaptive clustering technique is proposed to identify an optimal number of clusters of points in the latent space. The similarity of points allows us to construct a number of geometric harmonic emulators which are finally utilized as a set of inexpensive pre-trained models to perform an inverse map of realizations of latent features to the ambient space and thus perform accurate out-of-sample predictions. Thus, the proposed method acts as an encoder-decoder system which is able to automatically handle very high-dimensional data while simultaneously operating successfully in the small-data regime. The method is demonstrated on two benchmark problems and on a system of advection-diffusion-reaction equations which model a first-order chemical reaction between two species. In all test cases, the proposed method is able to achieve highly accurate approximations which ultimately lead to the significant acceleration of UQ tasks. ","Manifold learning-based polynomial chaos expansions for high-dimensional
surrogate models",4,"['Interested in constructing surrogates for very high-dimensional models? đCheck out our new paper on arXiv: \n\n#AcademicTwitter #AcademicChatter #JohnsHopkins ', 'We construct polynomial chaos expansion surrogates on subspace manifolds by utilizing manifold learning techniques and propose an encoder-decoder framework that leads to a significant acceleration of UQ tasks in complex systems.\nđGithub repo: https://t.co/kFqg7dPHp8', '@mistrigr Thank you! đâșïž', '@KaustavBera11 Thank you, Kaustav! đ']",21,07,480
375,165,1311588057783635969,366131267,Nisha katyal,Sharing our most recent work connecting the planetary interiors and atmospheres: We study the influence of mantle redox state on the atmospheric development and provides useful insights to inform the future missions looking for magma ocean planets. The paper is recently accepted to A&A @tonhingm Thanks for being a really helpful co-author,http://arxiv.org/abs/2009.14599,"The magma ocean period was a critical phase determining how Earth atmosphere developed into habitability. However there are major uncertainties in the role of key processes such as outgassing from the planetary interior and escape of species to space that play a major role in determining the atmosphere of early Earth. We investigate the influence of outgassing of various species and escape of H$_2$ for different mantle redox states upon the composition and evolution of the atmosphere for the magma ocean period. We include an important new atmosphere-interior coupling mechanism namely the redox evolution of the mantle which strongly affects the outgassing of species. We simulate the volatile outgassing and chemical speciation at the surface for various redox states of the mantle by employing a C-H-O based chemical speciation model combined with an interior outgassing model. We then apply a line-by-line radiative transfer model to study the remote appearance of the planet in terms of the infrared emission and transmission. Finally, we use a parameterized diffusion-limited and XUV energy-driven atmospheric escape model to calculate the loss of H$_2$ to space. We have simulated the thermal emission and transmission spectra for reduced or oxidized atmospheres present during the magma ocean period of Earth. Reduced or thin atmospheres consisting of H$_2$ in abundance emit more radiation to space and have larger effective height as compared to oxidized or thick atmospheres which are abundant in H$_2$O and CO$_2$. We obtain the outgassing rates of H2 from the mantle into the atmosphere to be a factor of ten times larger than the rates of diffusion-limited escape to space. Our work presents useful insight into the development of Earth atmosphere during the magma ocean period as well as input to guide future studies discussing exoplanetary interior compositions. ","Effect of mantle oxidation state and escape upon the evolution of
Earth's magma ocean atmosphere",3,"['Sharing our most recent work connecting the planetary interiors and atmospheres:\n\nWe study the influence of mantle redox state on the atmospheric development and provides useful insights to inform the future missions looking for magma ocean planets.', 'The paper is recently accepted to A&A', '@tonhingm Thanks for being a really helpful co-author']",20,09,347
376,198,1356563822216110084,1023681782363901952,Michal P. Heller,"We're very happy to share our group's latest collaborative @_arXiv_hep_th preprint, where we study the long-distance behaviour of #Entanglement of purification in CFTs and find interesting universal results. Check it out here: [] #GQFI #QuantumInformation ",https://arxiv.org/abs/2102.00013,"Quantifying entanglement properties of mixed states in quantum field theory via entanglement of purification and reflected entropy is a new and challenging subject. In this work, we study both quantities for two spherical subregions far away from each other in the vacuum of a conformal field theory in any number of dimensions. Using lattice techniques, we find an elementary proof that the decay of both, the entanglement of purification and reflected entropy, is enhanced with respect to the mutual information behaviour by a logarithm of the distance between the subregions. In the case of the Ising spin chain at criticality and the related free fermion conformal field theory, we compute also the overall coefficients numerically for the both quantities of interest. ","Long-distance entanglement of purification and reflected entropy in
conformal field theory",1,"[""We're very happy to share our group's latest collaborative @_arXiv_hep_th preprint, where we study the long-distance behaviour of #Entanglement of purification in CFTs and find interesting universal results. Check it out here: [] #GQFI #QuantumInformation ""]",21,02,268
377,7,1378103690221998080,899120403649404928,Sam Cree,"New paper! To build a good quantum computer, we need to encode information so that it can be processed without getting corrupted. We found that certain codes inspired by quantum gravity only allow very limited kinds of fault-tolerant processing. đ»âïžđ€ 1/4 Context: theorists have been using holographic codes as toy models of quantum gravity for the last few years. They do a decent job at protecting information from errors/environmental noise, but their usefulness for fault-tolerant quantum computing is unclear. 2/4 Our contribution: The kinds of fault-tolerant operations you can do most easily in holographic codes are very restricted - you can't implement non-Clifford gates, which are often the ""magic ingredient"" in universal fault-tolerant quantum computation strategies. 3/4 So holographic codes might yet find a practical application for quantum computing, but we've ruled out the most straightforward way we might have hoped for. 4/4 Thanks to coauthors, @DomJWilliamson, @KfirDolev and Vlad Calvera #quantph #quantum #scicomm #sciencetwitter #arxiv",https://arxiv.org/abs/2103.13404,"We evaluate the usefulness of holographic stabilizer codes for practical purposes by studying their allowed sets of fault-tolerantly implementable gates. We treat them as subsystem codes and show that the set of transversally implementable logical operations is contained in the Clifford group for sufficiently localized logical subsystems. As well as proving this concretely for several specific codes, we argue that this restriction naturally arises in any stabilizer subsystem code that comes close to capturing certain properties of holography. We extend these results to approximate encodings, locality-preserving gates, certain codes whose logical algebras have non-trivial centers, and discuss cases where restrictions can be made to other levels of the Clifford hierarchy. A few auxiliary results may also be of interest, including a general definition of entanglement wedge map for any subsystem code, and a thorough classification of different correctability properties for regions in a subsystem code. ","Fault-tolerant logical gates in holographic stabilizer codes are
severely restricted",4,"['New paper! To build a good quantum computer, we need to encode information so that it can be processed without getting corrupted. We found that certain codes inspired by quantum gravity only allow very limited kinds of fault-tolerant processing.\nđ»âïžđ€\n\n1/4 ', 'Context: theorists have been using holographic codes as toy models of quantum gravity for the last few years. They do a decent job at protecting information from errors/environmental noise, but their usefulness for fault-tolerant quantum computing is unclear.\n2/4', 'Our contribution: The kinds of fault-tolerant operations you can do most easily in holographic codes are very restricted - you can\'t implement non-Clifford gates, which are often the ""magic ingredient"" in universal fault-tolerant quantum computation strategies.\n3/4', ""So holographic codes might yet find a practical application for quantum computing, but we've ruled out the most straightforward way we might have hoped for.\n4/4\n\nThanks to coauthors, @DomJWilliamson, @KfirDolev\nand Vlad Calvera\n#quantph #quantum #scicomm #sciencetwitter #arxiv""]",21,03,1075
378,17,1365057786250489858,1882939814,Sarah Wiegreffe,"Happy to share our new preprint (with @anmarasovic) “Teach Me to Explain: A Review of Datasets for Explainable NLP” Paper: Website: It’s half survey, half reflections for more standardized ExNLP dataset collection. Highlights: 1/6 We focus on datasets of the form: (inputs, labels, explanations). We describe these instance-wise explanations as “explaining human decisions” (the labels). Other types of explanations may explain something about the world. We focus on the shaded area. 2/6 We identify 3 major classes of explanation datasets: highlights, free-text, and structured explanations. Each one is surveyed in a table like this (structured explanations here). >95% of the datasets are collected using human annotation, and >70% use crowdsourcing. 3/6 We discuss how constraints placed on data annotation can influence modeling and evaluation, and suggest the use of datasheets to make collection decisions more transparent. Also discuss structure that sometimes emerges free-text rationales and what to do about it. 4/6 Finally, we synthesize crowdsourcing methods from NLP/ HCI literature on improving quality+diversity of ExNLP datasets, such as using a crowd editing stage, collecting large set of explanations per instance, and using a diverse set of crowdworkers to avoid annotator bias. 5/6 Please submit an issue or PR to our Github repository (linked through website) to add or edit characteristics of datasets! We are open to feedback. 6/6",https://arxiv.org/abs/2102.12060,"Explainable NLP (ExNLP) has increasingly focused on collecting human-annotated textual explanations. These explanations are used downstream in three ways: as data augmentation to improve performance on a predictive task, as supervision to train models to produce explanations for their predictions, and as a ground-truth to evaluate model-generated explanations. In this review, we identify 65 datasets with three predominant classes of textual explanations (highlights, free-text, and structured), organize the literature on annotating each type, identify strengths and shortcomings of existing collection methodologies, and give recommendations for collecting ExNLP datasets in the future. ","Teach Me to Explain: A Review of Datasets for Explainable Natural
Language Processing",6,"['Happy to share our new preprint (with @anmarasovic) “Teach Me to Explain: A Review of Datasets for Explainable NLP”\nPaper: \nWebsite: \n\nIt’s half survey, half reflections for more standardized ExNLP dataset collection. Highlights:\n\n1/6', 'We focus on datasets of the form: (inputs, labels, explanations). We describe these instance-wise explanations as “explaining human decisions” (the labels). Other types of explanations may explain something about the world. We focus on the shaded area.\n\n2/6 https://t.co/EUxVH09c8U', 'We identify 3 major classes of explanation datasets: highlights, free-text, and structured explanations. Each one is surveyed in a table like this (structured explanations here).\n\n>95% of the datasets are collected using human annotation, and >70% use crowdsourcing.\n\n3/6 https://t.co/jduqxNq174', 'We discuss how constraints placed on data annotation can influence modeling and evaluation, and suggest the use of datasheets to make collection decisions more transparent.\n\nAlso discuss structure that sometimes emerges free-text rationales and what to do about it.\n\n4/6 https://t.co/G6u3nvQrrC', 'Finally, we synthesize crowdsourcing methods from NLP/ HCI literature on improving quality+diversity of ExNLP datasets, such as using a crowd editing stage, collecting large set of explanations per instance, and using a diverse set of crowdworkers to avoid annotator bias.\n\n5/6', 'Please submit an issue or PR to our Github repository (linked through website) to add or edit characteristics of datasets! We are open to feedback.\n\n6/6']",21,02,1496
379,44,1452818407334817796,750009830786605057,"Michele L. Silverstein, Ph.D. (she/her/hers)","New paper! The LHS 1678 exoplanet system includes an M dwarf in the Gaia/Jao Gap, 2 confirmed small planets discovered by TESS (1 ultra-short period & 1 Venus-zone), a sneaky brown dwarf w/ a decades-long orbit, and a candidate 3rd planet in 4:3 resonance! One of the coolest things about this system, in my opinion, is that in the past, the star likely expanded & contracted over billions of years. We know this bc of its association with a gap in the HR diagram. We don't know how this affected the how the planets' formed or evolved! LHS 1678 is arguably the only TESS planet host so far clearly in the gap. Until our LHS 1678 paper, no one had published a paper raising the issue of how this relates to exoplanets (to my knowledge). For reference, here's the Gaia/Jao gap discovery paper I don't use twitter a whole lot, and it's late here. So I will stop here for now. But I think every member of this system is exciting for a different reason, and I hope you will feel compelled to look at the paper and see so for yourself. :)",https://arxiv.org/abs/2110.12079,"We present the TESS discovery of the LHS 1678 (TOI-696) exoplanet system, comprised of two approximately Earth-sized transiting planets and a likely astrometric brown dwarf orbiting a bright ($V_J$=12.5, $K_s$=8.3) M2 dwarf at 19.9 pc. The two TESS-detected planets are of radius 0.70$\pm$0.04 $R_\oplus$ and 0.98$\pm$0.06 $R_\oplus$ in 0.86-day and 3.69-day orbits, respectively. Both planets are validated and characterized via ground-based follow-up observations. HARPS RV monitoring yields 97.7 percentile mass upper limits of 0.35 $M_\oplus$ and 1.4 $M_\oplus$ for planets b and c, respectively. The astrometric companion detected by the CTIO/SMARTS 0.9m has an orbital period on the order of decades and is undetected by other means. Additional ground-based observations constrain the companion to being a high-mass brown dwarf or smaller. Each planet is of unique interest; the inner planet has an ultra-short period, and the outer planet is in the Venus zone. Both are promising targets for atmospheric characterization with the JWST and mass measurements via extreme-precision radial velocity. A third planet candidate of radius 0.9$\pm$0.1 $R_\oplus$ in a 4.97-day orbit is also identified in multi-Cycle TESS data for validation in future work. The host star is associated with an observed gap in the lower main sequence of the Hertzsprung-Russell diagram. This gap is tied to the transition from partially- to fully-convective interiors in M dwarfs, and the effect of the associated stellar astrophysics on exoplanet evolution is currently unknown. The culmination of these system properties makes LHS 1678 a unique, compelling playground for comparative exoplanet science and understanding the formation and evolution of small, short-period exoplanets orbiting low-mass stars. ","The LHS 1678 System: Two Earth-Sized Transiting Planets and an
Astrometric Companion Orbiting an M Dwarf Near the Convective Boundary at 20
pc",4,"['New paper! The LHS 1678 exoplanet system includes an M dwarf in the Gaia/Jao Gap, 2 confirmed small planets discovered by TESS (1 ultra-short period & 1 Venus-zone), a sneaky brown dwarf w/ a decades-long orbit, and a candidate 3rd planet in 4:3 resonance! ', ""One of the coolest things about this system, in my opinion, is that in the past, the star likely expanded & contracted over billions of years. We know this bc of its association with a gap in the HR diagram. We don't know how this affected the how the planets' formed or evolved!"", ""LHS 1678 is arguably the only TESS planet host so far clearly in the gap. Until our LHS 1678 paper, no one had published a paper raising the issue of how this relates to exoplanets (to my knowledge). For reference, here's the Gaia/Jao gap discovery paper https://t.co/KFnbMEQSBs"", ""I don't use twitter a whole lot, and it's late here. So I will stop here for now. But I think every member of this system is exciting for a different reason, and I hope you will feel compelled to look at the paper and see so for yourself. :)""]",21,10,1047
380,152,1231863479293874177,57640264,Ariane Nunes Alves,".@Rebecca_Wade_C , @DariaKokh and I wrote a review of computational methods to study ligand-protein binding kinetics. We assessed the performance of methods considering two benchmark systems, T4 lysozyme (T4L) and N-HSP90. (1/3) @HITStudies #compchem @Rebecca_Wade_C @DariaKokh @HITStudies The figure below shows the times required for different methods to simulate complexes with different exp. residence times (RT). In the last two years many methods to compute relative RT were published. An increase of four orders of magnitude in exp. RT leads to a smaller, (2/3) @Rebecca_Wade_C @DariaKokh @HITStudies one order of magnitude increase in comp. time. For T4L, results indicate that good RT estimation can be achieved without exhaustive path sampling so long as the most probable paths are sampled (table below). Another highlight: two works aiming at computing RT prospectively. (3/3) @_Maicol_ @Rebecca_Wade_C @DariaKokh @HITStudies thanks! đ @sowmyaindrakum1 @Rebecca_Wade_C @DariaKokh @HITStudies what i showed you in your visit is related, but not the same thing. @Jmondal_tifrh @Rebecca_Wade_C @DariaKokh @HITStudies thanks! đ @Herr_Flow @Rebecca_Wade_C @DariaKokh @HITStudies you are welcome! đ is the arXiv paper published? send me the link, so I can correct the citation later. our work is under review on COSB right now.",https://arxiv.org/abs/2002.08983,"Due to the contribution of drug-target binding kinetics to drug efficacy, there is a high level of interest in developing methods to predict drug-target binding kinetic parameters. During the review period, a wide range of enhanced sampling molecular dynamics simulation-based methods has been developed for computing drug-target binding kinetics and studying binding and unbinding mechanisms. Here, we assess the performance of these methods considering two benchmark systems in detail: mutant T4 lysozyme-ligand complexes and a large set of N-HSP90-inhibitor complexes. The results indicate that some of the simulation methods can already be usefully applied in drug discovery or lead optimization programs but that further studies on more high-quality experimental benchmark datasets are necessary to improve and validate computational methods. ","Recent progress in molecular simulation methods for drug binding
kinetics",7,"['.@Rebecca_Wade_C , @DariaKokh and I wrote a review of computational methods to study ligand-protein binding kinetics. We assessed the performance of methods considering two benchmark systems, T4 lysozyme (T4L) and N-HSP90. (1/3)\n@HITStudies #compchem\n ', '@Rebecca_Wade_C @DariaKokh @HITStudies The figure below shows the times required for different methods to simulate complexes with different exp. residence times (RT). In the last two years many methods to compute relative RT were published. An increase of four orders of magnitude in exp. RT leads to a smaller, (2/3) https://t.co/xfMw29mTgi', '@Rebecca_Wade_C @DariaKokh @HITStudies one order of magnitude increase in comp. time.\nFor T4L, results indicate that good RT estimation can be achieved without exhaustive path sampling so long as the most probable paths are sampled (table below).\nAnother highlight: two works aiming at computing RT prospectively. (3/3) https://t.co/Eaom4mwQPN', '@_Maicol_ @Rebecca_Wade_C @DariaKokh @HITStudies thanks!\nđ', '@sowmyaindrakum1 @Rebecca_Wade_C @DariaKokh @HITStudies what i showed you in your visit is related, but not the same thing.', '@Jmondal_tifrh @Rebecca_Wade_C @DariaKokh @HITStudies thanks!\nđ', '@Herr_Flow @Rebecca_Wade_C @DariaKokh @HITStudies you are welcome!\nđ\nis the arXiv paper published? send me the link, so I can correct the citation later. our work is under review on COSB right now.']",20,02,1361
381,113,1370563872168374275,1665897810,Kevin Frans,"New paper on how Population-based Evolution is a natural meta-learning algorithm! With @okw at @crosslabstokyo @okw @crosslabstokyo Basic idea: In evolutionary algorithms, a strong gene is a gene which survives for many generations. Thus, strong genes should grant fitness not only to an individual but also to all of the individual's offspring. This means that in non-stationary environments, genes that increase the *adaptive ability* of a genome are naturally selected for. Over time, the learning ability of a population increases. This perspective can help explain why biological systems are so adaptable, or why languages are so robust -- with large populations, evolution naturally selects for systems that are good at adapting to new tasks.",https://arxiv.org/abs/2103.06435,"Meta-learning models, or models that learn to learn, have been a long-desired target for their ability to quickly solve new tasks. Traditional meta-learning methods can require expensive inner and outer loops, thus there is demand for algorithms that discover strong learners without explicitly searching for them. We draw parallels to the study of evolvable genomes in evolutionary systems -- genomes with a strong capacity to adapt -- and propose that meta-learning and adaptive evolvability optimize for the same objective: high performance after a set of learning iterations. We argue that population-based evolutionary systems with non-static fitness landscapes naturally bias towards high-evolvability genomes, and therefore optimize for populations with strong learning ability. We demonstrate this claim with a simple evolutionary algorithm, Population-Based Meta Learning (PBML), that consistently discovers genomes which display higher rates of improvement over generations, and can rapidly adapt to solve sparse fitness and robotic control tasks. ",Population-Based Evolution Optimizes a Meta-Learning Objective,4,"['New paper on how Population-based Evolution is a natural meta-learning algorithm! With @okw at @crosslabstokyo ', ""@okw @crosslabstokyo Basic idea: In evolutionary algorithms, a strong gene is a gene which survives for many generations. Thus, strong genes should grant fitness not only to an individual but also to all of the individual's offspring."", 'This means that in non-stationary environments, genes that increase the *adaptive ability* of a genome are naturally selected for. Over time, the learning ability of a population increases.', 'This perspective can help explain why biological systems are so adaptable, or why languages are so robust -- with large populations, evolution naturally selects for systems that are good at adapting to new tasks.']",21,03,755
382,121,1512458719925264386,1362475407874867200,Max Hunter Gordon,"New paper out today! We show how to prepare the covariance matrix on a quantum computer to do quantum principal component analysis: use an ensemble average density matrix! Thanks to @MvsCerezo , @LCincio and @ColesQuantum for a great collaboration! 1/n Principal component analysis (PCA) is a common technique in data analysis and machine learning. Our work provides a simple method to prepare the covariance matrix on a quantum computer, a key step in quantum PCA, using the ensemble average density matrix. 2/n We find our method is equivalent to doing “PCA without centering” which we interpret as doing PCA on a symmetrized dataset. We rigorously analyze the difference between these two approaches. 3/n We argue that PCA is natural on quantum data (with complex amplitudes) and in this case there is no difference between PCA and PCA on symmetrized data! For classical datasets one can bound the deviation of the spectrum obtained with our method from that of standard PCA. 4/n To explore the performance of our method on classical data we do PCA on the MNIST data set. We find that our method for constructing the covariance matrix to calculate the principal components gives practically equivalent results to conventional PCA. 5/n We also look at how our approach performs on doing PCA of H2 and BeH2 ground states and make some pretty plots! Our work paves the way for the implementation of quantum PCA on near-term quantum computers. 6/n ",https://arxiv.org/abs/2204.03495,"Principal component analysis (PCA) is a dimensionality reduction method in data analysis that involves diagonalizing the covariance matrix of the dataset. Recently, quantum algorithms have been formulated for PCA based on diagonalizing a density matrix. These algorithms assume that the covariance matrix can be encoded in a density matrix, but a concrete protocol for this encoding has been lacking. Our work aims to address this gap. Assuming amplitude encoding of the data, with the data given by the ensemble $\{p_i,| \psi_i \rangle\}$, then one can easily prepare the ensemble average density matrix $\overline{\rho} = \sum_i p_i |\psi_i\rangle \langle \psi_i |$. We first show that $\overline{\rho}$ is precisely the covariance matrix whenever the dataset is centered. For quantum datasets, we exploit global phase symmetry to argue that there always exists a centered dataset consistent with $\overline{\rho}$, and hence $\overline{\rho}$ can always be interpreted as a covariance matrix. This provides a simple means for preparing the covariance matrix for arbitrary quantum datasets or centered classical datasets. For uncentered classical datasets, our method is so-called ""PCA without centering"", which we interpret as PCA on a symmetrized dataset. We argue that this closely corresponds to standard PCA, and we derive equations and inequalities that bound the deviation of the spectrum obtained with our method from that of standard PCA. We numerically illustrate our method for the MNIST handwritten digit dataset. We also argue that PCA on quantum datasets is natural and meaningful, and we numerically implement our method for molecular ground-state datasets. ",Covariance matrix preparation for quantum principal component analysis,6,"['New paper out today!\n\nWe show how to prepare the covariance matrix on a quantum computer to do quantum principal component analysis: use an ensemble average density matrix! 
\nThanks to @MvsCerezo , @LCincio and @ColesQuantum for a great collaboration! 1/n', 'Principal component analysis (PCA) is a common technique in data analysis and machine learning.\nOur work provides a simple method to prepare the covariance matrix on a quantum computer, a key step in quantum PCA, using the ensemble average density matrix. 2/n https://t.co/Iufr3Nwmv5', 'We find our method is equivalent to doing “PCA without centering” which we interpret as doing PCA on a symmetrized dataset. We rigorously analyze the difference between these two approaches. 3/n https://t.co/KrNfnh0Ind', 'We argue that PCA is natural on quantum data (with complex amplitudes) and in this case there is no difference between PCA and PCA on symmetrized data! For classical datasets one can bound the deviation of the spectrum obtained with our method from that of standard PCA. 4/n https://t.co/ldiGfbeZA1', 'To explore the performance of our method on classical data we do PCA on the MNIST data set. We find that our method for constructing the covariance matrix to calculate the principal components gives practically equivalent results to conventional PCA. 5/n https://t.co/zfRpzaOtzs', 'We also look at how our approach performs on doing PCA of H2 and BeH2 ground states and make some pretty plots! \n\nOur work paves the way for the implementation of quantum PCA on near-term quantum computers. 6/n https://t.co/jASuN1F0LV']",22,04,1489
383,0,1249779748961767425,2785337469,Sebastian Ruder,"I'm excited to announce XTREME, a new benchmark that covers 9 tasks and 40 typologically diverse languages. Paper: Blog post: Code: XTREME evaluates models on their capability to do zero-shot cross-lingual transfer when fine-tuned on English. In the paper, we conduct experiments with several state-of-the-art models that have been pre-trained on large multilingual corpora. Overall, we find that there is a large gap to English and human performance and a lot of potential for improvement, particularly on syntactic tasks. See below for instance for the performance of XLM-R across tasks and languages. This work would have not been possible without the contributions of many amazing people including @JunjieHu12 @orf_bnw @gneubig Melvin, Aditya @dhgarrette @JonClarkSeattle and others. @gena_d Argh. The last letter of the website got lost during copy-pasting. đ
Here's the correct link: @boknilev Yes. For all but TyDiQA-GoldP, human performance is based on English. That comes with the obvious caveat that performance will differ across languages. IMO for datasets that are derived using translation (XNLI, PAWS-X, XQuAD), human perf should be very similar across languages.",https://arxiv.org/abs/2003.11080,"Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders XTREME benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks. ","XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating
Cross-lingual Generalization",6,"[""I'm excited to announce XTREME, a new benchmark that covers 9 tasks and 40 typologically diverse languages.\n\nPaper: \nBlog post: \nCode: "", 'XTREME evaluates models on their capability to do zero-shot cross-lingual transfer when fine-tuned on English. In the paper, we conduct experiments with several state-of-the-art models that have been pre-trained on large multilingual corpora. https://t.co/9X59Ht8v7m', 'Overall, we find that there is a large gap to English and human performance and a lot of potential for improvement, particularly on syntactic tasks. See below for instance for the performance of XLM-R across tasks and languages. https://t.co/b8jUjW4ufF', 'This work would have not been possible without the contributions of many amazing people including @JunjieHu12 @orf_bnw @gneubig Melvin, Aditya @dhgarrette @JonClarkSeattle and others.', ""@gena_d Argh. The last letter of the website got lost during copy-pasting.
Here's the correct link:\nhttps://t.co/S8RdKmhCsE"", '@boknilev Yes. For all but TyDiQA-GoldP, human performance is based on English. That comes with the obvious caveat that performance will differ across languages. IMO for datasets that are derived using translation (XNLI, PAWS-X, XQuAD), human perf should be very similar across languages.']",20,03,1227
384,94,1019125191207768064,426509606,Yamir Moreno,"In our last work, , we explore the block nature of the matrix representation of multiplex networks and study their spectral properties by formulating the corresponding polynomial eigenvalue problem. Work done with @GuiFdeArruda @FranciscoICMC @ecozzo @HirokiSayama @GuiFdeArruda @FranciscoICMC @ecozzo It is :-) and Thanks!",https://arxiv.org/abs/1807.05588,"We explore the block nature of the matrix representation of multiplex networks, introducing a new formalism to deal with its spectral properties as a function of the inter-layer coupling parameter. This approach allows us to derive interesting results based on an interpretation of the traditional eigenvalue problem. More specifically, we reduce the dimensionality of our matrices but increase the power of the characteristic polynomial, i.e, a polynomial eigenvalue problem. Such an approach may sound counterintuitive at first glance, but it allows us to relate the quadratic problem for a 2-Layer multiplex system with the spectra of the aggregated network and to derive bounds for the spectra, among many other interesting analytical insights. Furthermore, it also permits to directly obtain analytical and numerical insights on the eigenvalue behavior as a function of the coupling between layers. Our study includes the supra-adjacency, supra-Laplacian, and the probability transition matrices, which enable us to put our results under the perspective of structural phases in multiplex networks. We believe that this formalism and the results reported will make it possible to derive new results for multiplex networks in the future. ",A polynomial eigenvalue approach for multiplex networks,2,"['In our last work, , we explore the block nature of the matrix representation of multiplex networks\nand study their spectral properties by formulating the corresponding polynomial eigenvalue problem. Work done with @GuiFdeArruda @FranciscoICMC @ecozzo ', '@HirokiSayama @GuiFdeArruda @FranciscoICMC @ecozzo It is :-) and Thanks!']",18,07,336
385,76,1049693858240585734,174052756,Thiago Serra,New paper on approximating the linear regions of neural nets: We exploit neat tricks: - MIP formulations to find which units are locally stable - Approximate counting methods from SAT to get probabilistic LBs on the number of solutions of a MIP #orms #ml ,https://arxiv.org/abs/1810.03370,"We can compare the expressiveness of neural networks that use rectified linear units (ReLUs) by the number of linear regions, which reflect the number of pieces of the piecewise linear functions modeled by such networks. However, enumerating these regions is prohibitive and the known analytical bounds are identical for networks with same dimensions. In this work, we approximate the number of linear regions through empirical bounds based on features of the trained network and probabilistic inference. Our first contribution is a method to sample the activation patterns defined by ReLUs using universal hash functions. This method is based on a Mixed-Integer Linear Programming (MILP) formulation of the network and an algorithm for probabilistic lower bounds of MILP solution sets that we call MIPBound, which is considerably faster than exact counting and reaches values in similar orders of magnitude. Our second contribution is a tighter activation-based bound for the maximum number of linear regions, which is particularly stronger in networks with narrow layers. Combined, these bounds yield a fast proxy for the number of linear regions of a deep neural network. ",Empirical Bounds on Linear Regions of Deep Rectifier Networks,1,['New paper on approximating the linear regions of neural nets: \n\nWe exploit neat tricks:\n- MIP formulations to find which units are locally stable\n- Approximate counting methods from SAT to get probabilistic LBs on the number of solutions of a MIP\n#orms #ml '],18,10,268
386,59,1428383830055006211,29931309,Gillian Hadfield,"Critical new paper from @StanfordHAI on the massive AI models emerging largely inside private tech and beyond the current reach of oversight and regulation--a core focus also for @TorontoSRI working with @RockefellerFdn, @jackclarkSF and @MarietjeSchaake ",https://arxiv.org/abs/2108.07258,"AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles(e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities,and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature. ",On the Opportunities and Risks of Foundation Models,1,"['Critical new paper from @StanfordHAI on the massive AI models emerging largely inside private tech and beyond the current reach of oversight and regulation--a core focus also for @TorontoSRI working with @RockefellerFdn, @jackclarkSF and @MarietjeSchaake ']",21,08,261
387,46,1064550885873905665,1725428047,Jacques Carolan,"***AIRHORN*** new preprint alert TFW you think an experiment is going to be super quick, then it turns out to be really hard, then you write a paper about why it was so hard and how you solved it @Dirk_Englund #quantum #silicon #photonics #photons @RLEatMIT @MIT",https://arxiv.org/abs/1811.06557,"Large-scale quantum technologies require exquisite control over many individual quantum systems. Typically, such systems are very sensitive to environmental fluctuations, and diagnosing errors via measurements causes unavoidable perturbations. In this work we present an in situ frequency locking technique that monitors and corrects frequency variations in single photon sources based on microring resonators. By using the same classical laser fields required for photon generation as a probe to diagnose variations in the resonator frequency, our protocol applies feedback control to correct photon frequency errors in parallel to the optical quantum computation without disturbing the physical qubit. We implement our technique on a silicon photonic device and demonstrate sub 1 pm frequency stabilization in the presence of applied environmental noise, corresponding to a fractional frequency drift of <1 % of a photon linewidth. Using these methods we demonstrate feedback controlled quantum state engineering. By distributing a single local oscillator across a single chip or network of chips, our approach enables frequency locking of many single photon sources for large-scale photonic quantum technologies. ","Scalable feedback control of single photon sources for photonic quantum
technologies",2,"['***AIRHORN*** new preprint alert TFW you think an experiment is going to be super quick, then it turns out to be really hard, then you write a paper about why it was so hard and how you solved it @Dirk_Englund #quantum #silicon #photonics #photons', '@RLEatMIT @MIT']",18,11,272
388,90,1319458053629116418,112654363,Torsten Scholak,"Our new text-to-SQL paper is now on the ArXiv! The complexity of current systems that translate natural language to SQL queries is very high. Read up on how our fully transformer-based model enabled us to make things much simpler! I’m deeply indebted to my coworkers at @element_ai, namely Raymond Li, @DBahdanau, Harm de Vries, and @chrisjpal. It was truly a team effort! We published the code, too: . @ipvkyte I won’t deny that I have thought about doing that
@ipvkyte I very much agree, I’m working on a port of @PyTorch to Haskell also because I think the language is a great fit for neural program synthesis. @ipvkyte @PyTorch Thanks! Indeed, we’ve been struggling with the GC a bit, and there’s certainly room for improvement, but the library works already quite well. @mandubian Sweet, let me know what you think @lzamparo Thanks Lee! @lzamparo It is very gratifying, all the hard work is starting to pay off",https://arxiv.org/abs/2010.11119,"Recent neural text-to-SQL models can effectively translate natural language questions to corresponding SQL queries on unseen databases. Working mostly on the Spider dataset, researchers have proposed increasingly sophisticated solutions to the problem. Contrary to this trend, in this paper we focus on simplifications. We begin by building DuoRAT, a re-implementation of the state-of-the-art RAT-SQL model that unlike RAT-SQL is using only relation-aware or vanilla transformers as the building blocks. We perform several ablation experiments using DuoRAT as the baseline model. Our experiments confirm the usefulness of some techniques and point out the redundancy of others, including structural SQL features and features that link the question with the schema. ",DuoRAT: Towards Simpler Text-to-SQL Models,8,"['Our new text-to-SQL paper is now on the ArXiv!\n\n\nThe complexity of current systems that translate natural language to SQL queries is very high.\n\nRead up on how our fully transformer-based model enabled us to make things much simpler!', 'I’m deeply indebted to my coworkers at @element_ai, namely Raymond Li, @DBahdanau, Harm de Vries, and @chrisjpal. It was truly a team effort!\n\nWe published the code, too: https://t.co/wPul9GeLnL.', '@ipvkyte I won’t deny that I have thought about doing that
', '@ipvkyte I very much agree, I’m working on a port of @PyTorch to Haskell also because I think the language is a great fit for neural program synthesis.', '@ipvkyte @PyTorch Thanks! Indeed, we’ve been struggling with the GC a bit, and there’s certainly room for improvement, but the library works already quite well.', '@mandubian Sweet, let me know what you think', '@lzamparo Thanks Lee!', '@lzamparo It is very gratifying, all the hard work is starting to pay off']",20,10,934
389,1,758929810945040386,1579939676,Michael Childress,Out today: our new paper releasing >350 supernova spectra taken with the ANU WiFeS spectrograph: @AstroChildress this paper marks my first astro acronym: The Anu Wifes SuperNovA Program -- AWSNAP (I was very pleased with myself for this) @AstroChildress and the first time Kiwi @AstroSmurph and his British doppleganger appear on the same paper (double the Simon J. Murphy!) @AstroChildress more importantly the paper reflects my philosophy that science is maximised when data is put into the public domain :),http://arxiv.org/abs/1607.08526,"This paper presents the first major data release and survey description for the ANU WiFeS SuperNovA Program (AWSNAP). AWSNAP is an ongoing supernova spectroscopy campaign utilising the Wide Field Spectrograph (WiFeS) on the Australian National University (ANU) 2.3m telescope. The first and primary data release of this program (AWSNAP-DR1) releases 357 spectra of 175 unique objects collected over 82 equivalent full nights of observing from July 2012 to August 2015. These spectra have been made publicly available via the WISeREP supernova spectroscopy repository. We analyse the AWSNAP sample of Type Ia supernova spectra, including measurements of narrow sodium absorption features afforded by the high spectral resolution of the WiFeS instrument. In some cases we were able to use the integral-field nature of the WiFeS instrument to measure the rotation velocity of the SN host galaxy near the SN location in order to obtain precision sodium absorption velocities. We also present an extensive time series of SN 2012dn, including a near-nebular spectrum which both confirms its ""super-Chandrasekhar"" status and enables measurement of the sub-solar host metallicity at the SN site. ",The ANU WiFeS SuperNovA Program (AWSNAP),4,"['Out today: our new paper releasing >350 supernova spectra taken with the ANU WiFeS spectrograph:\n', '@AstroChildress this paper marks my first astro acronym: The Anu Wifes SuperNovA Program -- AWSNAP (I was very pleased with myself for this)', '@AstroChildress and the first time Kiwi @AstroSmurph and his British doppleganger appear on the same paper (double the Simon J. Murphy!)', '@AstroChildress more importantly the paper reflects my philosophy that science is maximised when data is put into the public domain :)']",16,07,519
390,30,1386387633752977411,963873866392186882,Philipp Schindler,"New paper on heating effects in #iontraps @uniinnsbruck We had a detailed look into how an ion crystal can melt from a collision with background gas, and how to re-crystallize. We found a surprisingly simple model that is validated by the experiment. ",https://arxiv.org/abs/2104.10623,"We investigate the energy dynamics of non-crystallized (melted) ions, confined in a Paul trap. The non-periodic Coulomb interaction experienced by melted ions forms a medium for non-conservative energy transfer from the radio-frequency (rf) field to the ions, a process known as rf heating. We study rf heating by analyzing numerical simulations of non-crystallized ion motion in Paul trap potentials, in which the energy of the ions' secular motion changes at discrete intervals, corresponding to ion-ion collisions. The analysis of these collisions is used as a basis to derive a simplified model of rf heating energy dynamics, from which we conclude that the rf heating rate is predominantly dependent on the rf field strength. We confirm the predictability of the model experimentally: Two trapped $^{40}$Ca$^{+}$ ions are deterministically driven to melt, and their fluorescence rate is used to infer the ions' energy. From simulation and experimental results, we generalize which experimental parameters are required for efficient recrystallization of melted trapped ions. ",RF-induced heating dynamics of non-crystallized trapped ions,1,"['New paper on heating effects in #iontraps @uniinnsbruck We had a detailed look into how an ion crystal can melt from a collision with background gas, and how to re-crystallize. We found a surprisingly simple model that is validated by the experiment. ']",21,04,264
391,40,1321516150052790272,16252640,Matthew FL,New paper on arXiv about implementing logic programs with aggregation using term rewriting and a relational algebra paper: code/video recording: with @xtimv @adveisner #Dyna An earlier version of this paper appeared at #WRLA2020 ,https://arxiv.org/abs/2010.10503,"We present a scheme for translating logic programs, which may use aggregation and arithmetic, into algebraic expressions that denote bag relations over ground terms of the Herbrand universe. To evaluate queries against these relations, we develop an operational semantics based on term rewriting of the algebraic expressions. This approach can exploit arithmetic identities and recovers a range of useful strategies, including lazy strategies that defer work until it becomes possible or necessary. ","Evaluation of Logic Programs with Built-Ins and Aggregation: A Calculus
for Bag Relations",2,"['New paper on arXiv about implementing logic programs with aggregation using term rewriting and a relational algebra\npaper: \ncode/video recording: \nwith @xtimv @adveisner #Dyna ', 'An earlier version of this paper appeared at #WRLA2020 https://t.co/Dt3vgFxumb']",20,10,256
392,83,1214431462323130369,801743,Neil Ernst,"New paper for #saner20 from my students and I : cross-dataset design discussion mining. Preprint: Replication: We look at how well classifiers do across project datasets (ok, not great) when labeling design. Big congrats to my students Alvi and Karan who worked very hard on this one. Thanks to the ICSME and SANER reviewers who had some great comments that improved the paper a lot. And finally thanks to @joaobrunet, @gnviviani and others who made it easy for us to build on their work. Have a citation! @joaobrunet @gnviviani About 2 tweets earlier :)",https://arxiv.org/abs/2001.01424,"Being able to identify software discussions that are primarily about design, which we call design mining, can improve documentation and maintenance of software systems. Existing design mining approaches have good classification performance using natural language processing (NLP) techniques, but the conclusion stability of these approaches is generally poor. A classifier trained on a given dataset of software projects has so far not worked well on different artifacts or different datasets. In this study, we replicate and synthesize these earlier results in a meta-analysis. We then apply recent work in transfer learning for NLP to the problem of design mining. However, for our datasets, these deep transfer learning classifiers perform no better than less complex classifiers. We conclude by discussing some reasons behind the transfer learning approach to design mining. ",Cross-Dataset Design Discussion Mining,4,"['New paper for #saner20 from my students and I : cross-dataset design discussion mining. Preprint: Replication: We look at how well classifiers do across project datasets (ok, not great) when labeling design.', 'Big congrats to my students Alvi and Karan who worked very hard on this one. Thanks to the ICSME and SANER reviewers who had some great comments that improved the paper a lot.', 'And finally thanks to @joaobrunet, @gnviviani and others who made it easy for us to build on their work. Have a citation!', '@joaobrunet @gnviviani About 2 tweets earlier :)']",20,01,573
393,71,1006699758755426304,2935607927,Peter Hull,"New(ish) working paper with Kirill Borusyak and Xavier Jaravel: ""Quasi-experimental Shift-share Research Designs"" Comments very welcome! A brief summary thread follows This paper updates a May 2017 note on how to think about identification in ""shift-share"" or ""Bartik"" IVs as coming from quasi-random variation in the aggregate shocks Although this seems to often be the intuition behind shift-share IV, we haven't yet seen it formalized We first show that even though Bartik IV is estimated in the ""location"" space, its validity condition can be expressed in the ""industry"" space (i.e. the space of shocks g). For shift-share IV validity, the covariance of g and relevant unobservables should be close to zero. We then derive sufficient shock-level assumptions for this condition to hold. We think of shocks as being as-good-as-randomly assigned, with the impact of each shock shrinking as the sample grows We also consider extensions with conditionally random shocks and panel variation Lastly we caution that estimating aggregate shocks in the IV sample can make shift-share IV inconsistent, even when the shocks are random This probably is analogous to the classic inconsistency of many-instrument 2SLS and can similarly be solved by split-sample shock estimation We hope this helps clarify how one may think of identification in shift-share IVs as ""coming from"" a natural shock experiment, and conclude with some practical advice for such settings Of course this is work in progress, so its helpfulness is up to you! Please send your thoughts",https://arxiv.org/abs/1806.01221,"Many studies use shift-share (or ``Bartik'') instruments, which average a set of shocks with exposure share weights. We provide a new econometric framework for shift-share instrumental variable (SSIV) regressions in which identification follows from the quasi-random assignment of shocks, while exposure shares are allowed to be endogenous. The framework is motivated by an equivalence result: the orthogonality between a shift-share instrument and an unobserved residual can be represented as the orthogonality between the underlying shocks and a shock-level unobservable. SSIV regression coefficients can similarly be obtained from an equivalent shock-level regression, motivating shock-level conditions for their consistency. We discuss and illustrate several practical insights of this framework in the setting of Autor et al. (2013), estimating the effect of Chinese import competition on manufacturing employment across U.S. commuting zones. ",Quasi-Experimental Shift-Share Research Designs,6,"['New(ish) working paper with Kirill Borusyak and Xavier Jaravel: ""Quasi-experimental Shift-share Research Designs"" \n\n\n\nComments very welcome! A brief summary thread follows ', 'This paper updates a May 2017 note on how to think about identification in ""shift-share"" or ""Bartik"" IVs as coming from quasi-random variation in the aggregate shocks \n\nAlthough this seems to often be the intuition behind shift-share IV, we haven\'t yet seen it formalized https://t.co/FNnPy4yT9T', 'We first show that even though Bartik IV is estimated in the ""location"" space, its validity condition can be expressed in the ""industry"" space (i.e. the space of shocks g). \n\nFor shift-share IV validity, the covariance of g and relevant unobservables should be close to zero. https://t.co/UOxCdZ16A0', 'We then derive sufficient shock-level assumptions for this condition to hold. 
We think of shocks as being as-good-as-randomly assigned, with the impact of each shock shrinking as the sample grows\n\nWe also consider extensions with conditionally random shocks and panel variation https://t.co/KnSIVVUByQ', 'Lastly we caution that estimating aggregate shocks in the IV sample can make shift-share IV inconsistent, even when the shocks are random \n\nThis probably is analogous to the classic inconsistency of many-instrument 2SLS and can similarly be solved by split-sample shock estimation https://t.co/ZUFgWRh3cA', 'We hope this helps clarify how one may think of identification in shift-share IVs as ""coming from"" a natural shock experiment, and conclude with some practical advice for such settings\n\nOf course this is work in progress, so its helpfulness is up to you! Please send your thoughts']",18,06,1593
394,186,1361414327031369729,1155072721325383680,Adolfo Carvalho,"Hi all! Super excited to emerge from the void to share my first first-author paper, which I've been working on for over 3 years now. We used 14 years of RV measurements and one set of Hubble Space Telescope images to study the Hubble 4 young star system @IveyEDavis Thanks Ivey!",https://arxiv.org/abs/2102.06257,"We studied the weak-lined T Tauri star Hubble 4, a known long-period binary, and its starspot phenomena. We used optical radial velocity (RV) data taken over a span of 14 years (2004-2010, 2017-2019) at the McDonald Observatory 2.7m Harlan J. Smith telescope and single epoch imaging from the HST/WFC3 instrument. The observed and apparent RV variations show contributions, respectively, from the binary motion as well as from a large spot group on one of the stars, presumed to be the primary. Fitting and removing the orbital signal from the RVs, we found the lower bound on the lifetime of a previously identified large spot group on the surface of the star to be at least 5.1 years. A $\sim5$ year lower limit is a long, but not unprecedented, duration for a single spot group. The later epoch data indicate significant spot evolution has occurred, placing an upper bound on the spot group lifetime at 12 years. We find that pre-main sequence evolutionary models for the age of Taurus ($\sim2$ Myr), combined with component mass estimates from the literature, permit us to reproduce the HST relative photometry and the binary-induced contribution to the apparent RV variations. The long-lived star spot we find on Hubble 4 has significant implications for dynamo models in young stars, as it adds evidence for long lifetimes of magnetic field topologies. There are also significant implications for young star exoplanet searches as long-lived coherent RV signals may be spot-induced and not the result of planetary motion. (This paper includes data taken at The McDonald Observatory of The University of Texas at Austin.) ","Radial Velocity Monitoring of the Young Star Hubble 4: Disentangling
Starspot Lifetimes from Orbital Motion",2,"[""Hi all! Super excited to emerge from the void to share my first first-author paper, which I've been working on for over 3 years now. \n\nWe used 14 years of RV measurements and one set of Hubble Space Telescope images to study the Hubble 4 young star system\n\n"", '@IveyEDavis Thanks Ivey!']",21,02,286
395,68,1295304450136051713,1190175298106675200,Jonas Latz,"Error analysis for probabilities of rare events with approximate models - a new paper in the arXiv () by Fabian Wagner, Iason Papaioannou, Elisabeth Ullmann, and myself. A #thread. #research #Mathematics #numerics (1/7) An important task in, e.g. #structuralengineering and #environmentalengineering, is the estimation of the probability of a system failure, e.g., the probability of groundwater pollution in case a radioactive waste repository is damaged. We call this probability P_f. (2/7) P_f will usually be in the range [1E-9, 1E-6]. Moreover, as in the example mentioned above, P_f often depends on mathematical models, like a #PDE or #ODE. When estimating P_f, e.g. using a sampling or optimisation method, this model needs to be approximated as well. (3/7) In this paper, we are not interested in the accuracy of a sampling method, but in how accurate we need to approximate the underlying mathematical model (ODE or PDE) to get a reasonable approximation of P_f. We denote the probability of failure with approximate model by P_h. (4/7) Past results indicate that the error |P_f - P_h| scales like the approximation error in the mathematical model (Elfverson et al. 2016; ). This implies that (up to an unknown constant) the model needs to be approximated super accurately. (5/7) In our paper, we show that actually |P_f - P_h|/P_hFORM behaves like the model approximation error. Here, P_hFORM is the approximation of P_h using the First Order Reliability Method (FORM), an optimisation based approach towards the estimation of rare events. (6/7) This shows that the model only needs to be approximated moderately precisely. Moreover, it gives an immediate algorithmic procedure to estimate the error in P_f. The paper contains a rigorous analysis alongside with numerical experiments. (7/7)",https://arxiv.org/abs/2008.06368,"The estimation of the probability of rare events is an important task in reliability and risk assessment. We consider failure events that are expressed in terms of a limit-state function, which depends on the solution of a partial differential equation (PDE). In many applications, the PDE cannot be solved analytically. We can only evaluate an approximation of the exact PDE solution. Therefore, the probability of rare events is estimated with respect to an approximation of the limit-state function. This leads to an approximation error in the estimate of the probability of rare events. Indeed, we prove an error bound for the approximation error of the probability of failure, which behaves like the discretization accuracy of the PDE multiplied by an approximation of the probability of failure, the first order reliability method (FORM) estimate. This bound requires convexity of the failure domain. For non-convex failure domains, we prove an error bound for the relative error of the FORM estimate. Hence, we derive a relationship between the required accuracy of the probability of rare events estimate and the PDE discretization level. This relationship can be used to guide practicable reliability analyses and, for instance, multilevel methods. ",Error analysis for probabilities of rare events with approximate models,7,"['Error analysis for probabilities of rare events with approximate models - a new paper in the arXiv () by Fabian Wagner, Iason Papaioannou, Elisabeth Ullmann, and myself. A #thread. #research #Mathematics #numerics (1/7)', 'An important task in, e.g. 
#structuralengineering and #environmentalengineering, is the estimation of the probability of a system failure, e.g., the probability of groundwater pollution in case a radioactive waste repository is damaged. We call this probability P_f. (2/7)', 'P_f will usually be in the range [1E-9, 1E-6]. Moreover, as in the example mentioned above, P_f often depends on mathematical models, like a #PDE or #ODE. When estimating P_f, e.g. using a sampling or optimisation method, this model needs to be approximated as well. (3/7)', 'In this paper, we are not interested in the accuracy of a sampling method, but in how accurate we need to approximate the underlying mathematical model (ODE or PDE) to get a reasonable approximation of P_f. We denote the probability of failure with approximate model by P_h. (4/7)', 'Past results indicate that the error |P_f - P_h| scales like the approximation error in the mathematical model (Elfverson et al. 2016; https://t.co/z8koIHlEOS). This implies that (up to an unknown constant) the model needs to be approximated super accurately. (5/7)', 'In our paper, we show that actually |P_f - P_h|/P_hFORM behaves like the model approximation error. Here, P_hFORM is the approximation of P_h using the First Order Reliability Method (FORM), an optimisation based approach towards the estimation of rare events. (6/7)', 'This shows that the model only needs to be approximated moderately precisely. Moreover, it gives an immediate algorithmic procedure to estimate the error in P_f. The paper contains a rigorous analysis alongside with numerical experiments. (7/7)']",20,08,1813
396,129,1248502690126020611,1020088099,Umberto Picchini,"new paper: ""Adaptive MCMC for synthetic likelihoods and correlated synthetic likelihoods"", with Umberto Simola and Jukka Corander Proposes a sequentially updated MCMC kernel, partly inspired by the SNL method of @gpapamak @DavidSterratt and @driainmurray ",https://arxiv.org/abs/2004.04558,"Synthetic likelihood (SL) is a strategy for parameter inference when the likelihood function is analytically or computationally intractable. In SL, the likelihood function of the data is replaced by a multivariate Gaussian density over summary statistics of the data. SL requires simulation of many replicate datasets at every parameter value considered by a sampling algorithm, such as Markov chain Monte Carlo (MCMC), making the method computationally-intensive. We propose two strategies to alleviate the computational burden. First, we introduce an algorithm producing a proposal distribution that is sequentially tuned and made conditional to data, thus it rapidly \textit{guides} the proposed parameters towards high posterior density regions. In our experiments, a small number of iterations of our algorithm is enough to rapidly locate high density regions, which we use to initialize one or several chains that make use of off-the-shelf adaptive MCMC methods. Our ""guided"" approach can also be potentially used with MCMC samplers for approximate Bayesian computation (ABC). Second, we exploit strategies borrowed from the correlated pseudo-marginal MCMC literature, to improve the chains mixing in a SL framework. Moreover, our methods enable inference for challenging case studies, when the posterior is multimodal and when the chain is initialised in low posterior probability regions of the parameter space, where standard samplers failed. To illustrate the advantages stemming from our framework we consider five benchmark examples, including estimation of parameters for a cosmological model and a stochastic model with highly non-Gaussian summary statistics. ","Sequentially guided MCMC proposals for synthetic likelihoods and
correlated synthetic likelihoods",1,"['new paper: ""Adaptive MCMC for synthetic likelihoods and correlated synthetic likelihoods"", with Umberto Simola and Jukka Corander Proposes a sequentially updated MCMC kernel, partly inspired by the SNL method of @gpapamak @DavidSterratt and @driainmurray ']",20,04,268
397,287,1321399158037749760,718084071,Denis Erkal,"Paper day! We find evidence that the inner Milky Way is sloshing about with respect to the outer stellar halo (>50 kpc) due to the LMC. We consider 492 stars in the outer stellar halo with measured radial velocities and see a clear dipole on the sky. 1/n The stars in the North are redshifted on average and the stars in the South are blueshifted on average, consistent with us moving downwards at ~30 km/s with respect to the outer stellar halo. This matches the predicted effect of the LMC. 2/n The basic picture is that as the LMC falls in, it pulls on the Milky Way. Stars in the inner ~30 kpc have a short enough orbital period that they respond adiabatically while stars beyond this roughly stand still. As a result, the inner MW moves with respect to the outer halo. 3/n The measured radial velocity signal is consistent with a broad range of LMC masses, including those we measured with tidal streams around the Milky Way. 4/n We also took a gander at the proper motions of these stars with Gaia DR2. The measurements are broadly consistent although since the predicted effect is so small, it's difficult to tell. Hopefully with Gaia EDR3 we can measure this more precisely. 5/n This means that the outskirts of our Galaxy are really out of equilibrium which must be accounted for when modelling any tracers out there. In addition, it means that the inner Milky Way is not an inertial reference frame but instead has been substantially accelerated. 6/n See for more details on the mechanism of the LMC's effect @iHinkthere4iam Good question. We didn't try to fit this direction since the predicted effect on the radial velocity isn't just a dipole. Given that we see the signal over a lot of the sky, I'd guess the direction could be measured with an uncertainty of ~10s of degrees even with current data. @iHinkthere4iam A very precise measure of the orientation could perhaps tell us something about the dark matter around the LMC (which would include SMC material) but I'd guess we would need a much bigger sample to learn about such subtle features.",https://arxiv.org/abs/2010.13789,"A wealth of recent studies have shown that the LMC is likely massive, with a halo mass $>10^{11} M_\odot$. One consequence of having such a nearby and massive neighbour is that the inner Milky Way is expected to be accelerated with respect to our Galaxy's outskirts (beyond $\sim 30$ kpc). In this work we compile a sample of $\sim 500$ stars with radial velocities in the distant stellar halo, $r_{\rm GC}> 50$ kpc, to test this hypothesis. These stars span a large fraction of the sky and thus give a global view of the stellar halo. We find that stars in the Southern hemisphere are on average blueshifted, while stars in the North are redshifted, consistent with the expected, mostly downwards acceleration of the inner halo due to the LMC. We compare these results with simulations and find the signal is consistent with the infall of a $1.5\times10^{11} M_\odot$ LMC. We cross-match our stellar sample with \textit{Gaia} DR2 and find that the mean proper motions are not yet precise enough to discern the LMC's effect. Our results show that the outer Milky Way is significantly out of equilibrium and that the LMC has a substantial effect on our Galaxy. ",Detection of the LMC-induced sloshing of the Galactic halo,9,"['Paper day! 
We find evidence that the inner Milky Way is sloshing about with respect to the outer stellar halo (>50 kpc) due to the LMC.\n\nWe consider 492 stars in the outer stellar halo with measured radial velocities and see a clear dipole on the sky. 1/n\n\n ', 'The stars in the North are redshifted on average and the stars in the South are blueshifted on average, consistent with us moving downwards at ~30 km/s with respect to the outer stellar halo. This matches the predicted effect of the LMC. 2/n https://t.co/fcBm7prClO', 'The basic picture is that as the LMC falls in, it pulls on the Milky Way. Stars in the inner ~30 kpc have a short enough orbital period that they respond adiabatically while stars beyond this roughly stand still. As a result, the inner MW moves with respect to the outer halo. 3/n', 'The measured radial velocity signal is consistent with a broad range of LMC masses, including those we measured with tidal streams around the Milky Way. 4/n https://t.co/oZuaJxhu6D', ""We also took a gander at the proper motions of these stars with Gaia DR2. The measurements are broadly consistent although since the predicted effect is so small, it's difficult to tell. Hopefully with Gaia EDR3 we can measure this more precisely. 5/n https://t.co/m2IAJ8JU4K"", 'This means that the outskirts of our Galaxy are really out of equilibrium which must be accounted for when modelling any tracers out there. In addition, it means that the inner Milky Way is not an inertial reference frame but instead has been substantially accelerated. 6/n', ""See https://t.co/OeNz3cub98 https://t.co/VA1DcSYsJc\nhttps://t.co/SMjZbnuemy\nfor more details on the mechanism of the LMC's effect"", ""@iHinkthere4iam Good question. We didn't try to fit this direction since the predicted effect on the radial velocity isn't just a dipole. Given that we see the signal over a lot of the sky, I'd guess the direction could be measured with an uncertainty of ~10s of degrees even with current data."", ""@iHinkthere4iam A very precise measure of the orientation could perhaps tell us something about the dark matter around the LMC (which would include SMC material) but I'd guess we would need a much bigger sample to learn about such subtle features.""]",20,10,2120
398,13,1355898548169175041,2279164099,Bruno Lepri,"New paper accepted @FAccTConference. In this work, we propose a method of data annotation based on Bayesian statistical inference that aims to warn about the risk of discriminatory results of a given data set. Link: Work done with @ElenaBeretta4, @phisaz, and @demartin",https://arxiv.org/abs/2101.11358,"Thanks to the increasing growth of computational power and data availability, the research in machine learning has advanced with tremendous rapidity. Nowadays, the majority of automatic decision making systems are based on data. However, it is well known that machine learning systems can present problematic results if they are built on partial or incomplete data. In fact, in recent years several studies have found a convergence of issues related to the ethics and transparency of these systems in the process of data collection and how they are recorded. Although the process of rigorous data collection and analysis is fundamental in the model design, this step is still largely overlooked by the machine learning community. For this reason, we propose a method of data annotation based on Bayesian statistical inference that aims to warn about the risk of discriminatory results of a given data set. In particular, our method aims to deepen knowledge and promote awareness about the sampling practices employed to create the training set, highlighting that the probability of success or failure conditioned to a minority membership is given by the structure of the data available. We empirically test our system on three datasets commonly accessed by the machine learning community and we investigate the risk of racial discrimination. ","Detecting discriminatory risk through data annotation based on Bayesian
inferences",2,"['New paper accepted @FAccTConference. In this work, we propose a method of data annotation based on Bayesian statistical inference that aims to warn about the risk of discriminatory results of a given data set. Link: ', 'Work done with @ElenaBeretta4, @phisaz, and @demartin']",21,01,276
399,21,1299222634098376704,140287694,Anowar J Shajib,"Our new paper is on arXiv today: . By analyzing 23 elliptical lens galaxies from SLACS, we find that the dark matter distribution is close to the NFW profile on average at z~0.2 without any contraction/expansion. A summary with some figures in the thread. We perform state-of-the-art lens modeling for these 23 lens galaxies. This figure shows 5 of them. We combine the strong lensing constraints with the stellar kinematics and weak lensing measurements to individually constrain the stellar and dark matter distributions. Our model allows for adiabatic contraction in the dark matter and a M/L gradient in the stellar distribution. We find that the NFW+stars profile deviate upwards by ~5% on average from the power-law model near the Einstein radius. On average, there is no significant contraction/expansion in the dark matter halos with M_200 ~ 10^13.1 M_sun at z~0.2. Furthermore, almost no gradient in the stellar M/L is favored around the effective or half-light radius. Comparing our inferred stellar masses with those from SPS-based measurements supports a heavy IMF like the Salpeter IMF in these elliptical galaxies. All of our results are consistent with a scenario where halos of massive elliptical galaxies first contract up to z~2 due to baryonic cooling, but then the halos primarily grow through dissipationless mergers while AGN feedback counteracts the initial contraction.",https://arxiv.org/abs/2008.11724,"We investigate the internal structure of elliptical galaxies at $z\sim 0.2$ from a joint lensing-dynamics analysis. We model Hubble Space Telescope images of a sample of 23 galaxy-galaxy lenses selected from the Sloan Lens ACS (SLACS) survey. Whereas the original SLACS analysis estimated the logarithmic slopes by combining the kinematics with the imaging data, we estimate the logarithmic slopes only from the imaging data. We find that the distribution of the lensing-only logarithmic slopes has a median $2.08\pm0.03$ and intrinsic scatter $0.13 \pm 0.02$, consistent with the original SLACS analysis. We combine the lensing constraints with the stellar kinematics and weak lensing measurements, and constrain the amount of adiabatic contraction in the dark matter (DM) halos. We find that the DM halos are well described by a standard Navarro-Frenk-White halo with no contraction on average for both of a constant stellar mass-to-light ratio ($M/L$) model and a stellar $M/L$ gradient model. For the $M/L$ gradient model, we find that most galaxies are consistent with no $M/L$ gradient. Comparison of our inferred stellar masses with those obtained from the stellar population synthesis method supports a heavy initial mass function (IMF) such as the Salpeter IMF. We discuss our results in the context of previous observations and simulations, and argue that our result is consistent with a scenario in which active galactic nucleus feedback counteracts the baryonic-cooling-driven contraction in the DM halos. ","Dark matter halos of massive elliptical galaxies at $z \sim 0.2$ are
well described by the Navarro-Frenk-White profile",7,"['Our new paper is on arXiv today: .\n\nBy analyzing 23 elliptical lens galaxies from SLACS, we find that the dark matter distribution is close to the NFW profile on average at z~0.2 without any contraction/expansion.\n\nA summary with some figures in the thread.', 'We perform state-of-the-art lens modeling for these 23 lens galaxies. This figure shows 5 of them. https://t.co/tP9NrGyH5S', 'We combine the strong lensing constraints with the stellar kinematics and weak lensing measurements to individually constrain the stellar and dark matter distributions. Our model allows for adiabatic contraction in the dark matter and a M/L gradient in the stellar distribution. https://t.co/XhE2OHahsz', 'We find that the NFW+stars profile deviate upwards by ~5% on average from the power-law model near the Einstein radius. https://t.co/IAILEn1rL4', 'On average, there is no significant contraction/expansion in the dark matter halos with M_200 ~ 10^13.1 M_sun at z~0.2. Furthermore, almost no gradient in the stellar M/L is favored around the effective or half-light radius. https://t.co/kk96jRfzxy', 'Comparing our inferred stellar masses with those from SPS-based measurements supports a heavy IMF like the Salpeter IMF in these elliptical galaxies. https://t.co/IglQXCUWcD', 'All of our results are consistent with a scenario where halos of massive elliptical galaxies first contract up to z~2 due to baryonic cooling, but then the halos primarily grow through dissipationless mergers while AGN feedback counteracts the initial contraction.']",20,08,1434
400,199,1516686749170290690,1455258712961167361,Belén Alastruey,Happy to share “On the Locality of Attention in Direct Speech Translation” accepted at the ACL-SRW! We use interpretability techniques to propose an efficient architecture that matches the baseline performance while reducing computational cost. ,https://arxiv.org/abs/2204.09028,"Transformers have achieved state-of-the-art results across multiple NLP tasks. However, the self-attention mechanism complexity scales quadratically with the sequence length, creating an obstacle for tasks involving long sequences, like in the speech domain. In this paper, we discuss the usefulness of self-attention for Direct Speech Translation. First, we analyze the layer-wise token contributions in the self-attention of the encoder, unveiling local diagonal patterns. To prove that some attention weights are avoidable, we propose to substitute the standard self-attention with a local efficient one, setting the amount of context used based on the results of the analysis. With this approach, our model matches the baseline performance, and improves the efficiency by skipping the computation of those weights that standard attention discards. ",On the Locality of Attention in Direct Speech Translation,1,['Happy to share “On the Locality of Attention in Direct Speech Translation” accepted at the ACL-SRW!\n\nWe use interpretability techniques to\npropose an efficient architecture that\nmatches the baseline performance while reducing computational cost.\n\n '],22,04,260
401,98,1457669678596300803,958374804,Jonathan Gorard," New paper (with Xerxes) on the homotopic foundations of @wolframphysics , addressing the fundamental question of *how* and *why* geometrical structures in physics (such as spacetime, Hilbert space, etc.) emerge from discrete ""pregeometric"" data! (1/5) The basic idea is that, starting from a multiway system, which is really a monoidal 1-category, one can perform “completions” by introducing higher cells (and thus higher homotopies) until one obtains the full ""rulial"" multiway system, which is really an infinity-groupoid. (2/5) The “rulial” multiway system can thus be interpreted as a homotopy type via Grothendieck's hypothesis, and the “multiverse” of all such “rulial” multiway systems thus carries the structure of a cohesive (infinity, 1)-topos, with cohesivity being preserved under fibration. (3/5) Treating “rulial space” as a fibration, one can then obtain the usual discrete structures (hypergraphs, causal graphs, multiway systems, etc.) as global sections of rulial space, which inherit spatial/geometrical structure functorially from the aforementioned homotopy type. (4/5) It's perhaps one of the most philosophically interesting papers I've ever (co-)written, since it begins to get at the foundational question of *why* the laws of physics have the structure that they do. With potential applications to TQFT, (higher) gauge field theory, etc. (5/5) @mattecapu @wolframphysics Cheers Matteo! Since ultimately we were concerned with obtaining homotopy n-types, it made sense to work with n-fold groupoids (since in that case it doesn’t matter). We could always collapse the faces in the cells to obtain the corresponding n-categories if we wanted to…",https://arxiv.org/abs/2111.03460,"How do spaces emerge from pregeometric discrete building blocks governed by computational rules? To address this, we investigate non-deterministic rewriting systems (multiway systems) of the Wolfram model. We express these rewriting systems as homotopy types. Using this new formulation, we outline how spatial structures can be functorially inherited from pregeometric type-theoretic constructions. We show how higher homotopy types are constructed from rewriting rules. These correspond to morphisms of an $n$-fold category. Subsequently, the $n \to \infty$ limit of the Wolfram model rulial multiway system is identified as an $\infty$-groupoid, with the latter being relevant given Grothendieck's homotopy hypothesis. We then go on to show how this construction extends to the classifying space of rulial multiway systems, which forms a multiverse of multiway systems and carries the formal structure of an ${\left(\infty, 1\right)}$-topos. This correspondence to higher categorical structures offers a new way to understand how spaces relevant to physics may arise from pregeometric combinatorial models. A key issue we have addressed here is to relate abstract non-deterministic rewriting systems to higher homotopy spaces. A consequence of constructing spaces and geometry synthetically is that it eliminates ad hoc assumptions about geometric attributes of a model such as an a priori background or pre-assigned geometric data. Instead, geometry is inherited functorially by higher structures. This is relevant for formally justifying different choices of underlying spacetime discretization adopted by models of quantum gravity.
We conclude with comments on how our framework of higher category-theoretic combinatorial constructions, corroborates with other approaches investigating higher categorical structures relevant to the foundations of physics. ","Pregeometric Spaces from Wolfram Model Rewriting Systems as Homotopy
Types",6,"['\nNew paper (with Xerxes) on the homotopic foundations of @wolframphysics , addressing the fundamental question of *how* and *why* geometrical structures in physics (such as spacetime, Hilbert space, etc.) emerge from discrete ""pregeometric"" data! (1/5) ', 'The basic idea is that, starting from a multiway system, which is really a monoidal 1-category, one can perform “completions” by introducing higher cells (and thus higher homotopies) until one obtains the full ""rulial"" multiway system, which is really an infinity-groupoid. (2/5)', 'The “rulial” multiway system can thus be interpreted as a homotopy type via Grothendieck's hypothesis, and the “multiverse” of all such “rulial” multiway systems thus carries the structure of a cohesive (infinity, 1)-topos, with cohesivity being preserved under fibration. (3/5)', 'Treating “rulial space” as a fibration, one can then obtain the usual discrete structures (hypergraphs, causal graphs, multiway systems, etc.) as global sections of rulial space, which inherit spatial/geometrical structure functorially from the aforementioned homotopy type. (4/5)', ""It's perhaps one of the most philosophically interesting papers I've ever (co-)written, since it begins to get at the foundational question of *why* the laws of physics have the structure that they do. With potential applications to TQFT, (higher) gauge field theory, etc. (5/5)"", '@mattecapu @wolframphysics Cheers Matteo! Since ultimately we were concerned with obtaining homotopy n-types, it made sense to work with n-fold groupoids (since in that case it doesn’t matter). We could always collapse the faces in the cells to obtain the corresponding n-categories if we wanted to…']",21,11,1684
402,109,1250443231965450240,177416255,Daniel Litt,"New paper (joint with Dean Bisogno, Wanlin Li, and Padma Srinivasan): ! This came out of a recent @amermathsoc MRC program. The paper is a pretty fun mix (IMO) of ideas from low-dimensional topology and arithmetic, and it ends with a bit of a mystery. Namely, we produce (what we think is) the first known example of a non-hyperelliptic curve whose Ceresa cycle vanishes under the l-adic Abel Jacobi map. It's the Hurwitz curve of genus 7, also known as the Fricke-Macbeath curve; a picture of it, drawn by Klein, is below. It's not too hard to see that the Ceresa cycle also has torsion image under the Hodge-theoretic Abel-Jacobi map. So here's the mystery -- is this cycle actually algebraically or rationally equivalent to zero? This seems pretty hard! Let me know if you have any thoughts, or would like more details on how to make the question precise.",https://arxiv.org/abs/2004.06146,"Let l be a prime and G a pro-l group with torsion-free abelianization. We produce group-theoretic analogues of the Johnson/Morita cocycle for G -- in the case of surface groups, these cocycles appear to refine existing constructions when l=2. We apply this to the pro-l etale fundamental groups of smooth curves to obtain Galois-cohomological analogues, and discuss their relationship to work of Hain and Matsumoto in the case the curve is proper. We analyze many of the fundamental properties of these classes and use them to give an example of a non-hyperelliptic curve whose Ceresa class has torsion image under the l-adic Abel-Jacobi map. ","Group-theoretic Johnson classes and a non-hyperelliptic curve with
torsion Ceresa class",4,"['New paper (joint with Dean Bisogno, Wanlin Li, and Padma Srinivasan): ! This came out of a recent @amermathsoc MRC program. ', 'The paper is a pretty fun mix (IMO) of ideas from low-dimensional topology and arithmetic, and it ends with a bit of a mystery. Namely, we produce (what we think is) the first known example of a non-hyperelliptic curve whose Ceresa cycle vanishes under the l-adic Abel Jacobi map.', ""It's the Hurwitz curve of genus 7, also known as the Fricke-Macbeath curve; a picture of it, drawn by Klein, is below. It's not too hard to see that the Ceresa cycle also has torsion image under the Hodge-theoretic Abel-Jacobi map. https://t.co/TcSSr0lvQ9"", ""So here's the mystery -- is this cycle actually algebraically or rationally equivalent to zero? This seems pretty hard! Let me know if you have any thoughts, or would like more details on how to make the question precise.""]",20,04,878
403,216,1513450976883232773,699180629246672897,Subhrajit Roy,"Can we move away from in-clinic towards at-home measurements for multiple sclerosis predictions? Our work investigates multiple different endpoints on 2 datasets - one from a clinical study, one from mobile devices. #CHIL2022 @d_mincu @negar_rz @kat_heller @JessicaSchrouff @weballergy.",https://arxiv.org/abs/2204.03969,"Literature on machine learning for multiple sclerosis has primarily focused on the use of neuroimaging data such as magnetic resonance imaging and clinical laboratory tests for disease identification. However, studies have shown that these modalities are not consistent with disease activity such as symptoms or disease progression. Furthermore, the cost of collecting data from these modalities is high, leading to scarce evaluations. In this work, we used multi-dimensional, affordable, physical and smartphone-based performance outcome measures (POM) in conjunction with demographic data to predict multiple sclerosis disease progression. We performed a rigorous benchmarking exercise on two datasets and present results across 13 clinically actionable prediction endpoints and 6 machine learning models. To the best of our knowledge, our results are the first to show that it is possible to predict disease progression using POMs and demographic data in the context of both clinical trials and smartphone-base studies by using two datasets. Moreover, we investigate our models to understand the impact of different POMs and demographics on model performance through feature ablation studies. We also show that model performance is similar across different demographic subgroups (based on age and sex). To enable this work, we developed an end-to-end reusable pre-processing and machine learning framework which allows quicker experimentation over disparate MS datasets. ","Disability prediction in multiple sclerosis using performance outcome
measures and demographic data",2,"['Can we move away from in-clinic towards at-home measurements for multiple sclerosis predictions? Our work investigates multiple different endpoints on 2 datasets - one from a clinical study, one from mobile devices. #CHIL2022\n\n', '@d_mincu @negar_rz @kat_heller @JessicaSchrouff @weballergy.']",22,04,293
404,52,1230685393223372800,106843613,Jacob Haqq Misra,"New paper by myself, @ravi_kopparapu & @nogreenstars - we argue that technosignatures are a logical continuation of the search for biosignatures, with the search for technosignatures providing important information about the future of civilization on Earth ",https://arxiv.org/abs/2002.08776,"The search for spectroscopic biosignatures with the next-generation of space telescopes could provide observational constraints on the abundance of exoplanets with signs of life. An extension of this spectroscopic characterization of exoplanets is the search for observational evidence of technology, known as technosignatures. Current mission concepts that would observe biosignatures from ultraviolet to near-infrared wavelengths could place upper limits on the fraction of planets in the galaxy that host life, although such missions tend to have relatively limited capabilities of constraining the prevalence of technosignatures at mid-infrared wavelengths. Yet searching for technosignatures alongside biosignatures would provide important knowledge about the future of our civilization. If planets with technosignatures are abundant, then we can increase our confidence that the hardest step in planetary evolution--the Great Filter--is probably in our past. But if we find that life is commonplace while technosignatures are absent, then this would increase the likelihood that the Great Filter awaits to challenge us in the future. ",Observational Constraints on the Great Filter,1,"['New paper by myself, @ravi_kopparapu & @nogreenstars - we argue that technosignatures are a logical continuation of the search for biosignatures, with the search for technosignatures providing important information about the future of civilization on Earth\n']",20,02,263
405,88,1337378119821488128,980524179739963392,James Lucas,"1/6 Check our new paper on Flexible Few-Shot Learning! We extend FSL to include flexible classification criterion in each episode. Unsupervised representation learning beats supervised approaches. Short version @ #NeurIPS2020 metalearn workshop, 10am EST 2/6 In the FFSL setting, each episode includes a support set, query set, AND a hidden context. The context determines how each example is classified and varies across episodes. At test time, we see novel contexts that weren’t included in the training set. 3/6 We build two new benchmark datasets based on Celeb-A and Zappos-50k, where the context is determined by a choice of object attributes. Existing episodic training methods perform poorly on these tasks, but unsupervised representation learning gets closer to oracle performance 4/6 Why do existing methods fail? We analyze training a protonet in a toy FFSL problem and identify that, unlike in the FSL setting, the protonet is encouraged to destroy information relevant for test-time contexts. 5/6 The unsupervised representation learning approaches (U/UFT) learn localized features relevant to the test-time context. This allows them to generalize well. 6/6 Many additional results included in our paper: Including: careful evaluation of our proposed method, further Celeb-A + Zappos-50k experiments, evaluation on an ImageNet FFSL task, + much more! ",https://arxiv.org/abs/2012.05895,"Semantic concepts are frequently defined by combinations of underlying attributes. As mappings from attributes to classes are often simple, attribute-based representations facilitate novel concept learning with zero or few examples. A significant limitation of existing attribute-based learning paradigms, such as zero-shot learning, is that the attributes are assumed to be known and fixed. In this work we study the rapid learning of attributes that were not previously labeled. Compared to standard few-shot learning of semantic classes, in which novel classes may be defined by attributes that were relevant at training time, learning new attributes imposes a stiffer challenge. We found that supervised learning with training attributes does not generalize well to new test attributes, whereas self-supervised pre-training brings significant improvement. We further experimented with random splits of the attribute space and found that predictability of test attributes provides an informative estimate of a model's generalization ability. ",Few-Shot Attribute Learning,6,"['1/6 Check our new paper on Flexible Few-Shot Learning! \n\nWe extend FSL to include flexible classification criterion in each episode. Unsupervised representation learning beats supervised approaches.\n\nShort version @ #NeurIPS2020 metalearn workshop, 10am EST ', '2/6 In the FFSL setting, each episode includes a support set, query set, AND a hidden context.\n\nThe context determines how each example is classified and varies across episodes.\n\nAt test time, we see novel contexts that weren’t included in the training set. 
https://t.co/fRqRs2vmGd', '3/6 We build two new benchmark datasets based on Celeb-A and Zappos-50k, where the context is determined by a choice of object attributes.\n\nExisting episodic training methods perform poorly on these tasks, but unsupervised representation learning gets closer to oracle performance https://t.co/z0Vrt9abNy', '4/6 Why do existing methods fail?\n\nWe analyze training a protonet in a toy FFSL problem and identify that, unlike in the FSL setting, the protonet is encouraged to destroy information relevant for test-time contexts. https://t.co/7GAiqlxvuB', '5/6 The unsupervised representation learning approaches (U/UFT) learn localized features relevant to the test-time context. This allows them to generalize well. https://t.co/zLNYuLhpjw', '6/6 Many additional results included in our paper: \n\nIncluding: careful evaluation of our proposed method, further Celeb-A + Zappos-50k experiments, evaluation on an ImageNet FFSL task, + much more! https://t.co/4zAhXIBFQp']",20,12,1414
406,74,1253247752848519171,841031248839618560,Relja Arandjelović,"New paper by my student Ignacio Rocco - a 10x faster and less memory hungry NCNet with equivalent results (or more accurate with a smaller speedup) . Code and models available @ducha_aiki You mean to the challenge? I'll mention it but I'm not sure we have the time (NeurIPS, personal reasons, etc)",https://arxiv.org/abs/2004.10566,"In this work we target the problem of estimating accurately localised correspondences between a pair of images. We adopt the recent Neighbourhood Consensus Networks that have demonstrated promising performance for difficult correspondence problems and propose modifications to overcome their main limitations: large memory consumption, large inference time and poorly localised correspondences. Our proposed modifications can reduce the memory footprint and execution time more than $10\times$, with equivalent results. This is achieved by sparsifying the correlation tensor containing tentative matches, and its subsequent processing with a 4D CNN using submanifold sparse convolutions. Localisation accuracy is significantly improved by processing the input images in higher resolution, which is possible due to the reduced memory footprint, and by a novel two-stage correspondence relocalisation module. The proposed Sparse-NCNet method obtains state-of-the-art results on the HPatches Sequences and InLoc visual localisation benchmarks, and competitive results in the Aachen Day-Night benchmark. ","Efficient Neighbourhood Consensus Networks via Submanifold Sparse
Convolutions",2,"['New paper by my student Ignacio Rocco - a 10x faster and less memory hungry NCNet with equivalent results (or more accurate with a smaller speedup) . Code and models available ', ""@ducha_aiki You mean to the challenge? I'll mention it but I'm not sure we have the time (NeurIPS, personal reasons, etc)""]",20,04,318
407,54,1461363725361811459,773137323907223552,katie breivik,"New software paper from @tomjwagg, myself, and Selma de Mink on arXiv today: ! If you’ve ever tried to calculate the average LISA signal to noise ratio for binary systems, you know there are A LOT of opportunities for missed factors of 2 or 10/3 or 98/5. We ran into this and rederived the SNR so many times that we said ENOUGH! Enter LEGWORK. A python package that does the legwork (đ) for calculating lowest order post-Newtonian gravitational wave evolution of binary populations and SNRs for GW detectors like @LISACommunity. @tomjwagg is an absolute visionary when it comes to visualization, demos, and tutorials; so please do check them out here And of course, let us know if you run into any trouble! Finally, keep an eye out for a couple of papers in the next few days on the projects that inspired us to put LEGWORK together. đđđ (spoiler: get hype for some compact binary populations y’all!!) @AstrophysicalAC @tomjwagg So glad you think so! I have definitely found myself SO frustrated chasing derivations that I figured it was worth the package just for that! @LISACommunity We are heavily stellar-origin binary biased so we really only do the lowest-order PN stuff (think sine waves with an amplitude modulation). @LISACommunity BUT! we optimized the code so that you can compute literally TENS-OF-MILLIONS of SNRS in just a couple of mins! So you win some you lose some (a lot of) LISA sources đ OMG I also forgot my absolute FAVORITE part of the paper itself: We used @rodluger's showyourwork package (), so you can see *exactly* what code was used to compile the figures in the paper. You can see the paper source code here: @duetosymmetry @tomjwagg Hahahaha I’m so glad you caught my Easter egg joke! It does prove the point!! @duetosymmetry @tomjwagg You (and everyone else!) can do: > pip install legwork Congratulations!! đ„ł @AstroVivi @exoplaneteer @tomjwagg Thanks Vivi!!",https://arxiv.org/abs/2111.08717,"We present LEGWORK (LISA Evolution and Gravitational Wave Orbit Kit), an open-source Python package for making predictions about stellar-origin gravitational wave sources and their detectability in LISA or other space-based gravitational wave detectors. LEGWORK can be used to evolve the orbits of sources due to gravitational wave emission, calculate gravitational wave strains (using post-Newtonian approximations), compute signal-to-noise ratios and visualise the results. It can be applied to a variety of potential sources, including binaries consisting of white dwarfs, neutron stars and black holes. Although we focus on double compact objects, in principle LEGWORK can be used for any system with a user-specified orbital evolution, such as those affected by a third object or gas drag. We optimised the package to make it efficient for use in population studies which can contain tens-of-millions of sources. This paper describes the package and presents several potential use cases. We explain in detail the derivations of the expressions behind the package as well as identify and clarify some discrepancies currently present in the literature. We hope that LEGWORK will enable and accelerate future studies triggered by the rapidly growing interest in gravitational wave sources. ","LEGWORK: A python package for computing the evolution and detectability
of stellar-origin gravitational-wave sources with space-based detectors",11,"['New software paper from @tomjwagg, myself, and Selma de Mink on arXiv today: !\nIf you’ve ever tried to calculate the average LISA signal to noise ratio for binary systems, you know there are A LOT of opportunities for missed factors of 2 or 10/3 or 98/5. ', 'We ran into this and rederived the SNR so many times that we said ENOUGH! Enter LEGWORK. A python package that does the legwork (đ) for calculating lowest order post-Newtonian gravitational wave evolution of binary populations and SNRs for GW detectors like @LISACommunity.', '@tomjwagg is an absolute visionary when it comes to visualization, demos, and tutorials; so please do check them out here https://t.co/iNAxUGqYEE\nAnd of course, let us know if you run into any trouble!', 'Finally, keep an eye out for a couple of papers in the next few days on the projects that inspired us to put LEGWORK together. \nđđđ (spoiler: get hype for some compact binary populations y’all!!)', '@AstrophysicalAC @tomjwagg So glad you think so! I have definitely found myself SO frustrated chasing derivations that I figured it was worth the package just for that!', '@LISACommunity We are heavily stellar-origin binary biased so we really only do the lowest-order PN stuff (think sine waves with an amplitude modulation).', '@LISACommunity BUT! we optimized the code so that you can compute literally TENS-OF-MILLIONS of SNRS in just a couple of mins! So you win some you lose some (a lot of) LISA sources đ', ""OMG I also forgot my absolute FAVORITE part of the paper itself:\nWe used @rodluger's showyourwork package (https://t.co/4LNGL6q85T), so you can see *exactly* what code was used to compile the figures in the paper. You can see the paper source code here: https://t.co/zhG48TziuS"", '@duetosymmetry @tomjwagg Hahahaha I’m so glad you caught my Easter egg joke! It does prove the point!!', '@duetosymmetry @tomjwagg You (and everyone else!) can do:\n> pip install legwork\n\nCongratulations!! đ„ł', '@AstroVivi @exoplaneteer @tomjwagg Thanks Vivi!!']",21,11,1927
408,221,1313381016309039105,1051867025319047169,Tycho van der Ouderaa,"New paper. Happy to share that our paper extending DL-based image registration to group-wise registration was accepted for the Thoracic Image Analysis workshop at @MICCAI2020. arXiv: w/ @ivanaisgum, Wouter B. Veldhuis and @BobdeVos @Quantib #MICCAI2020 ",https://arxiv.org/abs/2010.00231,"Deep neural networks are increasingly used for pair-wise image registration. We propose to extend current learning-based image registration to allow simultaneous registration of multiple images. To achieve this, we build upon the pair-wise variational and diffeomorphic VoxelMorph approach and present a general mathematical framework that enables both registration of multiple images to their geodesic average and registration in which any of the available images can be used as a fixed image. In addition, we provide a likelihood based on normalized mutual information, a well-known image similarity metric in registration, between multiple images, and a prior that allows for explicit control over the viscous fluid energy to effectively regularize deformations. We trained and evaluated our approach using intra-patient registration of breast MRI and Thoracic 4DCT exams acquired over multiple time points. Comparison with Elastix and VoxelMorph demonstrates competitive quantitative performance of the proposed method in terms of image similarity and reference landmark distances at significantly faster registration. ",Deep Group-wise Variational Diffeomorphic Image Registration,1,"['New paper. Happy to share that our paper extending DL-based image registration to group-wise registration was accepted for the Thoracic Image Analysis workshop at @MICCAI2020.\n\narXiv: \nw/ @ivanaisgum, Wouter B. Veldhuis and @BobdeVos\n\n@Quantib #MICCAI2020 ']",20,10,266
409,112,1225337667082215425,75249390,Axel Maas,"We have put out a new paper, among others with @SimonPlaetzer, on the 'valence Higgs' of the proton. That may sounds weird, as usually it is told of only three 'valence quarks' in the proton. So let me explain a little - you can find the paper at A particle is called valence, if it contributes to the quantum number of the proton. The quarks make up its electric charge and baryon number. So why would the Higgs be there? Which quantum number does it contribute to? It is quite involved,and has to do with weak interactions. The weak interactions couple to something we call flavor, and which is usually said to make the difference between proton and neutron. The underlying theory is more complicated, and formally says something like flavor is not observable - see the review To get a physical version of flavor, you need to add something which acts 'like flavor', and creates the distinction. In the standard model, the only particle which can do this without changing the spin is the Higgs. This has been worked out for leptons in 1980 by Fröhlich et al. However, a proton is a composite particle, and things are a bit more weird. But essentially it needs to have three quarks and a Higgs to get all quantum numbers right and observable in the standard model. We deduced this in But this should have observable consequences, if we have enough energy to get the sluggish Higgs to react - at least something like the LHC. This paper is our first attempt to determine how much we would see, and where we would see it. We find that the effect needs to be tiny, but not impossible. It is worthwhile to invest more effort into it, as we did a lot of very crude estimates. But confirming it would be a big step in understanding the field theory underlying the standard model.",https://arxiv.org/abs/2002.01688,"Non-perturbative gauge-invariance under the strong and the weak interactions dictates that the proton contains a non-vanishing valence contribution from the Higgs particle. By introducing an additional parton distribution function (PDF), we investigate the experimental consequences of this prediction. The Herwig 7 event generator and a parametrized CMS detector simulation are used to obtain predictions for a scenario amounting to the LHC Run II data set. We use those to assess the impact of the Higgs PDF on the pp->ttbar process in the single lepton final state. Comparing to nominal simulation we derive expected limits as a function of the shape of the valence Higgs PDF. We also investigate the process pp->ttZ at the parton level to add further constraints. ",Constraining the Higgs valence contribution in the proton,7,"[""We have put out a new paper, among others with @SimonPlaetzer, on the 'valence Higgs' of the proton. That may sounds weird, as usually it is told of only three 'valence quarks' in the proton. So let me explain a little - you can find the paper at "", 'A particle is called valence, if it contributes to the quantum number of the proton. The quarks make up its electric charge and baryon number. So why would the Higgs be there? 
Which quantum number does it contribute to?\n\nIt is quite involved,and has to do with weak interactions.', 'The weak interactions couple to something we call flavor, and which is usually said to make the difference between proton and neutron.\n\nThe underlying theory is more complicated, and formally says something like flavor is not observable - see the review https://t.co/jVo2JEcpFO', ""To get a physical version of flavor, you need to add something which acts 'like flavor', and creates the distinction. In the standard model, the only particle which can do this without changing the spin is the Higgs. This has been worked out for leptons in 1980 by Fröhlich et al."", 'However, a proton is a composite particle, and things are a bit more weird. But essentially it needs to have three quarks and a Higgs to get all quantum numbers right and observable in the standard model. We deduced this in https://t.co/EhNqRI8CIn', 'But this should have observable consequences, if we have enough energy to get the sluggish Higgs to react - at least something like the LHC.\n\nThis paper is our first attempt to determine how much we would see, and where we would see it.', 'We find that the effect needs to be tiny, but not impossible.\n\nIt is worthwhile to invest more effort into it, as we did a lot of very crude estimates. But confirming it would be a big step in understanding the field theory underlying the standard model.']",20,02,1794
410,65,1395009341569310722,1246943809,Philipp Moesta,"New group paper: First paper led by PhD student Swapnil Shankar at @uva_api and @GRAPPAinstitute This is our first attempt at end-to-end modelling of type Ic-bl supernova. We extract the engine dynamics from a full 3D GRMHD CCSN simulation performed with and use the resulting parameters as engine models for full-star jet-breakout simulations with JET. Finally we perform radiation-transport calculation with SEDONA to get lightcurves and spectras for these events. We find that the engine dynamics from 3D CCSN jet formation simulations are consistent with type Ic-bl supernova lightcurves and spectra. Lightcurves and spectra also seem robust with respect to uncertainties in engine parameter extraction. This is very encouraging to keep improving our end-to-end modelling approach. The most important step will be to do full 3D jet-propagation simulations in the future. Huge congratulations to Swapnil for carrying out this project and working with multiple different codes and datasets. Such a pleasure to be working with him. Also, many thanks to collaborators Jennifer Barnes, Paul Duffel and Dan Kasen whose work we relied on heavily.",https://arxiv.org/abs/2105.08092,"A subset of type Ic supernovae (SNe Ic), broad-lined SNe Ic (SNe Ic-bl), show unusually high kinetic energies ($\sim 10^{52}$ erg) which cannot be explained by the energy supplied by neutrinos alone. Many SNe Ic-bl have been observed in coincidence with long gamma-ray bursts (GRBs) which suggests a connection between SNe and GRBs. A small fraction of core-collapse supernovae (CCSNe) form a rapidly-rotating and strongly-magnetized protoneutron star (PNS), a proto-magnetar. Jets from such magnetars can provide the high kinetic energies observed in SNe Ic-bl and also provide the connection to GRBs. In this work we use the jetted outflow produced in a 3D CCSN simulation from a consistently formed proto-magnetar as the central engine for full-star explosion simulations. We extract a range of central engine parameters and find that the extracted engine energy is in the range of $6.231 \times 10^{51}-1.725 \times 10^{52}$ erg, the engine time-scale in the range of $0.479-1.159$ s and the engine half-opening angle in the range of $\sim 9-19^{\circ}$. Using these as central engines, we perform 2D special-relativistic (SR) hydrodynamic (HD) and radiation transfer simulations to calculate the corresponding light curves and spectra. We find that these central engine parameters successfully produce SNe Ic-bl which demonstrates that jets from proto-magnetars can be viable engines for SNe Ic-bl. We also find that only the central engines with smaller opening angles ($\sim 10^{\circ}$) form a GRB implying that GRB formation is likely associated with narrower jet outflows and Ic-bl's without GRBs may be associated with wider outflows. ","Proto-magnetar jets as central engines for broad-lined type Ic
supernovae",5,"['New group paper: First paper led by PhD student Swapnil Shankar at @uva_api and @GRAPPAinstitute ', 'This is our first attempt at end-to-end modelling of type Ic-bl supernova. We extract the engine dynamics from a full 3D GRMHD CCSN simulation performed with https://t.co/m9JeRT3bki and use the resulting parameters as engine models for full-star jet-breakout simulations with JET.', 'Finally we perform radiation-transport calculation with SEDONA to get lightcurves and spectras for these events.\nWe find that the engine dynamics from 3D CCSN jet formation simulations are consistent with type Ic-bl supernova lightcurves and spectra. https://t.co/QdQlPf3teH', 'Lightcurves and spectra also seem robust with respect to uncertainties in engine parameter extraction. This is very encouraging to keep improving our end-to-end modelling approach. The most important step will be to do full 3D jet-propagation simulations in the future.', 'Huge congratulations to Swapnil for carrying out this project and working with multiple different codes and datasets. Such a pleasure to be working with him. Also, many thanks to collaborators Jennifer Barnes, Paul Duffel and Dan Kasen whose work we relied on heavily.']",21,05,1171
411,150,1234662387724062722,830120476282408960,Nienke van der Marel,"A new study on transition disks by @uvic @PHASTatUVIC @UVicScience PhD student Logan Francis and myself, with several new insights! In this work we focus on inner disks, detected and resolved by ALMA continuum observations. 1/5 @uvic @PHASTatUVIC @UVicScience In a sample of 38 targets, 18 inner disks are detected inside the cavities and 14 of those are resolved. 9 of those are definitely misaligned with the outer disk, and the other 5 might be as well. In fact, it is possible that ALL inner disks are misaligned w.r.t. the outer disk. @uvic @PHASTatUVIC @UVicScience Furthermore, the dust emission of the inner disks is shown to be optically thin, so we can derived dust masses. They are NOT correlated with NIR excess, which was the old method of determining inner disks. The size-luminosity relation of these disks suggest radial drift. 3/5 @uvic @PHASTatUVIC @UVicScience With the stellar accretion rate we compute a GDR of 10^4 for most inner disks, except PDS70, the only known disk with a detected planet! We formulate a hypothesis that planets can only be detected in a disk when still accreting, and material is flowing through the gap. 4/5 This amazing work was entirely done with @almaobs ALMA archival data, with no fewer than 38 datasets or 30 Tb of data, which was downloaded and reduced with the use of @ComputeCanada 5/5",https://arxiv.org/abs/2003.00079,"Transition disks with large inner dust cavities are thought to host massive companions. However, the disk structure inside the companion orbit and how material flows toward an actively accreting star remain unclear. We present a high resolution continuum study of inner disks in the cavities of 38 transition disks. Measurements of the dust mass from archival Atacama Large Millimeter/Submillimeter Array observations are combined with stellar properties and spectral energy distributions to assemble a detailed picture of the inner disk. An inner dust disk is detected in 18 of 38 disks in our sample. Of the 14 resolved disks, 9 are significantly misaligned with the outer disk. The near-infrared excess is uncorrelated with the mm dust mass of the inner disk. The size-luminosity correlation known for protoplanetary disks is recovered for the inner disks as well, consistent with radial drift. The inner disks are depleted in dust relative to the outer disk and their dust mass is uncorrelated with the accretion rates. This is interpreted as the result of radial drift and trapping by planets in a low $\alpha$ ($\sim 10^{-3}$) disk, or a failure of the $\alpha$-disk model to describe angular momentum transport and accretion. The only disk in our sample with confirmed planets in the gap, PDS 70, has an inner disk with a significantly larger radius and lower inferred gas-to-dust ratio than other disks in the sample. We hypothesize that these inner disk properties and the detection of planets are due to the gap having only been opened recently by young, actively accreting planets. ","Dust depleted inner disks in a large sample of transition disks through
long-baseline ALMA observations",5,"['A new study on transition disks by @uvic @PHASTatUVIC @UVicScience PhD student Logan Francis and myself, with several new insights! In this work we focus on inner disks, detected and resolved by ALMA continuum observations. 1/5\n\n ', '@uvic @PHASTatUVIC @UVicScience In a sample of 38 targets, 18 inner disks are detected inside the cavities and 14 of those are resolved. 9 of those are definitely misaligned with the outer disk, and the other 5 might be as well. In fact, it is possible that ALL inner disks are misaligned w.r.t. the outer disk. https://t.co/vRB8PVvtha', '@uvic @PHASTatUVIC @UVicScience Furthermore, the dust emission of the inner disks is shown to be optically thin, so we can derived dust masses. They are NOT correlated with NIR excess, which was the old method of determining inner disks. The size-luminosity relation of these disks suggest radial drift. 3/5', '@uvic @PHASTatUVIC @UVicScience With the stellar accretion rate we compute a GDR of 10^4 for most inner disks, except PDS70, the only known disk with a detected planet! We formulate a hypothesis that planets can only be detected in a disk when still accreting, and material is flowing through the gap. 4/5 https://t.co/y9MjQTlzTF', 'This amazing work was entirely done with @almaobs ALMA archival data, with no fewer than 38 datasets or 30 Tb of data, which was downloaded and reduced with the use of @ComputeCanada 5/5']",20,03,1368
412,130,1134141334414139392,449236360,Antoine Cully,"Glad to announce that I will be presenting my new (and first single-authored) paper on “Autonomous skill discovery with Quality-Diversity and Unsupervised Descriptors” @GECCO2019. The preprint is available here: @ImperialAI @ICComputing It introduces AURORA: AUtonomous RObots that Realize their Abilities. It uses auto-encoders to define descriptors in QD algorithms, while using QD algorithms to generate training datasets for the AE. Combined together this allows robots to discover their range of skills.",https://arxiv.org/abs/1905.11874,"Quality-Diversity optimization is a new family of optimization algorithms that, instead of searching for a single optimal solution to solving a task, searches for a large collection of solutions that all solve the task in a different way. This approach is particularly promising for learning behavioral repertoires in robotics, as such a diversity of behaviors enables robots to be more versatile and resilient. However, these algorithms require the user to manually define behavioral descriptors, which is used to determine whether two solutions are different or similar. The choice of a behavioral descriptor is crucial, as it completely changes the solution types that the algorithm derives. In this paper, we introduce a new method to automatically define this descriptor by combining Quality-Diversity algorithms with unsupervised dimensionality reduction algorithms. This approach enables robots to autonomously discover the range of their capabilities while interacting with their environment. The results from two experimental scenarios demonstrate that robot can autonomously discover a large range of possible behaviors, without any prior knowledge about their morphology and environment. Furthermore, these behaviors are deemed to be similar to handcrafted solutions that uses domain knowledge and significantly more diverse than when using existing unsupervised methods. ","Autonomous skill discovery with Quality-Diversity and Unsupervised
Descriptors",2,"['Glad to announce that I will be presenting my new (and first single-authored) paper on âAutonomous skill discovery with Quality-Diversity and Unsupervised Descriptorsâ @GECCO2019.\nThe preprint is available here: @ImperialAI @ICComputing ', 'It introduces AURORA: AUtonomous RObots that Realize their Abilities.\nIt uses auto-encoders to define descriptors in QD algorithms, while using QD algorithms to generate training datasets for the AE. Combined together this allows robots to discover their range of skills.']",19,05,522
413,160,1518595735935422465,913799254237437952,Shruti Phadke,"New #ICWSM22 @icwsm paper with @hide_yourself & @tanmit modeling various radicalization phases in online conspiracy theory discussion participants. We aim at providing empirical insights into conspiracy radicalization as well as the recovery process. We find that users with increasing conspiracy theory discussion engagement progress successively through various radicalization phases, whereas users who eventually disengage from conspiracy theory discussions show distinct behavior. Specifically, users who disengage, limit their conspiracy theory discussions to specialized topics, show low language conformity with conspiracy theory discussion communities, and participate in diverse discussion groups.",https://arxiv.org/abs/2204.10729,"The disruptive offline mobilization of participants in online conspiracy theory (CT) discussions has highlighted the importance of understanding how online users may form radicalized conspiracy beliefs. While prior work researched the factors leading up to joining online CT discussions and provided theories of how conspiracy beliefs form, we have little understanding of how conspiracy radicalization evolves after users join CT discussion communities. In this paper, we provide the empirical modeling of various radicalization phases in online CT discussion participants. To unpack how conspiracy engagement is related to radicalization, we first characterize the users' journey through CT discussions via conspiracy engagement pathways. Specifically, by studying 36K Reddit users through their 169M contributions, we uncover four distinct pathways of conspiracy engagement: steady high, increasing, decreasing, and steady low. We further model three successive stages of radicalization guided by prior theoretical works. Specific sub-populations of users, namely those on steady high and increasing conspiracy engagement pathways, progress successively through various radicalization stages. In contrast, users on the decreasing engagement pathway show distinct behavior: they limit their CT discussions to specialized topics, participate in diverse discussion groups, and show reduced conformity with conspiracy subreddits. By examining users who disengage from online CT discussions, this paper provides promising insights about conspiracy recovery process. ","Pathways through Conspiracy: The Evolution of Conspiracy Radicalization
through Engagement in Online Conspiracy Discussions",3,"['New #ICWSM22 @icwsm paper with @hide_yourself & @tanmit modeling various radicalization phases in online conspiracy theory discussion participants. \nWe aim at providing empirical insights into conspiracy radicalization as well as the recovery process.', 'We find that users with increasing conspiracy theory discussion engagement progress successively through various radicalization phases, whereas users who eventually disengage from conspiracy theory discussions show distinct behavior.', 'Specifically, users who disengage, limit their conspiracy theory discussions to specialized topics, show low language conformity with conspiracy theory discussion communities, and participate in diverse discussion groups.']",22,04,714
414,40,1331614517634224132,322460769,Yoav Artzi,"New NLP+robotics+vision paper: Few-shot Object Grounding and Mapping for Natural Language Robot Instruction Following @ CoRL 2020 Work done by Valts Blukis Three core contributions make this happen, let’s upack them ... First contribution: a general few-shot language-conditioned object segmentation method. We align references to objects in language to their observations in the world via a database of exemplars. Need to reason about new objects? Add exemplars to the database! Training requires quite a bit of data to generalize well to unseen objects, *but* the nice part: the whole thing is trained from automatically generated augmented reality data using synthetic ShapeNet objects Second contribution: given the alignment, we formalize and build a learned object context map. The map masks over object identities and easily handles new objects. It explicitly encodes text context information into positions in the spatial environment. Third contribution: integrating the map into a complete policy (natural language instructions, raw observations) -> continuous velocity control. The policy generates plans on top of our context maps as visitation distributions. First stage mapping is through a series of projections and using our LingUNet architecture to predict the visitation distributions on top of the map (we first introduced this method in our CoRL 2018 paper) Agent never saw all these objects during learning, just a few exemplars. Bonus 1: modeling of concepts like objects and trajectories is strong inductive bias â> outperform systems that learn with hundreds of demonstrations with new objects. Let’s look at the complete video again Bonus2: interpretability via explicit alignments! Another example ... There’s a much longer talk with more details from the recent SpLU workshop and also a brief overview from CoRL ",https://arxiv.org/abs/2011.07384,"We study the problem of learning a robot policy to follow natural language instructions that can be easily extended to reason about new objects. We introduce a few-shot language-conditioned object grounding method trained from augmented reality data that uses exemplars to identify objects and align them to their mentions in instructions. We present a learned map representation that encodes object locations and their instructed use, and construct it from our few-shot grounding output. We integrate this mapping approach into an instruction-following policy, thereby allowing it to reason about previously unseen objects at test-time by simply adding exemplars. We evaluate on the task of learning to map raw observations and instructions to continuous control of a physical quadcopter. Our approach significantly outperforms the prior state of the art in the presence of new objects, even when the prior approach observes all objects during training. ","Few-shot Object Grounding and Mapping for Natural Language Robot
Instruction Following",9,"['New NLP+robotics+vision paper: Few-shot Object Grounding and Mapping for Natural Language Robot Instruction Following @ CoRL 2020\n\nWork done by Valts Blukis\nThree core contributions make this happen, letâs upack them ... ', 'First contribution: a general few-shot language-conditioned object segmentation method. We align references to objects in language to their observations in the world via a database of exemplars. Need to reason about new objects? Add exemplars to the database! https://t.co/ZZYmO1dxkK', 'Training requires quite a bit of data to generalize well to unseen objects, *but* the nice part: the whole thing is trained from automatically generated augmented reality data using synthetic ShapeNet objects https://t.co/kDAf7aXzLU', 'Second contribution: given the alignment, we formalize and build a learned object context map. The map masks over object identities and easily handles new objects. It explicitly encodes text context information into positions in the spatial environment. https://t.co/qjlpE7g29g', 'Third contribution: integrating the map into a complete policy (natural language instructions, raw observations) -> continuous velocity control. The policy generates plans on top of our context maps as visitation distributions. https://t.co/I8wb6Pfhin', 'First stage mapping is through a series of projections and using our LingUNet architecture to predict the visitation distributions on top of the map (we first introduced this method in our CoRL 2018 paper) https://t.co/UWYuHvvRJs', 'Agent never saw all these objects during learning, just a few exemplars. Bonus 1: modeling of concepts like objects and trajectories is strong inductive bias â> outperform systems that learn with hundreds of demonstrations with new objects. Letâs look at the complete video again https://t.co/pBZ8qYakTV', 'Bonus2: interpretability via explicit alignments! Another example ... https://t.co/M1jzdfz3Wz', 'Thereâs a much longer talk with more details https://t.co/41Isbg4JEt from the recent SpLU workshop and also a brief overview from CoRL https://t.co/fk8Fw7seXN']",20,11,1920
415,62,1230127951548690434,3325105445,Ben F. Maier,"In our new paper we show why lab-confirmed case numbers of the #Coronavirus #COVID19 do not grow exponentially: Answer: Effective containment policies isolate the healthy population well enough for case numbers to follow an algebraic scaling law. 1/9 classically, uncontained epidemic outbreaks can be modeled by means of the SIR model: individuals are either (S)usceptible, (I)nfectious, or (R)emoved from the process. ""S"" enter the ""I"" compartment with rate alpha, and ""I"" are transferred to ""R"" with rate beta 2/9 In this basic system, the number of infecteds grows exponentially, initially. So why don't we see that in the current outbreak? 3/9 The first thing one may think of is that symptomatic infecteds are usually quarantined. One can model that by including a quarantine rate kappa with which infecteds are isolated from the transmission process 4/9 However, this would still yield an exponential growth behavior, albeit with a slower velocity 5/9 Now, the implemented containment policies in several provinces in China did not only affect infecteds but also the general public. We can model that by including a public isolation rate kappa0 6/9 Doing this, we see that the exponential growth is suppressed by the depletion of susceptible individuals from the transmission process 7/9 additionally, the lab-confirmed cases are not reflected by the (I) compartment -- rather, we have to assume that infecteds can only be counted after a time lag, approximately when they are quarantined in a new compartment (X) which follows the empirically observed behavior. 8/9 our analysis material is available online: Thank you, @DirkBrockmann, for this fruitful and fast collaboration! 9/9 @balzacdiet @DirkBrockmann yes. this would be reflected in the infecteds' quarantine rate. we also tested a variation of that model where confirmation is decoupled from quarantine and find similar growth patterns. The growth behavior is dominated by the removal of healthy individuals from the process. @bene_beck @DirkBrockmann @ewyler so far, this seems to be the case. However, the model assumes that eventually all susceptibles become and remain isolated. This is impossible in reality, so we'll see what happens in the following few weeks. I suspect the model underestimates total number of cases (see Discuss.)",http://arxiv.org/abs/2002.07572,"The recent outbreak of COVID-19 in Mainland China is characterized by a distinctive algebraic, sub-exponential increase of confirmed cases during the early phase of the epidemic, contrasting an initial exponential growth expected for an unconstrained outbreak with sufficiently large reproduction rate. Although case counts vary significantly between affected provinces in Mainland China, the scaling law $t^{\mu}$ is surprisingly universal, with a range of exponents $\mu=2.1\pm0.3$. The universality of this behavior indicates that despite social, regional, demographical, geographical, and socio-economical heterogeneities of affected Chinese provinces, this outbreak is dominated by fundamental mechanisms that are not captured by standard epidemiological models. We show that the observed scaling law is a direct consequence of containment policies that effectively deplete the susceptible population. To this end we introduce a parsimonious model that captures both, quarantine of symptomatic infected individuals as well as population wide isolation in response to mitigation policies or behavioral changes. 
For a wide range of parameters, the model reproduces the observed scaling law in confirmed cases and explains the observed exponents. Quantitative fits to empirical data permit the identification of peak times in the number of asymptomatic or oligo-symptomatic, unidentified infected individuals, as well as estimates of local variations in the basic reproduction number. The model implies that the observed scaling law in confirmed cases is a direct signature of effective containment strategies and/or systematic behavioral changes that affect a substantial fraction of the susceptible population. These insights may aid the implementation of containment strategies in potential export induced COVID-19 secondary outbreaks elsewhere or similar future outbreaks of other emergent infectious diseases. ","Effective containment explains sub-exponential growth in confirmed cases
of recent COVID-19 outbreak in Mainland China",11,"['In our new paper we show why lab-confirmed case numbers of the #Coronavirus #COVID19 do not grow exponentially: \n\nAnswer: Effective containment policies isolate the healthy population well enough for case numbers to follow an algebraic scaling law. 1/9 ', 'classically, uncontained epidemic outbreaks can be modeled by means of the SIR model: individuals are either (S)usceptible, (I)nfectious, or (R)emoved from the process. \n\n""S"" enter the ""I"" compartment with rate alpha, and ""I"" are transferred to ""R"" with rate beta\n\n2/9 https://t.co/KRhNfSrMEq', ""In this basic system, the number of infecteds grows exponentially, initially.\n\nSo why don't we see that in the current outbreak?\n\n3/9 https://t.co/b3MRF664mO"", 'The first thing one may think of is that symptomatic infecteds are usually quarantined. One can model that by including a quarantine rate kappa with which infecteds are isolated from the transmission process\n\n4/9 https://t.co/1kJJDzFv50', 'However, this would still yield an exponential growth behavior, albeit with a slower velocity\n\n5/9 https://t.co/qVcV6BuSly', 'Now, the implemented containment policies in several provinces in China did not only affect infecteds but also the general public. We can model that by including a public isolation rate kappa0\n\n6/9 https://t.co/xVg0G1BVUP', 'Doing this, we see that the exponential growth is suppressed by the depletion of susceptible individuals from the transmission process\n\n7/9 https://t.co/nYBuuGrCb6', 'additionally, the lab-confirmed cases are not reflected by the (I) compartment -- rather, we have to assume that infecteds can only be counted after a time lag, approximately when they are quarantined in a new compartment (X) which follows the empirically observed behavior.\n\n8/9 https://t.co/6fPRq5IjBM', 'our analysis material is available online: \n\nhttps://t.co/jAobWd7nd8\n\nThank you, @DirkBrockmann, for this fruitful and fast collaboration!\n\n9/9', ""@balzacdiet @DirkBrockmann yes. this would be reflected in the infecteds' quarantine rate. we also tested a variation of that model where confirmation is decoupled from quarantine and find similar growth patterns. The growth behavior is dominated by the removal of healthy individuals from the process."", ""@bene_beck @DirkBrockmann @ewyler so far, this seems to be the case. However, the model assumes that eventually all susceptibles become and remain isolated. This is impossible in reality, so we'll see what happens in the following few weeks. I suspect the model underestimates total number of cases (see Discuss.)""]",20,02,2378
416,163,1433630032077885451,1106481910853701632,Kazuyuki Sekizawa,"A new #preprint from our collaboration with #BARC, India! We report an exploration of orientation effects in multinucleon transfer reaction: More ambitious we are, more delay in publishing a paper - we have to take a compromise between hope and reality..",https://arxiv.org/abs/2109.00674,"Background: Multinucleon transfer reactions at energies around the Coulomb barrier offer a vital opportunity to study rich physics of nuclear structure and dynamics. Despite the continuous development in the field, we have still limited knowledge about how deformation - one of the representative nuclear structures - affects multinucleon transfer reactions. Purpose: To shed light on the effect of deformation in multinucleon transfer processes, we study the $^{16}$O+$^{154}$Sm reaction at $E_{\rm lab}$=85 MeV (around the Coulomb barrier) and 134 MeV (substantially above the Coulomb barrier), where the target nucleus, $^{154}$Sm, is a well-established, deformed nucleus. Results: Angular distributions for elastic scattering and for various transfer channels were measured over a wide angular range at the BARC-TIFR pelletron-Linac accelerator facility, Mumbai. The $Q$-value- and angle-integrated isotope production cross sections have been extracted from the measured angular distributions. The experimental data are analyzed along with time-dependent Hartree-Fock calculations. For the lower incident energy case, we find a reasonable agreement between the measurements and the TDHF calculations for a-few-nucleon transfer channels; whereas TDHF underestimates cross sections for many-nucleon transfers, consistent with earlier works. On the other side, we find that calculated cross sections for secondary reaction products for the higher incident energy case, qualitatively explains the measured trends of isotopic distributions observed for the lower energy. The latter observation indicates possible underestimation of excitation energies in the present TDHF+GEMINI analysis. Although certain orientation effects were observed in TDHF results, it is difficult to disentangle them from the $Q$-value- and angle-integrated production cross sections. (Shortened due to the arXiv's length limit.) ","Reaction mechanism study for multinucleon transfer processes in
collisions of spherical and deformed nuclei at energies near and above the
Coulomb barrier: The $^{16}$O+$^{154}$Sm reaction",1,"['A new #preprint from our collaboration with #BARC, India!\n\nWe report an exploration of orientation effects in multinucleon transfer reaction: \n\nMore ambitious we are, more delay in publishing a paper - we have to take a compromise between hope and reality..']",21,09,261
417,213,1277770305390415872,103370610,Juniper Lovato,"My first paper on the arXiv! With @all_are @reharp @LHDnets, we propose a “distributed consent” model, consent conditional on that of your network connections, and we look at its impact on privacy and observability in social networks. #dataethics #privacy P.S. #firstpaperjitters",https://arxiv.org/abs/2006.16140,"Personal data are not discrete in socially-networked digital environments. A user who consents to allow access to their profile can expose the personal data of their network connections to non-consented access. Therefore, the traditional consent model (informed and individual) is not appropriate in social networks where informed consent may not be possible for all users affected by data processing and where information is distributed across users. Here, we outline the adequacy of consent for data transactions. Informed by the shortcomings of individual consent, we introduce both a platform-specific model of ""distributed consent"" and a cross-platform model of a ""consent passport."" In both models, individuals and groups can coordinate by giving consent conditional on that of their network connections. We simulate the impact of these distributed consent models on the observability of social networks and find that low adoption would allow macroscopic subsets of networks to preserve their connectivity and privacy. ","Limits of Individual Consent and Models of Distributed Consent in Online
Social Networks",2,"['My first paper on the arXiv! With @all_are @reharp @LHDnets, we propose a âdistributed consentâ model, consent conditional on that of your network connections, and we look at its impact on privacy and observability in social networks. #dataethics #privacy ', 'P.S. #firstpaperjitters']",20,06,293
418,102,1457535272678166528,60893773,James Bullock,"New paper led by @UCIPhysAstro PhD student @danphysic uses FIRE simulations to predict J-factors for DM-indirect detection @ Galactic Center âĄïžVelocity-dependent p & d-wave annihilation amplified compared to dark-matter-only expectations @UCIrvineGD For p-wave annihilation ~(v/c)^2, standard expectation is these models not detectable because v<<c in Milky Way. We find signal is ~10 times higher than you'd expect from dark-matter-only => p-wave models w/ correct thermal abundance may be within the range of detection! The reason for enhancement is that dark-matter particle velocities are significantly enhanced in the Galactic Center compared to dark-matter-only simulations - a result of galaxy formation What fantastic work by Daniel McKeown @danphysic and wonderful collaborators @AstronoMerc_ @ZachHafen @MBKplus @AndrewWetzel @linoush95 @PFHopkins_Astro @AstroBananna - Congratulations! @TillSawala @UCIPhysAstro @danphysic @UCIrvineGD For p & d-wave, yes. Dwarfs harder. Clusters easier though. @ProfTimTait @danphysic @AstronoMerc_ @ZachHafen @MBKplus @AndrewWetzel @linoush95 @PFHopkins_Astro @AstroBananna Yes! Simona was on @danphysic advancement so has a heads up!",https://arxiv.org/abs/2111.03076,"We use FIRE-2 zoom cosmological simulations of Milky Way size galaxy halos to calculate astrophysical J-factors for dark matter annihilation and indirect detection studies. In addition to velocity-independent (s-wave) annihilation cross sections $\sigma_v$, we also calculate effective J-factors for velocity-dependent models, where the annihilation cross section is either either p-wave ($\propto v^2/c^2$) or d-wave ($\propto v^4/c^4$). We use 12 pairs of simulations, each run with dark-matter-only (DMO) physics and FIRE-2 physics. We observe FIRE runs produce central dark matter velocity dispersions that are systematically larger than in DMO runs by factors of $\sim 2.5-4$. They also have a larger range of central ($\sim 400$ pc) dark matter densities than the DMO runs ($\rho_{\rm FIRE}/\rho_{\rm DMO} \simeq 0.5 - 3$) owing to the competing effects of baryonic contraction and feedback. At 3 degrees from the Galactic Center, FIRE J-factors are $5-50$ (p-wave) and $15-500$ (d-wave) times higher than in the DMO runs. The change in s-wave signal at 3 degrees is more modest and can be higher or lower ($\sim 0.3-6$), though the shape of the emission profile is flatter (less peaked towards the Galactic Center) and more circular on the sky in FIRE runs. Our results for s-wave are broadly consistent with the range of assumptions in most indirect detection studies. We observe p-wave J-factors that are significantly enhanced compared to most past estimates. We find that thermal models with p-wave annihilation may be within range of detection in the near future. ","Amplified J-factors in the Galactic Center for velocity-dependent
darkmatter annihilation in FIRE simulations",6,"['New paper led by @UCIPhysAstro PhD student @danphysic uses FIRE simulations to predict J-factors for DM-indirect detection @ Galactic Center âĄïžVelocity-dependent p & d-wave annihilation amplified compared to dark-matter-only expectations @UCIrvineGD ', ""For p-wave annihilation ~(v/c)^2, standard expectation is these models not detectable because v<<c in Milky Way. We find signal is ~10 times higher than you'd expect from dark-matter-only => p-wave models w/ correct thermal abundance may be within the range of detection! https://t.co/QAT5V6A4jF"", 'The reason for enhancement is that dark-matter particle velocities are significantly enhanced in the Galactic Center compared to dark-matter-only simulations - a result of galaxy formation https://t.co/cjtJ9xRyDs', 'What fantastic work by Daniel McKeown @danphysic and wonderful collaborators @AstronoMerc_ @ZachHafen @MBKplus @AndrewWetzel @linoush95 @PFHopkins_Astro @AstroBananna - Congratulations! https://t.co/IzgRLdyiwO', '@TillSawala @UCIPhysAstro @danphysic @UCIrvineGD For p & d-wave, yes. Dwarfs harder. Clusters easier though.', '@ProfTimTait @danphysic @AstronoMerc_ @ZachHafen @MBKplus @AndrewWetzel @linoush95 @PFHopkins_Astro @AstroBananna Yes! Simona was on @danphysic advancement so has a heads up!']",21,11,1224
419,123,1179579092112265217,1034666951258193920,Tim Waters,"The starting point of accretion theory is the famous Bondi solution. In my new paper I show this solution is unattainable - even in an idealized universe. The tiniest density inhomogeneity introduced at ""infinity"" creates outflows I posted movies of these simulations here ",https://arxiv.org/abs/1910.01106,"The classic Bondi solution remains a common starting point both for studying black hole growth across cosmic time in cosmological simulations and for smaller scale simulations of AGN feedback. In nature, however, there will be inhomogenous distributions of rotational velocity and density along the outer radius ($R_o$) marking the sphere of influence of a black hole. While there have been many studies of how the Bondi solution changes with a prescribed angular momentum boundary condition, they have all assumed a constant density at $R_o$. In this Letter, we show that a non-uniform density at $R_o$ causes a meridional flow and due to conservation of angular momentum, the Bondi solution qualitatively changes into an inflow-outflow solution. Using physical arguments, we analytically identify the critical logarithmic density gradient $|\partial{\ln{\rho}}/\partial{\theta}|$ above which this change of the solution occurs. For realistic $R_o$, this critical gradient is less than 0.01 and tends to 0 as $R_o \rightarrow \infty$. We show using numerical simulations that, unlike for solutions with an imposed rotational velocity, the accretion rate for solutions under an inhomogenous density boundary condition remains constant at nearly the Bondi rate $\dot{M}_B$, while the outflow rate can greatly exceed $\dot{M}_B$. ",Outflows from inflows: the nature of Bondi-like accretion,2,"['The starting point of accretion theory is the famous Bondi solution. In my new paper I show this solution is unattainable - even in an idealized universe. The tiniest density inhomogeneity introduced at ""infinity"" creates outflows ', 'I posted movies of these simulations here https://t.co/P4KC4Uokkl']",19,10,286
420,19,1278025915683962880,804397363402067968,Timothy Raben,"Cool new paper out I'm really proud of this one! What's the idea: build a lattice version of ""radial quantization"" in order to study a theory like the 3d Ising model. This is the nth in a series of papers my collaborators have been writing about doing lattice field theory on non-euclidean manifolds. What is radial quantization? Basically it's like the conformal map that takes you from the plane to the sphere (really a cylinder with spherical cross section) Why is it interesting? Traditional euclidean lattice FT has discrete transnational symmetry, but isn't rotationally invariant and you can't see large scales without using huge lattices (computationally a pain!) Radial quantization includes exponentially large scales and we show our QFE procedure restores rotational symmetry. One of the fun things we found is that you have to include a Ricci curvature term to get ""good"" results. This term vanishes in the continuum limit, but including it leads to much, much faster convergence. The end result is that we are ready to do real high precision tests of the 3d Ising Model. This is a complimentary approach to things like the ""conformal bootstrap"". We are directly probing the specific theory of 3d Ising.",https://arxiv.org/abs/2006.15636,"The quantum extension of classical finite elements, referred to as quantum finite elements ({\bf QFE})~\cite{Brower:2018szu,Brower:2016vsl}, is applied to the radial quantization of 3d $\phi^4$ theory on a simplicial lattice for the $\mathbb R \times \mathbb S^2$ manifold. Explicit counter terms to cancel the one- and two-loop ultraviolet defects are implemented to reach the quantum continuum theory. Using the Brower-Tamayo~\cite{Brower:1989mt} cluster Monte Carlo algorithm, numerical results support the QFE ansatz that the critical conformal field theory (CFT) is reached in the continuum with the full isometries of $\mathbb R \times \mathbb S^2$ restored. The Ricci curvature term, while technically irrelevant in the quantum theory, is shown to dramatically improve the convergence opening, the way for high precision Monte Carlo simulation to determine the CFT data: operator dimensions, trilinear OPE couplings and the central charge. ",Radial Lattice Quantization of 3D $\phi^4$ Field Theory,7,"['Cool new paper out \n\nI\'m really proud of this one!\n\nWhat\'s the idea: build a lattice version of ""radial quantization"" in order to study a theory like the 3d Ising model.', 'This is the nth in a series of papers my collaborators have been writing about doing lattice field theory on non-euclidean manifolds.', ""What is radial quantization?\n\nBasically it's like the conformal map that takes you from the plane to the sphere (really a cylinder with spherical cross section)"", ""Why is it interesting? Traditional euclidean lattice FT has discrete transnational symmetry, but isn't rotationally invariant and you can't see large scales without using huge lattices (computationally a pain!)"", 'Radial quantization includes exponentially large scales and we show our QFE procedure restores rotational symmetry.', 'One of the fun things we found is that you have to include a Ricci curvature term to get ""good"" results. This term vanishes in the continuum limit, but including it leads to much, much faster convergence.', 'The end result is that we are ready to do real high precision tests of the 3d Ising Model. This is a complimentary approach to things like the ""conformal bootstrap"". 
We are directly probing the specific theory of 3d Ising.']",20,06,1222
421,110,1304207633868402688,29251447,Tuan Do,"Where did the millions of stars at the Galactic center come from? In a new paper, we find evidence that some of the stars probably came from a globular cluster or dwarf galaxy that fell in and got trapped. 1/3 How do we know this? We find that stars with low metal abundance move differently around the black hole than the stars with more metals! It is very unusual if they formed in the same place, but could be explained if the stars fell in. In a companion paper, my theory colleagues used computer simulations throwing star clusters and small galaxies into the center of our galaxy to explore this process. The simulations show that this infall might have happened over 3 billion years ago. @ProfAnnikaPeter It was fun to be able to do some dynamical modeling again!",https://arxiv.org/abs/2009.02335,"The Milky Way nuclear star cluster (MW NSC) has been used as a template to understand the origin and evolution of galactic nuclei and the interaction of nuclear star clusters with supermassive black holes. It is the only nuclear star cluster with a supermassive black hole where we can resolve individual stars to measure their kinematics and metal abundance to reconstruct its formation history. Here, we present results of the first chemo-dynamical model of the inner 1 pc of the MW NSC using metallicity and radial velocity data from the KMOS spectrograph on the Very Large Telescope. We find evidence for two kinematically and chemically distinct components in this region. The majority of the stars belong to a previously known super-solar metallicity component with a rotation axis perpendicular to the Galactic plane. However, we identify a new kinematically distinct sub-solar metallicity component which contains about 7\% of the stars and appears to be rotating faster than the main component with a rotation axis that may be misaligned. This second component may be evidence for an infalling star cluster or remnants of a dwarf galaxy, merging with the MW NSC. These measurements show that the combination of chemical abundances with kinematics is a promising method to directly study the MW NSC's origin and evolution. ","Revealing the Formation of the Milky Way Nuclear Star Cluster via
Chemo-Dynamical Modeling",4,"['Where did the millions of stars at the Galactic center come from? In a new paper, we find evidence that some of the stars probably came from a globular cluster or dwarf galaxy that fell in and got trapped. \n\n1/3\n\n', 'How do we know this? We find that stars with low metal abundance move differently around the black hole than the stars with more metals! It is very unusual if they formed in the same place, but could be explained if the stars fell in. https://t.co/R0MEgRKntF', 'In a companion paper, my theory colleagues used computer simulations throwing star clusters and small galaxies into the center of our galaxy to explore this process. The simulations show that this infall might have happened over 3 billion years ago. \n\nhttps://t.co/zA05uMjVRS', '@ProfAnnikaPeter It was fun to be able to do some dynamical modeling again!']",20,09,793
422,178,1509083448690065414,957689165902118912,Alexandre Dauphin,"Fresh from the arXivs: We study the fate of the topological Mott Insulator in a cold-atom setup with dressed-Rydberg interactions. We derive the phase diagram in the presence of long-range interactions and study its stability with respect to temperature. We also verify with DMRG calculations that the quantum anomalous Hall phase also appears on quasi 2D ladders. Finally,we discuss realistic experimental parameters where the phase could be observed. It has been a nice collaboration with @lorenzocarda , S. Julià-Farré and M. Müller",https://arxiv.org/abs/2203.14818,"The interplay between many-body interactions and the kinetic energy gives rise to rich phase diagrams hosting, among others, interaction-induced topological phases. These phases are characterized by both a local order parameter and a global topological invariant, and can exhibit exotic ground states such as self-trapped polarons and interaction-induced edge states. In this work, we investigate a realistic scenario for the quantum simulation of such systems using cold Rydberg-dressed atoms in optical lattices. We consider spinless fermions on a checkerboard lattice, interacting via the tunable-range effective potential induced by the Rydberg dressing. We perform a detailed analysis of the phase diagram at half- and incommensurate fillings, in the mean-field approximation. We furthermore study the stability of the phases with respect to temperature within the mean-field approximation and with respect to quantum fluctuations using the density matrix renormalization group method. Finally, we propose an implementation protocol, and in particular identify attainable regimes of experimental parameters in which the topological properties of the model become accessible. Our work, thereby, opens a realistic pathway to the outstanding experimental observation of this predicted phase in state-of-the-art cold atom quantum simulators. ","Accessing the topological Mott insulator in cold atom quantum simulators
with realistic Rydberg dressing",2,"['Fresh from the arXivs: \nWe study the fate of the topological Mott Insulator in a cold-atom setup with dressed-Rydberg interactions. We derive the phase diagram in the presence of long-range interactions and study its stability with respect to temperature. ', 'We also verify with DMRG calculations that the quantum anomalous Hall phase also appears on quasi 2D ladders. Finally,we discuss realistic experimental parameters where the phase could be observed. It has been a nice collaboration with @lorenzocarda , S. Julià-Farré and M. Müller']",22,03,549
423,72,1339160692440567809,216729597,Marcel S. Pawlowski,"I'm super excited for our new paper today (by @VoltarCH, @lellifede, @KatjaFah, @satellitegalaxy et al.): The title already states our main finding: ""The coherent motion of Cen A dwarf satellite galaxies remains a challenge for ΛCDM cosmology"" The study is a follow-up on our 2018 paper () which cause some debate when we found that of the Centaurus A satellites galaxies which are arranged in a flattened structure, 14 out of 16 have velocities indicative of a rotating plane of satellites. This is highly unlikely to find in cosmological simulations; less than <0.5% of host galaxies should be surrounded by such an extreme satellite galaxy structure. Baryonic physics doesn't offer an easy way out, so this constitutes a severe challenge to the cosmological model. We now test our past findings after almost doubling the number of satellites with velocity information. Guess what: 1) the kinematic signal is still present (21/28 sats follow trend)! 2) Its tension with cosmological simulations (we now use Illustris TNG) is at the 0.2% level! @VoltarCH has a great thread with more details on the paper, so let me just add a couple outtakes. The orientation that maximizes the kinematic signal aligns with the spatial flattening of the satellite distribution, like a rotating satellite plane. Dashed line: major axis of sat distr. Thick green line: minor axis Dotted lines: indicate where kinematic coherence is maximized Furthermore, while we select the most luminous (or massive) satellite galaxies in the simulations for our main analysis, the results don't change if we instead randomly pick 28 out of N top-ranked sats. The tension with LCDM is robust and not only present for special satellites. One of the concerns regarding the original study was that there were only 16 satellites with known velocities, i.e.: ""we need more data!"" We got more data. It confirms our previous findings. So, observations don't make this LCDM tension go away in Centaurus A. Combined with similar structures around the MW and M31, we really need to think hard about what we might be missing in our cosmological simulations, or even in the underlying model.",https://arxiv.org/abs/2012.08138,"The plane-of-satellites problem is one of the most severe small-scale challenges for the standard $\Lambda$CDM cosmological model: several dwarf galaxies around the Milky Way and Andromeda co-orbit in thin, planar structures. A similar case has been identified around the nearby elliptical galaxy Centaurus A (Cen A). In this Letter, we study the satellite system of Cen A adding twelve new galaxies with line-of-sight velocities from VLT/MUSE observations. We find 21 out of 28 dwarf galaxies with measured velocities share a coherent motion. Similarly flattened and coherently moving structures are found only in 0.2% of Cen A analogs in the Illustris-TNG100 cosmological simulation, independently of whether we use its dark-matter-only or hydrodynamical run. These analogs are not co-orbiting, and arise only by chance projection, thus they are short-lived structures in such simulations. Our findings indicate that the observed co-rotating planes of satellites are a persistent challenge for $\Lambda$CDM, which is largely independent from baryon physics. ","The coherent motion of Cen A dwarf satellite galaxies remains a
challenge for $\Lambda$CDM cosmology",9,"['I\'m super excited for our new paper today (by @VoltarCH, @lellifede, @KatjaFah, @satellitegalaxy et al.): \n\nThe title already states our main finding: ""The coherent motion of Cen A dwarf satellite galaxies remains a challenge for ΛCDM cosmology"" ', 'The study is a follow-up on our 2018 paper (https://t.co/hour7mFDYi) which cause some debate when we found that of the Centaurus A satellites galaxies which are arranged in a flattened structure, 14 out of 16 have velocities indicative of a rotating plane of satellites. https://t.co/dCaL47xDDy', ""This is highly unlikely to find in cosmological simulations; less than <0.5% of host galaxies should be surrounded by such an extreme satellite galaxy structure. Baryonic physics doesn't offer an easy way out, so this constitutes a severe challenge to the cosmological model. https://t.co/eoR7QtdvVX"", 'We now test our past findings after almost doubling the number of satellites with velocity information. Guess what: \n\n1) the kinematic signal is still present (21/28 sats follow trend)!\n\n2) Its tension with cosmological simulations (we now use Illustris TNG) is at the 0.2% level! https://t.co/vV9MAPlguG', '@VoltarCH has a great thread with more details on the paper, so let me just add a couple outtakes.\n\nhttps://t.co/3xClZkUzxI', 'The orientation that maximizes the kinematic signal aligns with the spatial flattening of the satellite distribution, like a rotating satellite plane.\n\nDashed line: major axis of sat distr.\nThick green line: minor axis\nDotted lines: indicate where kinematic coherence is maximized https://t.co/eurZFFXcqk', ""Furthermore, while we select the most luminous (or massive) satellite galaxies in the simulations for our main analysis, the results don't change if we instead randomly pick 28 out of N top-ranked sats. The tension with LCDM is robust and not only present for special satellites. https://t.co/5g48bBT95R"", 'One of the concerns regarding the original study was that there were only 16 satellites with known velocities, i.e.: ""we need more data!""\n\nWe got more data. \n\nIt confirms our previous findings.', ""So, observations don't make this LCDM tension go away in Centaurus A. Combined with similar structures around the MW and M31, we really need to think hard about what we might be missing in our cosmological simulations, or even in the underlying model.""]",20,12,2215
424,173,1488434222775902210,1364749022,Haitham Bou Ammar,"Designing fast antibodies is critical in successfully countering diseases. In this paper, we collaborate with @victorgreiff team and with @DanyBouAmmar from @AUBMC_Official to propose an effective Bayesian optimisation solver: We show: 1. A new way to search for antibody CDRH3s using Bayesian optimisation with various kernels, including Protein Bert warps. 2. We devise a constraint acquisition maximiser that finds new CDRH3 within a trust region while being feasible w.r.t antibody development constraints. 3. We show high-affinity and super+ high-affinity binders in only 38 and 85 trials, respectively. @ImanisMind @KhanAsif__",https://arxiv.org/abs/2201.12570,"Antibodies are canonically Y-shaped multimeric proteins capable of highly specific molecular recognition. The CDRH3 region located at the tip of variable chains of an antibody dominates antigen-binding specificity. Therefore, it is a priority to design optimal antigen-specific CDRH3 regions to develop therapeutic antibodies to combat harmful pathogens. However, the combinatorial nature of CDRH3 sequence space makes it impossible to search for an optimal binding sequence exhaustively and efficiently, especially not experimentally. Here, we present AntBO: a Combinatorial Bayesian Optimisation framework enabling efficient in silico design of the CDRH3 region. Ideally, antibodies should bind to their target antigen and be free from any harmful outcomes. Therefore, we introduce the CDRH3 trust region that restricts the search to sequences with feasible developability scores. To benchmark AntBO, we use the Absolut! software suite as a black-box oracle because it can score the target specificity and affinity of designed antibodies in silico in an unconstrained fashion. The results across 188 antigens demonstrate the benefit of AntBO in designing CDRH3 regions with diverse biophysical properties. In under 200 protein designs, AntBO can suggest antibody sequences that outperform the best binding sequence drawn from 6.9 million experimentally obtained CDRH3s and a commonly used genetic algorithm baseline. Additionally, AntBO finds very-high affinity CDRH3 sequences in only 38 protein designs whilst requiring no domain knowledge. We conclude AntBO brings automated antibody design methods closer to what is practically viable for in vitro experimentation. ","AntBO: Towards Real-World Automated Antibody Design with Combinatorial
Bayesian Optimisation",5,"['Designing fast antibodies is critical in successfully countering diseases. In this paper, we collaborate with @victorgreiff team and with @DanyBouAmmar from @AUBMC_Official to propose an effective Bayesian optimisation solver: ', 'We show: \n1. A new way to search for antibody CDRH3s using Bayesian optimisation with various kernels, including Protein Bert warps.', '2. We devise a constraint acquisition maximiser that finds new CDRH3 within a trust region while being feasible w.r.t antibody development constraints.', '3. We show high-affinity and super+ high-affinity binders in only 38 and 85 trials, respectively.', '@ImanisMind @KhanAsif__']",22,01,646
425,194,1458442467011878917,186701821,Aldo Pacchiano,"(1/3 ) Happy to finally post this paper on arxiv! In this work we propose an algorithm with logarithmic instance dependent regret guarantees for the Multi-Player Multi-Armed bandit problem. (2/3) Our results are based on two innovations. First, we show that a simple modification to a successive elimination strategy can be used by the players to estimate their suboptimality gaps, up to constant factors, in the absence of collisions. (3/3) Second, we leverage the first result to design a communication protocol that successfully uses the small reward of collisions to coordinate among players, while preserving meaningful instance-dependent logarithmic regret guarantees.",https://arxiv.org/abs/2111.04873,"We study the problem of information sharing and cooperation in Multi-Player Multi-Armed bandits. We propose the first algorithm that achieves logarithmic regret for this problem. Our results are based on two innovations. First, we show that a simple modification to a successive elimination strategy can be used to allow the players to estimate their suboptimality gaps, up to constant factors, in the absence of collisions. Second, we leverage the first result to design a communication protocol that successfully uses the small reward of collisions to coordinate among players, while preserving meaningful instance-dependent logarithmic regret guarantees. ","An Instance-Dependent Analysis for the Cooperative Multi-Player
Multi-Armed Bandit",3,"['(1/3 ) Happy to finally post this paper on arxiv! In this work we propose an algorithm with logarithmic instance dependent regret guarantees for the Multi-Player Multi-Armed bandit problem. \n', '(2/3) Our results are based on two innovations. First, we show that a simple modification to a successive elimination strategy can be used by the players to estimate their suboptimality gaps, up to constant factors, in the absence of collisions.', '(3/3) Second, we leverage the first result to design a communication protocol that successfully uses the small reward of collisions to coordinate among players, while preserving meaningful instance-dependent logarithmic regret guarantees.']",21,11,681
426,51,1217983086136303616,104529881,Diogo Souto,"Our new paper is now available on the arXiv! ""Stellar Characterization of M-dwarfs from the @APOGEEsurvey Survey: A Calibrator Sample for the M-dwarf Metallicities"" @APOGEEsurvey We studied 21 M-dwarfs where 11 are in binaries systems with a warmer companion and the other 10 had interferometric measurements of the stellar radius from the literature. @APOGEEsurvey We determine atmospheric parameters and metallicity for all stars based only on the spectra. The metallicity results compare pretty well with the literature for the warmer stars. @APOGEEsurvey The results presented here will be used to future calibrations of the APOGEE/ASPCAP pipeline, if we need so. Check it out! =D",https://arxiv.org/abs/2001.05597,"We present spectroscopic determinations of the effective temperatures, surface gravities and metallicities for 21 M-dwarfs observed at high-resolution (R $\sim$ 22,500) in the \textit{H}-band as part of the SDSS-IV APOGEE survey. The atmospheric parameters and metallicities are derived from spectral syntheses with 1-D LTE plane parallel MARCS models and the APOGEE atomic/molecular line list, together with up-to-date H$_{2}$O and FeH molecular line lists. Our sample range in $T_{\rm eff}$ from $\sim$ 3200 to 3800K, where eleven stars are in binary systems with a warmer (FGK) primary, while the other 10 M-dwarfs have interferometric radii in the literature. We define an $M_{K_{S}}$--Radius calibration based on our M-dwarf radii derived from the detailed analysis of APOGEE spectra and Gaia DR2 distances, as well as a mass-radius relation using the spectroscopically-derived surface gravities. A comparison of the derived radii with interferometric values from the literature finds that the spectroscopic radii are slightly offset towards smaller values, with $\Delta$ = -0.01 $\pm$ 0.02 $R{\star}$/$R_{\odot}$. In addition, the derived M-dwarf masses based upon the radii and surface gravities tend to be slightly smaller (by $\sim$5-10\%) than masses derived for M-dwarf members of eclipsing binary systems for a given stellar radius. The metallicities derived for the 11 M-dwarfs in binary systems, compared to metallicities obtained for their hotter FGK main-sequence primary stars from the literature, shows excellent agreement, with a mean difference of [Fe/H](M-dwarf - FGK primary) = +0.04 $\pm$ 0.18 dex, confirming the APOGEE metallicity scale derived here for M-dwarfs. ","Stellar Characterization of M-dwarfs from the APOGEE Survey: A
Calibrator Sample for the M-dwarf Metallicities",4,"['Our new paper is now available on the arXiv!\n""Stellar Characterization of M-dwarfs from the \n@APOGEEsurvey\n Survey: A Calibrator Sample for the M-dwarf Metallicities""', '@APOGEEsurvey We studied 21 M-dwarfs where 11 are in binaries systems with a warmer companion and the other 10 had interferometric measurements of the stellar radius from the literature.', '@APOGEEsurvey We determine atmospheric parameters and metallicity for all stars based only on the spectra. The metallicity results compare pretty well with the literature for the warmer stars.', '@APOGEEsurvey The results presented here will be used to future calibrations of the APOGEE/ASPCAP pipeline, if we need so. \nCheck it out! =D']",20,01,694
427,109,1414969834115411968,29251447,Thibaut Vidal,"New exponential-size neighborhoods for the pickup-and-delivery problem. Optimal solutions are attained for most problems with up to 30 nodes in a *single* local-search descent, giving 1,000x speedup over previous algorithms! Check our paper here: #ORMS work with Toni Pacheco, Rafael Martinelli (@rafaelmartinel8), Tulio Toffolo (@tuliotoffolo) and Anand Subramanian (@Subjectto_) @pacheco_toni",https://arxiv.org/abs/2107.05189,"Neighborhood search is a cornerstone of state-of-the-art traveling salesman and vehicle routing metaheuristics. While neighborhood exploration procedures are well developed for problems with individual services, their counterparts for one-to-one pickup-and-delivery problems have been more scarcely studied. A direct extension of classic neighborhoods is often inefficient or complex due to the necessity of jointly considering service pairs. To circumvent these issues, we introduce major improvements to existing neighborhood searches for the pickup-and-delivery traveling salesman problem and new large neighborhoods. We show that the classical Relocate-Pair neighborhood can be fully explored in $O(n^2)$ instead of $O(n^3)$ time. We adapt the 4-Opt and Balas-Simonetti neighborhoods to consider precedence constraints. Moreover, we introduce an exponential-size neighborhood called 2k-Opt, which includes all solutions generated by multiple nested 2-Opts and can be searched in $O(n^2)$ time using dynamic programming. We conduct extensive computational experiments, highlighting the significant contribution of these new neighborhoods and speed-up strategies within two classical metaheuristics. Notably, our approach permits to repeatedly solve small pickup-and-delivery problem instances to optimality or near-optimality within milliseconds, and therefore it represents a valuable tool for time-critical applications such as meal delivery or mobility of demand. ","Exponential-Size Neighborhoods for the Pickup-and-Delivery Traveling
Salesman Problem",3,"['New exponential-size neighborhoods for the pickup-and-delivery problem. Optimal solutions are attained for most problems with up to 30 nodes in a *single* local-search descent, giving 1,000x speedup over previous algorithms! Check our paper here:\n\n#ORMS ', 'work with Toni Pacheco, Rafael Martinelli (@rafaelmartinel8), Tulio Toffolo (@tuliotoffolo) and Anand Subramanian (@Subjectto_)', '@pacheco_toni']",21,07,409
428,41,1219805362133569536,36438289,Dr. Patrícia,My new paper is now on Arxiv: it is about NGC 613! One of the richest galactic nuclei! Meu paper está no Arxiv: é sobre NGC 613! Um dos núcleos galácticos mais ricos já estudado! #astronomy #MNRAS #astroph #arxiv #agn #astronomia #astrophysics,https://arxiv.org/abs/2001.07328,"In this paper, we report a detailed study with a variety of data from optical, near-infrared, X-ray, and radio telescopes of the nuclear region of the galaxy NGC 613 with the aim of understanding its complexity. We detected an extended stellar emission in the nucleus that, at first, appears to be, in the optical band, two stellar nuclei separated by a stream of dust. The active galactic nucleus (AGN) is identified as a variable point-like source between these two stellar components. There is a central hard X-ray emission and an extended soft X-ray emission that closely coincides with the ionization cone, as seen in the [O III]$\lambda$5007 emission. The centroid of the [O I]$\lambda$6300 emission does not coincide with the AGN, being shifted by 0.24 arcsec towards the ionization cone; this shift is probably caused by a combination of differential dust extinction together with emission and reflection in the ionization cone. The optical spectra extracted from the central region are typical of low-ionization nuclear emission-line regions. We also identify 10 H II regions, eight of them in a star forming ring that is visible in Br$\gamma$, [Fe II]$\lambda$16436 and molecular CO(3-2) images observed in previous studies. Such a ring also presents weak hard X-ray emission, probably associated with supernova remnants, not detected in other studies. The position of the AGN coincides with the centre of a nuclear spiral (detected in previous works) that brings gas and dust from the bar to the nucleus, causing the high extinction in this area. ",The nuclear region of NGC 613. I -- Multiwavelength analysis,1,['My new paper is now on Arxiv: it is about NGC 613! One of the richest galactic nuclei! \n\nMeu paper está no Arxiv: é sobre NGC 613! Um dos núcleos galácticos mais ricos já estudado!\n\n\n\n#astronomy #MNRAS #astroph #arxiv #agn #astronomia #astrophysics'],20,01,251
429,51,1519025647540224026,40299444,Alexey Petrov,"A new paper! I propose a new state, the glueball molecule, which is a molecular state of a glueball and an ordinary meson. I also study phenomenology of such states. See it at @Jordy_de_Vries Hi Jordy. Yes, indeed, I cannot predict the absolute value for the binding energy because of that counter term. So, just like in deuteron, I assume that pi(1800) is a molecule and get \lambda_R. But once I do, I can do things for other mesons… @Jordy_de_Vries So I use the formalism only to establish that there is a pole, I.e. a bound state @Jordy_de_Vries But I don't think that this system is as fine-tuned as a deuteron or an X(3872)",https://arxiv.org/abs/2204.11269,"Experimental searches for pure glueball states have proven challenging and so far yielded no results. This is believed to occur because glueballs mix with the ordinary $q\bar q$ states with the same quantum numbers. We will discuss an alternative mechanism, the formation of the glueball-meson molecular states. We will argue that the wave functions of already observed excited meson states may contain a significant part due to such molecular states. We discuss the phenomenology of glueball molecules and comment on a possible charmless component of the $XYZ$ states. ",Glueball molecules,4,"['A new paper! I propose a new state, the glueball molecule, which is a molecular state of a glueball and an ordinary meson. I also study phenomenology of such states. See it at ', '@Jordy_de_Vries Hi Jordy. Yes, indeed, I cannot predict the absolute value for the binding energy because of that counter term. So, just like in deuteron, I assume that pi(1800) is a molecule and get \\lambda_R. But once I do, I can do things for other mesons…', '@Jordy_de_Vries So I use the formalism only to establish that there is a pole, I.e. a bound state', ""@Jordy_de_Vries But I don't think that this system is as fine-tuned as a deuteron or an X(3872)""]",22,04,643
430,37,1088106875617464320,192826908,Jorge Lillo-Box,"Is TOI-178.01-02 the first pair of #coorbital exoplanets? In this new TROY paper by Adrien Leleu, we analyze this intriguing system detected by @NASA_TESS with a potential 3:2:2 resonance planetary system. Thread (1/4) Three planets were detected by @NASA_TESS around the star TOI-178. The outer two planet candidates displayed very similar periods of 9.95d and 10.35d which transit 60º apart.... (2/4) The inner planet is located in a 3:2 resonance with other two. We demonstrate that if the planets are confirmed, only the co-orbital scenario for the outer planets can maintain this system long-term stable (Credit image: Helena Morais & Fathi Namouni) (3/4) Depending on the mass ratio of the two coorbital planets (m1 and m2) and the star (m0), they would be configured in tadpole (orange region) or horshoe (purple region) orbits. (4/4) And this is not the only system with these characteristics! Other systems from the Kepler mission were discarded because of this weird similar-period pair. In this paper we demonstrate that those system can exist and provide the tool to confirm their nature through TTVs! ",https://arxiv.org/abs/1901.07250,"Despite the existence of co-orbital bodies in the solar system, and the prediction of the formation of co-orbital planets by planetary system formation models, no co-orbital exoplanets (also called trojans) have been detected thus far. Here we study the signature of co-orbital exoplanets in transit surveys when two planet candidates in the system orbit the star with similar periods. Such pair of candidates could be discarded as false positives because they are not Hill-stable. However, horseshoe or long libration period tadpole co-orbital configurations can explain such period similarity. This degeneracy can be solved by considering the Transit Timing Variations (TTVs) of each planet. We then focus on the three planet candidates system TOI-178: the two outer candidates of that system have similar orbital period and had an angular separation near $\pi/3$ during the TESS observation of sector 2. Based on the announced orbits, the long-term stability of the system requires the two close-period planets to be co-orbitals. Our independent detrending and transit search recover and slightly favour the three orbits close to a 3:2:2 resonant chain found by the TESS pipeline, although we cannot exclude an alias that would put the system close to a 4:3:2 configuration. We then analyse in more detail the co-orbital scenario. We show that despite the influence of an inner planet just outside the 2:3 mean-motion resonance, this potential co-orbital system can be stable on the Giga-year time-scale for a variety of planetary masses, either on a trojan or a horseshoe orbit. We predict that large TTVs should arise in such configuration with a period of several hundred days. We then show how the mass of each planet can be retrieved from these TTVs. ",Co-orbital exoplanets from close period candidates: The TOI-178 case,5,"['Is TOI-178.01-02 the first pair of #coorbital exoplanets? In this new TROY paper by Adrien Leleu, we analyze this intriguing system detected by @NASA_TESS with a potential 3:2:2 resonance planetary system. Thread', '(1/4) Three planets were detected by @NASA_TESS around the star TOI-178. The outer two planet candidates displayed very similar periods of 9.95d and 10.35d which transit 60º apart.... https://t.co/Y1rPV7OzRj', '(2/4) The inner planet is located in a 3:2 resonance with other two. 
We demonstrate that if the planets are confirmed, only the co-orbital scenario for the outer planets can maintain this system long-term stable (Credit image: Helena Morais & Fathi Namouni) https://t.co/jqeeMIATJ7', '(3/4) Depending on the mass ratio of the two coorbital planets (m1 and m2) and the star (m0), they would be configured in tadpole (orange region) or horshoe (purple region) orbits. https://t.co/CLNMd8DyeG', '(4/4) And this is not the only system with these characteristics! Other systems from the Kepler mission were discarded because of this weird similar-period pair. In this paper we demonstrate that those system can exist and provide the tool to confirm their nature through TTVs! https://t.co/f6jI6BxzJg']",19,01,1150
431,24,953599022870261760,27047369,CĂ©sar A. Hidalgo,"New paper exploring (i) how individual countries jump in the product space, (ii) when they jump far (answer: at an intermediate level of development), and (iii) whether jumping far is beneficial for economic growth (it provides a slight short term boost). ",https://arxiv.org/abs/1801.05352,"Countries tend to diversify their exports by entering products that are related to their current exports. Yet this average behavior is not representative of every diversification path. In this paper, we introduce a method to identify periods when countries enter unrelated products. We analyze the economic diversification paths of 93 countries between 1965 and 2014 and find that countries enter unrelated products in only about 7.2% of all observations. We find that countries enter more unrelated products when they are at an intermediate level of economic development, and when they have higher levels of human capital. Finally, we ask whether countries entering more unrelated products grow faster than those entering only related products. The data shows that countries that enter more unrelated activities experience a small but significant increase in future economic growth, compared to countries with a similar level of income, human capital, capital stock per worker, and economic complexity. ","Shooting High or Low: Do Countries Benefit from Entering Unrelated
Activities?",1,"['New paper exploring (i) how individual countries jump in the product space, (ii) when they jump far (answer: at an intermediate level of development), and (iii) whether jumping far is beneficial for economic growth (it provides a slight short term boost). ']",18,01,269
432,59,1329481995542552579,1240355312,Yu Su,"GrailQA: A new large-scale, high-quality dataset for QA on knowledge bases - 64K questions over 3.7K relations across 86 domains - Support evaluation of i.i.d., compositional, and zero-shot generalization paper: leaderboard: My student @yugu_nlp will discuss the work at the Interactive and Executable Semantic Parsing Workshop@EMNLP (@intexsempar2020) at 23:00 (UTC) / 18:00 (EST). Joint work with Sue Kase, Michelle Vanni, Brian Sadler, @percyliang, Xifeng Yan We also propose new BERT-based KBQA models and demonstrate, for the first time, the critical role of joint contextualized representation of utterances and KB schema for non-i.i.d. generalization of KBQA models! More details in the paper.",https://arxiv.org/abs/2011.07743,"Existing studies on question answering on knowledge bases (KBQA) mainly operate with the standard i.i.d assumption, i.e., training distribution over questions is the same as the test distribution. However, i.i.d may be neither reasonably achievable nor desirable on large-scale KBs because 1) true user distribution is hard to capture and 2) randomly sample training examples from the enormous space would be highly data-inefficient. Instead, we suggest that KBQA models should have three levels of built-in generalization: i.i.d, compositional, and zero-shot. To facilitate the development of KBQA models with stronger generalization, we construct and release a new large-scale, high-quality dataset with 64,331 questions, GrailQA, and provide evaluation settings for all three levels of generalization. In addition, we propose a novel BERT-based KBQA model. The combination of our dataset and model enables us to thoroughly examine and demonstrate, for the first time, the key role of pre-trained contextual embeddings like BERT in the generalization of KBQA. ","Beyond I.I.D.: Three Levels of Generalization for Question Answering on
Knowledge Bases",3,"['GrailQA: A new large-scale, high-quality dataset for QA on knowledge bases\n\n- 64K questions over 3.7K relations across 86 domains\n- Support evaluation of i.i.d., compositional, and zero-shot generalization\n\npaper: \nleaderboard: ', 'My student @yugu_nlp will discuss the work at the Interactive and Executable Semantic Parsing Workshop@EMNLP (@intexsempar2020) at 23:00 (UTC) / 18:00 (EST).\n\nJoint work with Sue Kase, Michelle Vanni, Brian Sadler, @percyliang, Xifeng Yan', 'We also propose new BERT-based KBQA models and demonstrate, for the first time, the critical role of joint contextualized representation of utterances and KB schema for non-i.i.d. generalization of KBQA models! More details in the paper.']",20,11,714
433,214,1277496891371028481,293983295,Vincent Traag,"In this preprint we study the agreement between metrics and peer review at the institutional level and also quantify the internal agreement of peer review. Our analysis is based on a sample of the Italian VQR exercise organised by @anvur_ita The agreement between metrics and peer review is on par with the internal agreement among two reviewers for certain fields of science. (Figure shows median absolute differences on a scale 3-30) The level of agreement is generally lower at the publication level level than at the institutional level, for both metrics and peer review. This work follows up on earlier work on the UK REF All data for replication is available from ",https://arxiv.org/abs/2006.14830,"In the past decades, many countries have started to fund academic institutions based on the evaluation of their scientific performance. In this context, peer review is often used to assess scientific performance. Bibliometric indicators have been suggested as an alternative. A recurrent question in this context is whether peer review and metrics tend to yield similar outcomes. In this paper, we study the agreement between bibliometric indicators and peer review at the institutional level. Additionally, we also quantify the internal agreement of peer review at the institutional level. We find that the level of agreement is generally higher at the institutional level than at the publication level. Overall, the agreement between metrics and peer review is on par with the internal agreement among two reviewers for certain fields of science. This suggests that for some fields, bibliometric indicators may possibly be considered as an alternative to peer review for national research assessment exercises. ",Metrics and peer review agreement at the institutional level,5,"['In this preprint we study the agreement between metrics and peer review at the institutional level and also quantify the internal agreement of peer review. Our analysis is based on a sample of the Italian VQR exercise organised by @anvur_ita ', 'The agreement between metrics and peer review is on par with the internal agreement among two reviewers for certain fields of science. (Figure shows median absolute differences on a scale 3-30) https://t.co/xOM5Orfgjh', 'The level of agreement is generally lower at the publication level level than at the institutional level, for both metrics and peer review. https://t.co/BU5WfiXH9J', 'This work follows up on earlier work on the UK REF https://t.co/DR6e4o3nP4', 'All data for replication is available from\nhttps://t.co/8fj8WxPc3s']",20,06,704
434,106,1324789973682282497,888216099757490176,Maithra Raghu,"New paper Teaching with Commentaries We introduce commentaries, metalearned information to help neural net training & give insights on learning process, dataset & model representations Led by @RaghuAniruddh & w/ @skornblith @DavidDuvenaud @geoffreyhinton ",https://arxiv.org/abs/2011.03037,"Effective training of deep neural networks can be challenging, and there remain many open questions on how to best learn these models. Recently developed methods to improve neural network training examine teaching: providing learned information during the training process to improve downstream model performance. In this paper, we take steps towards extending the scope of teaching. We propose a flexible teaching framework using commentaries, learned meta-information helpful for training on a particular task. We present gradient-based methods to learn commentaries, leveraging recent work on implicit differentiation for scalability. We explore diverse applications of commentaries, from weighting training examples, to parameterising label-dependent data augmentation policies, to representing attention masks that highlight salient image regions. We find that commentaries can improve training speed and/or performance, and provide insights about the dataset and training process. We also observe that commentaries generalise: they can be reused when training new models to obtain performance benefits, suggesting a use-case where commentaries are stored with a dataset and leveraged in future for improved model training. ",Teaching with Commentaries,1,"['New paper Teaching with Commentaries \n\nWe introduce commentaries, metalearned information to help neural net training & give insights on learning process, dataset & model representations\n\nLed by @RaghuAniruddh & w/ @skornblith @DavidDuvenaud @geoffreyhinton ']",20,11,268
435,90,960912151736090624,308587014,Robert Feldt,"Our latest results on Psychological Safety (PS) seems to support Google's result that PS is important in SW teams. We also find clarity of team norms to be critical. Not yet peer-reviewed (but if you want to be cutting edge :)): Data is from 200+ individuals, 38 teams and 5 SW companies. All in Sweden. :) Independent vars are (self-assessed) team performance & job satisfaction. Both psych safety & norm clarity are strong predictors especially with job satisfaction. Note! Not content of norms but if they are clear/explicit/shared. @1danilo Yes, it has been part of social science research on teams for long but we have not found much empirical investigstion of it in SE. If you've seen anything please share.",https://arxiv.org/abs/1802.01378,"In the software engineering industry today, companies primarily conduct their work in teams. To increase organizational productivity, it is thus crucial to know the factors that affect team effectiveness. Two team-related concepts that have gained prominence lately are psychological safety and team norms. Still, few studies exist that explore these in a software engineering context. Therefore, with the aim of extending the knowledge of these concepts, we examined if psychological safety and team norm clarity associate positively with software developers' self-assessed team performance and job satisfaction, two important elements of effectiveness. We collected industry survey data from practitioners (N = 217) in 38 development teams working for five different organizations. The result of multiple linear regression analyses indicates that both psychological safety and team norm clarity predict team members' self-assessed performance and job satisfaction. The findings also suggest that clarity of norms is a stronger (30\% and 71\% stronger, respectively) predictor than psychological safety. This research highlights the need to examine, in more detail, the relationship between social norms and software development. The findings of this study could serve as an empirical baseline for such, future work. ",Psychological Safety and Norm Clarity in Software Engineering Teams,4,"[""Our latest results on Psychological Safety (PS) seems to support Google's result that PS is important in SW teams. We also find clarity of team norms to be critical. Not yet peer-reviewed (but if you want to be cutting edge :)): "", 'Data is from 200+ individuals, 38 teams and 5 SW companies. All in Sweden. :)', 'Independent vars are (self-assessed) team performance & job satisfaction. Both psych safety & norm clarity are strong predictors especially with job satisfaction. \nNote! Not content of norms but if they are clear/explicit/shared.', '@1danilo Yes, it has been part of social science research on teams for long but we have not found much empirical investigstion of it in SE. If you've seen anything please share.']",18,02,720
436,49,1384691983214592004,310916361,Farnik Nikakhtar,"🚨New paper alert🚨 delighted to share our new paper ""New families in our Solar neighborhood: applying Gaussian Mixture models for objective classification of structures in the Milky Way and in simulations""

 We have constructed a Gaussian mixture model for both simulated (Gaia-APOGEE mocks from FIRE-2) and observed (Gaia-APOGEE DR16) stars in the solar neighborhood using velocities and metallicities; in both catalogs, the best-fit GMM uses *five* independent components! This paper also serves as the data release paper for the new APOGEE-Gaia synthetic surveys using ananke (), which will be made public as a value-added catalog with SDSS-IV DR17 Many thanks to my co-authors @astrorobyn; @AndrewWetzel; @sloebman; @sanjib_sharma1; @rareflwr41; @ted_mackereth; Vijith Jacob Poovelil; @zastronomerski; @anabonaca; @_sarahmartell_; Henrik Jönsson; and Claude-Andre Faucher-Giguere",https://arxiv.org/abs/2104.08394,"The standard picture of galaxy formation motivates the decomposition of the Milky Way into 3--4 stellar populations with distinct kinematic and elemental abundance distributions: the thin disk, thick disk, bulge, and stellar halo. To test this idea, we construct a Gaussian mixture model (GMM) for both simulated and observed stars in the Solar neighborhood, using measured velocities and iron abundances (i.e., an augmented Toomre diagram) as the distributions to be decomposed. We compare results for the Gaia-APOGEE DR16 crossmatch catalog of the Solar neighborhood with those from a suite of synthetic Gaia-APOGEE crossmatches constructed from FIRE-2 cosmological simulations of Milky Way-mass galaxies. We find that in both the synthetic and real data, the best-fit GMM uses five independent components, some of whose properties resemble the standard populations predicted by galaxy formation theory. Two components can be identified unambiguously as the thin disk and another as the halo. However, instead of a single counterpart to the thick disk, there are three intermediate components with different age and alpha abundance distributions (although these data are not used to construct the model). We use decompositions of the synthetic data to show that the classified components indeed correspond to stars with different origins. By analogy with the simulated data, we show that our mixture model of the real Gaia-APOGEE crossmatch distinguishes the following components: (1) a classic thin disk of young stars on circular orbits (46%), (2) thin disk stars heated by interactions with satellites (22%), (3, 4) two components representing the velocity asymmetry of the alpha-enhanced thick disk (27%), and (5) a stellar halo consistent with early, massive accretion (4%). ","New families in our Solar neighborhood: applying Gaussian Mixture models
for objective classification of structures in the Milky Way and in
simulations",4,"['🚨New paper alert🚨 delighted to share our new paper ""New families in our Solar neighborhood: applying Gaussian Mixture models for objective classification of structures in the Milky Way and in simulations""\n\n', 'We have constructed a Gaussian mixture model for both simulated (Gaia-APOGEE mocks from FIRE-2) and observed (Gaia-APOGEE DR16) stars in the solar neighborhood using velocities and metallicities; in both catalogs, the best-fit GMM uses *five* independent components! https://t.co/5cvwLSBigP', 'This paper also serves as the data release paper for the new APOGEE-Gaia synthetic surveys using ananke (https://t.co/7rXQ318tb0), which will be made public as a value-added catalog with SDSS-IV DR17', 'Many thanks to my co-authors @astrorobyn; @AndrewWetzel; @sloebman; @sanjib_sharma1; @rareflwr41; @ted_mackereth; Vijith Jacob Poovelil; @zastronomerski; @anabonaca; @_sarahmartell_; Henrik Jönsson; and Claude-Andre Faucher-Giguere']",21,04,900
437,134,1370703967437557767,1339309323839700992,Shuaiwen Leon Song,Our paper on system-level design for future virtual reality systems will appear at @ASPLOSConf. This work offers a new class of intelligent mobile/cloud collaborative systems that enable future high-quality low-latency planet-scale VR experiences. @drmbt ,https://arxiv.org/abs/2102.13191,"High Quality Mobile Virtual Reality (VR) is what the incoming graphics technology era demands: users around the world, regardless of their hardware and network conditions, can all enjoy the immersive virtual experience. However, the state-of-the-art software-based mobile VR designs cannot fully satisfy the realtime performance requirements due to the highly interactive nature of user's actions and complex environmental constraints during VR execution. Inspired by the unique human visual system effects and the strong correlation between VR motion features and realtime hardware-level information, we propose Q-VR, a novel dynamic collaborative rendering solution via software-hardware co-design for enabling future low-latency high-quality mobile VR. At software-level, Q-VR provides flexible high-level tuning interface to reduce network latency while maintaining user perception. At hardware-level, Q-VR accommodates a wide spectrum of hardware and network conditions across users by effectively leveraging the computing capability of the increasingly powerful VR hardware. Extensive evaluation on real-world games demonstrates that Q-VR can achieve an average end-to-end performance speedup of 3.4x (up to 6.7x) over the traditional local rendering design in commercial VR devices, and a 4.1x frame rate improvement over the state-of-the-art static collaborative rendering. ","Q-VR: System-Level Design for Future Mobile Collaborative Virtual
Reality",1,['Our paper on system-level design for future virtual reality systems will appear at @ASPLOSConf. This work offers a new class of intelligent mobile/cloud collaborative systems that enable future high-quality low-latency planet-scale VR experiences. @drmbt \n\n'],21,02,262
438,3,1379530104636796930,92182169,Ian Manchester,"Koopman operator and contraction analysis are two frameworks for exact analysis of nonlinear dynamical systems (hard) via analysis of linear dynamical systems (easy). In our new paper, we show that they are in fact equivalent for a wide class of problems: ",https://arxiv.org/abs/2103.15033,"In this paper we prove new connections between two frameworks for analysis and control of nonlinear systems: the Koopman operator framework and contraction analysis. Each method, in different ways, provides exact and global analyses of nonlinear systems by way of linear systems theory. The main results of this paper show equivalence between contraction and Koopman approaches for a wide class of stability analysis and control design problems. In particular: stability or stablizability in the Koopman framework implies the existence of a contraction metric (resp. control contraction metric) for the nonlinear system. Further in certain cases the converse holds: contraction implies the existence of a set of observables with which stability can be verified via the Koopman framework. We provide results for the cases of autonomous and time-varying systems, as well as orbital stability of limit cycles. Furthermore, the converse claims are based on a novel relation between the Koopman method and construction of a Kazantzis-Kravaris-Luenberger observer. We also provide a byproduct of the main results, that is, a new method to learn contraction metrics from trajectory data via linear system identification. ","On the equivalence of contraction and Koopman approaches for nonlinear
stability and control",1,"['Koopman operator and contraction analysis are two frameworks for exact analysis of nonlinear dynamical systems (hard) via analysis of linear dynamical systems (easy). In our new paper, we show that they are in fact equivalent for a wide class of problems:\n']",21,03,262
439,27,1075059407258554369,3491609373,David Zimmerer,Excited to share our new paper on Unsupervised Anomaly Detection @MIC_DKFZ. Able to simultaneously detect and localize anomalies in medical images and achieves new unsupervised SoTA performance on two public segmentation datasets. Check it out: ,https://arxiv.org/abs/1812.05941,"Unsupervised learning can leverage large-scale data sources without the need for annotations. In this context, deep learning-based auto encoders have shown great potential in detecting anomalies in medical images. However, state-of-the-art anomaly scores are still based on the reconstruction error, which lacks in two essential parts: it ignores the model-internal representation employed for reconstruction, and it lacks formal assertions and comparability between samples. We address these shortcomings by proposing the Context-encoding Variational Autoencoder (ceVAE) which combines reconstruction- with density-based anomaly scoring. This improves the sample- as well as pixel-wise results. In our experiments on the BraTS-2017 and ISLES-2015 segmentation benchmarks, the ceVAE achieves unsupervised ROC-AUCs of 0.95 and 0.89, respectively, thus outperforming state-of-the-art methods by a considerable margin. ","Context-encoding Variational Autoencoder for Unsupervised Anomaly
Detection",1,['Excited to share our new paper on Unsupervised Anomaly Detection đ @MIC_DKFZ. Able to simultaneously detect and localize anomalies in medical images and achieves new unsupervised SoTA performance on two public segmentation datasets đ. \nCheck it out: '],18,12,263
440,152,1358763306895486985,1277975431723900936,Laura Rogers,"New co-authored paper on arXiv today! We find evidence that exoplanetary bodies have the same refractory composition as their host star Some white dwarfs show evidence that they have accreted planetary material. Analysing their spectra reveals the bulk composition of the exoplanetary material that they have accreted. We analyse the abundances of a wide binary system consisting of a K-dwarf and a white dwarf which has accreted planetary material. As binary pairs are chemically homogeneous, the K-dwarf is used as a proxy for the progenitor composition of the white dwarf. The abundances of the K-dwarf and the planetary material polluting the atmosphere of the white dwarf are consistent with the hypothesis that exoplanetary bodies have the same refractory composition as their host stars. Check out the paper for more detail, discussions and caveats @MarcoMarco1822 Potentially! Although planet formation is very complicated. It may help to target certain stars when looking for potential exoplanets where life as we know it could exist! @MarcoMarco1822 We can use observations of polluted white dwarfs to infer if there was a core/mantle or even water sometimes, however, these planets have been destroyed so this makes it hard to find life there!",https://arxiv.org/abs/2102.02843,"Planets and stars ultimately form out of the collapse of the same cloud of gas. Whilst planets, and planetary bodies, readily loose volatiles, a common hypothesis is that they retain the same refractory composition as their host star. This is true within the Solar System. The refractory composition of chondritic meteorites, Earth and other rocky planetary bodies are consistent with solar, within the observational errors. This work aims to investigate whether this hypothesis holds for exoplanetary systems. If true, the internal structure of observed rocky exoplanets can be better constrained using their host star abundances. In this paper, we analyse the abundances of the K-dwarf, G200-40, and compare them to its polluted white dwarf companion, WD 1425+540. The white dwarf has accreted planetary material, most probably a Kuiper belt-like object, from an outer planetary system surviving the star's evolution to the white dwarf phase. Given that binary pairs are chemically homogeneous, we use the binary companion, G200-40, as a proxy for the composition of the progenitor to WD 1425+540. We show that the elemental abundances of the companion star and the planetary material accreted by WD 1425+540 are consistent with the hypothesis that planet and host-stars have the same true abundances, taking into account the observational errors. ","Host-star and exoplanet compositions: a pilot study usinga wide binary
with a polluted white dwarf",7,"['New co-authored paper on arXiv today! \nWe find evidence that exoplanetary bodies have the same refractory composition as their host star\n', 'Some white dwarfs show evidence that they have accreted planetary material. Analysing their spectra reveals the bulk composition of the exoplanetary material that they have accreted.', 'We analyse the abundances of a wide binary system consisting of a K-dwarf and a white dwarf which has accreted planetary material. As binary pairs are chemically homogeneous, the K-dwarf is used as a proxy for the progenitor composition of the white dwarf.', 'The abundances of the K-dwarf and the planetary material polluting the atmosphere of the white dwarf are consistent with the hypothesis that exoplanetary bodies have the same refractory composition as their host stars.', 'Check out the paper for more detail, discussions and caveats', '@MarcoMarco1822 Potentially! Although planet formation is very complicated. It may help to target certain stars when looking for potential exoplanets where life as we know it could exist!', '@MarcoMarco1822 We can use observations of polluted white dwarfs to infer if there was a core/mantle or even water sometimes, however, these planets have been destroyed so this makes it hard to find life there!']",21,02,1261
441,107,1055295872500527104,65876824,Jascha Sohl-Dickstein,"Learned optimizers with less mind numbing pain! We analyze, and propose a solution to, pathologies in meta-training via unrolled optimization. Then we meta-train an optimizer targeting CNN training that outperforms SGD/Adam by 5x (!!!) wall clock time. Interestingly, we can also meta-train an optimizer to target *validation* rather than train loss, in which case we achieve better validation loss than first order methods. Work with @Luke_Metz @niru_m @JvNixon Daniel Freeman.",https://arxiv.org/abs/1810.10180,"Deep learning has shown that learned functions can dramatically outperform hand-designed functions on perceptual tasks. Analogously, this suggests that learned optimizers may similarly outperform current hand-designed optimizers, especially for specific problems. However, learned optimizers are notoriously difficult to train and have yet to demonstrate wall-clock speedups over hand-designed optimizers, and thus are rarely used in practice. Typically, learned optimizers are trained by truncated backpropagation through an unrolled optimization process resulting in gradients that are either strongly biased (for short truncations) or have exploding norm (for long truncations). In this work we propose a training scheme which overcomes both of these difficulties, by dynamically weighting two unbiased gradient estimators for a variational loss on optimizer performance, allowing us to train neural networks to perform optimization of a specific task faster than tuned first-order methods. We demonstrate these results on problems where our learned optimizer trains convolutional networks faster in wall-clock time compared to tuned first-order methods and with an improvement in test loss. ","Understanding and correcting pathologies in the training of learned
optimizers",2,"['Learned optimizers with less mind numbing pain! We analyze, and propose a solution to, pathologies in meta-training via unrolled optimization. Then we meta-train an optimizer targeting CNN training that outperforms SGD/Adam by 5x (!!!) wall clock time. ', 'Interestingly, we can also meta-train an optimizer to target *validation* rather than train loss, in which case we achieve better validation loss than first order methods. Work with @Luke_Metz @niru_m @JvNixon Daniel Freeman.']",18,10,492
442,63,1083410795407372293,70874545,Josh Lothringer,"Check out a new paper () led by Ian Crossfield, where we detect 3(!) isotopologues of CO in both stars of an M-dwarf binary. This gives us C12/C13 and O16/O18 ratios, telling us this binary was enriched by a massive core-collapse SN. You can hear Ian talk about this in Session 420 this morning at 11:20 at #aas233, 10 minutes after I'm done giving my dissertation talk in Session 404! @StellarTayar That's a good question- I don't have a good intuition when it comes to stellar abundances so I would probably chalk it up to uncertainty. Maybe that is good reason to get add'l observations though.",https://arxiv.org/abs/1901.02607,"Low-mass M dwarfs represent the most common outcome of star formation, but their complex emergent spectra hinder detailed studies of their composition and initial formation. The measurement of isotopic ratios is a key tool that has been used to unlock the formation of our Solar System, the Sun, and the nuclear processes within more massive stars. We observed GJ 745AB, two M dwarfs orbiting in a wide binary, with the IRTF/iSHELL spectrograph. Our spectroscopy of CO in these stars at the 4.7 micron fundamental and 2.3 micron first-overtone rovibrational bandheads reveals 12C16O, 13C16O, and 12C18O in their photospheres. Since the stars are fully convective, the atomic constituents of these isotopologues should be uniformly mixed throughout the stars' interiors. We find that in these M dwarfs, both 12C/13C and 16O/18O greatly exceed the Solar values. These measurements cannot be explained solely by models of Galactic chemical evolution, but require that the stars formed from an ISM significantly enriched by material ejected from an exploding core-collape supernova. These isotopic measurements complement the elemental abundances provided by large-scale spectroscopic surveys, and open a new window onto studies of Galactic evolution, stellar populations, and individual systems. ",Unusual Isotopic Abundances in a Fully-Convective Stellar Binary,3,"['Check out a new paper () led by Ian Crossfield, where we detect 3(!) isotopologues of CO in both stars of an M-dwarf binary. This gives us C12/C13 and O16/O18 ratios, telling us this binary was enriched by a massive core-collapse SN.', ""You can hear Ian talk about this in Session 420 this morning at 11:20 at #aas233, 10 minutes after I'm done giving my dissertation talk in Session 404!"", ""@StellarTayar That's a good question- I don't have a good intuition when it comes to stellar abundances so I would probably chalk it up to uncertainty. Maybe that is good reason to get add'l observations though.""]",19,01,603
443,125,1390533430677540866,1113856096119197699,Lucas Lamata,"New paper today! A rewarding collaboration among three nuclear physicists and one quantum technology theorist. Four Andalusian scientists, three from @unisevilla and one from @UniHuelva . Looking forward to further works along these lines! ",https://arxiv.org/abs/2105.02834,"A digital quantum simulation of the Agassi model from nuclear physics is proposed and analyzed. The proposal is worked out for the case with four different sites. Numerical simulations and analytical estimations are presented to illustrate the feasibility of this proposal with current technology. The proposed approach is fully scalable to a larger number of sites. The use of a quantum correlation function as a probe to explore the quantum phases by quantum simulating the time dynamics, with no need of computing the ground state, is also studied. Evidence is given showing that the amplitude of the time dynamics of a correlation function in this quantum simulation is linked to the different quantum phases of the system. This approach establishes an avenue for the digital quantum simulation of useful models in nuclear physics. ",A digital quantum simulation of the Agassi model,1,"['New paper today! A rewarding collaboration among three nuclear physicists and one quantum technology theorist. Four Andalusian scientists, three from \n@unisevilla and one from @UniHuelva . Looking forward to further works along these lines!\n ']",21,05,253
444,49,1151515352121004032,68538286,Dan Hendrycks,"Natural Adversarial Examples are real-world and unmodified examples which cause classifiers to be consistently confused. The new dataset has 7,500 images, which we personally labeled over several months. Paper: Dataset and code: @neuroecology @rbhar90 From the introduction, @goodfellow_ian et al. define adversarial examples as “inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake.” @gwern @neuroecology @rbhar90 Yes, this is hard example mining, just as adversarial l_p perturbations are perturbations mined from the l_p ball.",https://arxiv.org/abs/1907.07174,"We introduce two challenging datasets that reliably cause machine learning model performance to substantially degrade. The datasets are collected with a simple adversarial filtration technique to create datasets with limited spurious cues. Our datasets' real-world, unmodified examples transfer to various unseen models reliably, demonstrating that computer vision models have shared weaknesses. The first dataset is called ImageNet-A and is like the ImageNet test set, but it is far more challenging for existing models. We also curate an adversarial out-of-distribution detection dataset called ImageNet-O, which is the first out-of-distribution detection dataset created for ImageNet models. On ImageNet-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%, and its out-of-distribution detection performance on ImageNet-O is near random chance levels. We find that existing data augmentation techniques hardly boost performance, and using other public training datasets provides improvements that are limited. However, we find that improvements to computer vision architectures provide a promising path towards robust models. ",Natural Adversarial Examples,3,"['Natural Adversarial Examples are real-world and unmodified examples which cause classifiers to be consistently confused. The new dataset has 7,500 images, which we personally labeled over several months.\nPaper: \nDataset and code: ', '@neuroecology @rbhar90 From the introduction, @goodfellow_ian et al. define adversarial examples as “inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake.”', '@gwern @neuroecology @rbhar90 Yes, this is hard example mining, just as adversarial l_p perturbations are perturbations mined from the l_p ball.']",19,07,612
445,80,1060116932836409344,1230746966,Adam Płoszaj,"More flight connections and proximity of airport increase the number of coauthored scientific papers. More in my & @katycns & @everyxs new paper ""The impact of air transport availability on research collaboration: A case study of four universities"": ",https://arxiv.org/abs/1811.02106,This paper analyzes the impact of air transport connectivity and accessibility on scientific collaboration. ,"The impact of air transport availability on research collaboration: A
case study of four universities",1,"['More flight connections and proximity of airport increase the number of coauthored scientific papers. More in my & @katycns & @everyxs new paper ""The impact of air transport availability on research collaboration: A case study of four universities"": ']",18,11,263
446,116,1391353801953746951,1215058162551873537,Elizabeth Bondi-Kelly,"Very excited to share a new paper, entitled ""Envisioning Communities: A Participatory Approach Towards AI for Social Good,"" at @AIESConf / #AIES2021. Thanks to my amazing co-authors @lilyxu0, @dacostanavas, @JacksonAKillian. Preprint posted at (1/4) In our paper, we present PACT, a framework for including community members as partners throughout an AI for social good project, including to define ""good"" in their context. (2/4) We use the capabilities and participatory approaches as conceptual tools. The capabilities approach focuses on the notion that all human beings should have a set of substantive liberties that allow them to function in society in the ways they choose. (3/4) We also provide guiding questions for AI researchers implementing this framework. We, as AI researchers dedicated to the advancement of social good, must make a PACT with communities to find our way forward together. (4/4)",http://arxiv.org/abs/2105.01774,"Research in artificial intelligence (AI) for social good presupposes some definition of social good, but potential definitions have been seldom suggested and never agreed upon. The normative question of what AI for social good research should be ""for"" is not thoughtfully elaborated, or is frequently addressed with a utilitarian outlook that prioritizes the needs of the majority over those who have been historically marginalized, brushing aside realities of injustice and inequity. We argue that AI for social good ought to be assessed by the communities that the AI system will impact, using as a guide the capabilities approach, a framework to measure the ability of different policies to improve human welfare equity. Furthermore, we lay out how AI research has the potential to catalyze social progress by expanding and equalizing capabilities. We show how the capabilities approach aligns with a participatory approach for the design and implementation of AI for social good research in a framework we introduce called PACT, in which community members affected should be brought in as partners and their input prioritized throughout the project. We conclude by providing an incomplete set of guiding questions for carrying out such participatory AI research in a way that elicits and respects a community's own definition of social good. ","Envisioning Communities: A Participatory Approach Towards AI for Social
Good",4,"['Very excited to share a new paper, entitled ""Envisioning Communities: A Participatory Approach Towards AI for Social Good,"" at @AIESConf / #AIES2021. Thanks to my amazing co-authors @lilyxu0, @dacostanavas, @JacksonAKillian. Preprint posted at (1/4) ', 'In our paper, we present PACT, a framework for including community members as partners throughout an AI for social good project, including to define ""good"" in their context. (2/4)', 'We use the capabilities and participatory approaches as conceptual tools. The capabilities approach focuses\non the notion that all human beings should have a set of substantive liberties that allow them to function in society in the ways they choose. (3/4)', 'We also provide guiding questions for AI researchers implementing this framework. We, as AI researchers dedicated to the advancement of social good, must make a PACT with communities to find our way forward together. (4/4)']",21,05,923
447,129,1412619664157446144,1076577039669489664,Andreas Blattmann,"Have you ever looked at a picture and wished you could bring it to life? If so, you should definitely check out our new paper “iPOKE: Poking a Still Image for Controlled Stochastic Video Synthesis”. arxiv: Project page: We present iPOKE, a model for locally controlled, stochastic video synthesis based on poking a single pixel in a static scene, that enables users to animate still images only with simple mouse drags. J/w @timoMil, @mdorkenw and B.Ommer. code: ",https://arxiv.org/abs/2107.02790,"How would a static scene react to a local poke? What are the effects on other parts of an object if you could locally push it? There will be distinctive movement, despite evident variations caused by the stochastic nature of our world. These outcomes are governed by the characteristic kinematics of objects that dictate their overall motion caused by a local interaction. Conversely, the movement of an object provides crucial information about its underlying distinctive kinematics and the interdependencies between its parts. This two-way relation motivates learning a bijective mapping between object kinematics and plausible future image sequences. Therefore, we propose iPOKE -- invertible Prediction of Object Kinematics -- that, conditioned on an initial frame and a local poke, allows to sample object kinematics and establishes a one-to-one correspondence to the corresponding plausible videos, thereby providing a controlled stochastic video synthesis. In contrast to previous works, we do not generate arbitrary realistic videos, but provide efficient control of movements, while still capturing the stochastic nature of our environment and the diversity of plausible outcomes it entails. Moreover, our approach can transfer kinematics onto novel object instances and is not confined to particular object classes. Our project page is available at this https URL ",iPOKE: Poking a Still Image for Controlled Stochastic Video Synthesis,2,"['Have you ever looked at a picture and wished you could bring it to life? If so, you should definitely check out our new paper “iPOKE: Poking a Still Image for Controlled Stochastic Video Synthesis”.\n\narxiv: \nProject page: ', 'We present iPOKE, a model for locally controlled, stochastic video synthesis based on poking a single pixel in a static scene, that enables users to animate still images only with simple mouse drags.\n\nJ/w @timoMil, @mdorkenw and B.Ommer.\ncode: https://t.co/eRARsZmtXv https://t.co/ScrOrrEdcj']",21,07,497
448,88,1491741211064946688,1364176717477273600,Fergus Cullen,"New paper on the arXiv today, lead by PhD student Ryan Begley. In it we estimate the LyC escape fraction in star-forming galaxies at z=3.5, combining data from VANDELS (for robust spec-z's) with the deep VIMOS U-band imaging in CDFS (tracing LyC). Using detailed IGM+CGM modelling to marginalise of the full sightline distribution, we fit the ionizing to non-ionizing flux ratio distribution of our sample and find an average escape fraction of f_esc = 0.07 +/- 0.02. We also find trends with various galaxy physical properties: galaxies with larger LyA EW's, lower dust and fainter intrinsic UV luminosities have higher f_esc. In contrast, we see no clear stellar mass trend. I think the paper nicely demonstrates that we can measure f_esc from broad-band photometry as long as we have a large enough samples with spec-z's, accurately model the IGM+CGM, and fit the full flux ratio distribution (e.g., rather than stacking images). .. now we just need larger sample sizes so that we can fit more complex underlying f_esc distributions!",https://arxiv.org/abs/2202.04088,"We present a study designed to measure the average LyC escape fraction ($\langle f_{\rm esc}\rangle$) of star-forming galaxies at z=3.5. We assemble a sample of 148 galaxies from the VANDELS survey at $3.35\leq z_{\rm spec}\leq3.95$, selected to minimize line-of-sight contamination of their photometry. For this sample, we use ultra-deep, ground-based, $U-$band imaging and HST $V-$band imaging to robustly measure the distribution of $\mathcal{R_{\rm obs}}$ $=(L_{\rm LyC}/L_{\rm UV})_{\rm obs}$. We then model the distribution as a function of $\langle f_{\rm esc}\rangle$, carefully accounting for attenuation by dust, and the IGM (and CGM). A maximum likelihood fit to the $\mathcal{R_{\rm obs}}$ distribution returns a best-fitting value of $\langle f_{\rm esc}\rangle =0.07\pm0.02$, a result confirmed using an alternative Bayesian inference technique (both exclude $\langle f_{\rm esc}\rangle=0.0$ at $> 3\sigma$). By splitting our sample in two, we find evidence that $\langle f_{\rm esc}\rangle$ is positively correlated with Ly$\alpha$ equivalent width, with high and low sub-samples returning best fits of $\langle f_{\rm esc}\rangle=0.12^{+0.06}_{-0.04}$ and $\langle f_{\rm esc} \rangle=0.02^{+0.02}_{-0.01}$, respectively. In contrast, we find evidence that $\langle f_{\rm esc}\rangle$ is anti-correlated with intrinsic UV luminosity and UV dust attenuation; with low UV luminosity and dust attenuation sub-samples returning best fits in the range $0.10 \leq \langle f_{\rm esc}\rangle \leq 0.22$. We do not find evidence for a clear correlation between $f_{\rm esc}$ and galaxy stellar mass, suggesting it is not a primary indicator of leakage. Although larger samples are needed to further explore these trends, they suggest that it is entirely plausible that the low dust and metallicity galaxies found at z > 6 will display the $\langle f_{\rm esc}\rangle\geq0.1$ required to drive reionization. ","The VANDELS survey: a measurement of the average Lyman-continuum escape
fraction of star-forming galaxies at z=3.5",5,"[""New paper on the arXiv today, lead by PhD student Ryan Begley. In it we estimate the LyC escape fraction in star-forming galaxies at z=3.5, combining data from VANDELS (for robust spec-z's) with the deep VIMOS U-band imaging in CDFS (tracing LyC). "", 'Using detailed IGM+CGM modelling to marginalise of the full sightline distribution, we fit the ionizing to non-ionizing flux ratio distribution of our sample and find an average escape fraction of f_esc = 0.07 +/- 0.02. https://t.co/VUxdFV3BXp', ""We also find trends with various galaxy physical properties: galaxies with larger LyA EW's, lower dust and fainter intrinsic UV luminosities have higher f_esc. In contrast, we see no clear stellar mass trend. https://t.co/Bm9WGlpQXE"", ""I think the paper nicely demonstrates that we can measure f_esc from broad-band photometry as long as we have a large enough samples with spec-z's, accurately model the IGM+CGM, and fit the full flux ratio distribution (e.g., rather than stacking images)."", '.. now we just need larger sample sizes so that we can fit more complex underlying f_esc distributions!']",22,02,1057
449,182,1284035229200658432,2491211646,Matthieu Bethermin,"Is there already a lot of dust-obscured (orange and red) star formation 1 Gyr after the Big Bang? (uncorrected UV in blue) The new paper of Yana Khusanova (former PhD student at @LAM_Marseille) and the ALPINE collaboration suggest that it is probably true! We also used the same dataset to constrain the position of the ""main-sequence"" of star-forming galaxies using both UV and FIR data in the 4<z<6 range. No big difference with dust-corrected UV. Phew! ",https://arxiv.org/abs/2007.08384,"Star formation rate (SFR) measurements at z>4 have relied mostly on rest-frame far-ultraviolet (FUV) observations. The corrections for dust attenuation based on IRX-$\beta$ relation are highly uncertain and are still debated in the literature. Hence, rest-frame far-infrared (FIR) observations are necessary to constrain the dust-obscured component of the SFR. In this paper, we exploit the rest-frame FIR continuum observations collected by the ALMA Large Program to INvestigate [CII] at Early times (ALPINE) to directly constrain the obscured SFR in galaxies at 4.4 ', 'We also used the same dataset to constrain the position of the ""main-sequence"" of star-forming galaxies using both UV and FIR data in the 4<z<6 range. No big difference with dust-corrected UV. Phew! https://t.co/phe62urECd']",20,07,482
450,110,1141770573305044992,1868950795,Ruth Misener,"New work on robust optimization for the pooling problem by @johanneswiebe (joint w/ Schlumberger's Inês Cecílio & me). Preprint: Journal: The paper will be part of a special issue titled ""I&ECR 2019 Class of Influential Researchers"" ",https://arxiv.org/abs/1906.07612,"The pooling problem has applications, e.g., in petrochemical refining, water networks, and supply chains and is widely studied in global optimization. To date, it has largely been treated deterministically, neglecting the influence of parametric uncertainty. This paper applies two robust optimization approaches, reformulation and cutting planes, to the non-linear, non-convex pooling problem. Most applications of robust optimization have been either convex or mixed-integer linear problems. We explore the suitability of robust optimization in the context of global optimization problems which are concave in the uncertain parameters by considering the pooling problem with uncertain inlet concentrations. We compare the computational efficiency of reformulation and cutting plane approaches for three commonly-used uncertainty set geometries on 14 pooling problem instances and demonstrate how accounting for uncertainty changes the optimal solution. ",Robust optimization for the pooling problem,1,"['New work on robust optimization for the pooling problem by @johanneswiebe (joint w/ Schlumberger\'s Inês Cecílio & me). Preprint: Journal: The paper will be part of a special issue titled ""I&ECR 2019 Class of Influential Researchers"" ']",19,06,253
451,22,1443268136053067776,4639078397,John Wise,"New paper day! Many thanks to my co-authors (@jaregan @bwoshea @TurloughDownes, Norman, Willott), in particular the lead author Tyrone Woods If some of the first stars were really massive, this is how we'd detect them with JWST and LUVOIR ",https://arxiv.org/abs/2109.13279,"Identifying stars formed in pristine environments (Pop III) within the first billion years is vital to uncovering the earliest growth and chemical evolution of galaxies. Pop III galaxies, however, are typically expected to be too faint and too few in number to be detectable by forthcoming instruments without extremely long integration times and/or extreme lensing. In an environment, however, where star formation is suppressed until a halo crosses the atomic cooling limit (e.g., by a modest Lyman-Werner flux, high baryonic streaming velocities, and/or dynamical heating effects),primordial halos can form substantially more numerous and more massive stars. Some of these stars will in-turn be accreting more rapidly than they can thermally relax at any given time. Using high resolution cosmological zoom-in simulations of massive star formation in high-z halos, we find that such rapidly accreting stars produce prominent spectral features which would be detectable by {\it JWST}. The rapid accretion episodes within the halo lead to stochastic reprocessing of 0--20\% of the total stellar emission into the rest-frame optical over long timescales, a unique signature which may allow deep observations to identify such objects out to $z \sim 10-13$ using mid- and wide-band NIRCam colors alone. ","Some First Stars Were Red: Detecting Signatures of Massive Population
III Formation Through Long-Term Stochastic Color Variations",1,"[""New paper day! Many thanks to my co-authors (@jaregan @bwoshea @TurloughDownes, Norman, Willott), in particular the lead author Tyrone Woods\n\nIf some of the first stars were really massive, this is how we'd detect them with JWST and LUVOIR ""]",21,09,252
452,20,1299306215491670020,933713808018862082,Cian Scannell,New paper online using domain-adversarial learning to train a U-Net to segment cardiac MR images that generalises well across different domains (MR scanners). Accepted at STACOM @MICCAI20 for the M&Ms challenge. @achir76 @mitkoveta The idea (Ganin et al. JMLR 2016) is to simultaneously train the segmentation U-Net and a domain classifier to discriminate between input domains based on the internal representations of the U-Net. Then to update the weights of U-Net to maximise the loss of the domain classifier This ensures that the domain of the input cannot be recovered from the features learned by the U-Net. It learns features that are less dependent on domain information than conventional training (t-SNE embeddings shown) and performance doesn't drop off when applied to new domains ,https://arxiv.org/abs/2008.11776,"Cine cardiac magnetic resonance (CMR) has become the gold standard for the non-invasive evaluation of cardiac function. In particular, it allows the accurate quantification of functional parameters including the chamber volumes and ejection fraction. Deep learning has shown the potential to automate the requisite cardiac structure segmentation. However, the lack of robustness of deep learning models has hindered their widespread clinical adoption. Due to differences in the data characteristics, neural networks trained on data from a specific scanner are not guaranteed to generalise well to data acquired at a different centre or with a different scanner. In this work, we propose a principled solution to the problem of this domain shift. Domain-adversarial learning is used to train a domain-invariant 2D U-Net using labelled and unlabelled data. This approach is evaluated on both seen and unseen domains from the M\&Ms challenge dataset and the domain-adversarial approach shows improved performance as compared to standard training. Additionally, we show that the domain information cannot be recovered from the learned features. ","Domain-Adversarial Learning for Multi-Centre, Multi-Vendor, and
Multi-Disease Cardiac MR Image Segmentation",3,"['New paper online using domain-adversarial learning to train a U-Net to segment cardiac MR images that generalises well across different domains (MR scanners). Accepted at STACOM @MICCAI20 for the M&Ms challenge. @achir76 @mitkoveta \n', 'The idea (Ganin et al. JMLR 2016) is to simultaneously train the segmentation U-Net and a domain classifier to discriminate between input domains based on the internal representations of the U-Net. Then to update the weights of U-Net to maximise the loss of the domain classifier https://t.co/1b4oYmaPPv', ""This ensures that the domain of the input cannot be recovered from the features learned by the U-Net.\nIt learns features that are less dependent on domain information than conventional training (t-SNE embeddings shown) and performance doesn't drop off when applied to new domains https://t.co/HNOv31zCXO""]",20,08,812
453,32,1376901591870017536,769188626324398080,Heiga Zen (全 炳河),"New paper from our team: Ye Jia, Heiga Zen, Jonathan Shen, Yu Zhang, Yonghui Wu ""PnG BERT: Augmented BERT on Phonemes and Graphemes for Neural TTS"" Arxiv: Samples: ""This paper introduces PnG BERT, a new encoder model for neural TTS. This model is augmented from the original BERT model, by taking both phoneme and grapheme representations of text as input, as well as the word-level alignment between them. It can be pre-trained on a large text corpus in a self-supervised manner, and fine-tuned in a TTS task. Experimental results show that a neural TTS model using a pre-trained PnG BERT as its encoder yields more natural prosody and more accurate pronunciation than a baseline model using only phoneme input with no pre-training. Subjective side-by-side preference evaluations show that raters have no statistically significant preference between the speech synthesized using a PnG BERT and ground truth recordings from professional speakers.""",https://arxiv.org/abs/2103.15060,"This paper introduces PnG BERT, a new encoder model for neural TTS. This model is augmented from the original BERT model, by taking both phoneme and grapheme representations of text as input, as well as the word-level alignment between them. It can be pre-trained on a large text corpus in a self-supervised manner, and fine-tuned in a TTS task. Experimental results show that a neural TTS model using a pre-trained PnG BERT as its encoder yields more natural prosody and more accurate pronunciation than a baseline model using only phoneme input with no pre-training. Subjective side-by-side preference evaluations show that raters have no statistically significant preference between the speech synthesized using a PnG BERT and ground truth recordings from professional speakers. ",PnG BERT: Augmented BERT on Phonemes and Graphemes for Neural TTS,4,"['New paper from our team: \n\nYe Jia, Heiga Zen, Jonathan Shen, Yu Zhang, Yonghui Wu\n""PnG BERT: Augmented BERT on Phonemes and Graphemes for Neural TTS""\n\nArxiv: \nSamples: ', '""This paper introduces PnG BERT, a new encoder model for neural TTS. This model is augmented from the original BERT model, by taking both phoneme and grapheme representations of text as input, as well as the word-level alignment between them.', 'It can be pre-trained on a large text corpus in a self-supervised manner, and fine-tuned in a TTS task. Experimental results show that a neural TTS model using a pre-trained PnG BERT as its encoder yields more natural prosody and more accurate pronunciation than a baseline model', 'using only phoneme input with no pre-training. Subjective side-by-side preference evaluations show that raters have no statistically significant preference between the speech synthesized using a PnG BERT and ground truth recordings from professional speakers.""']",21,03,962
454,66,1406918712058433540,1091042849561473031,Alexander Kolesnikov đșđŠ,"New paper 𧔠: How to train your ViT? It is common to train vision transformers on ImageNet-1k (~1.3m images) for 300 epochs. We show that you are better off investing the same compute budget for training on ImageNet-21k (~13m images) for 30 epochs. But this is just one out of many highlights. We also conduct extremely thorough study on the interplay of model size, dataset size, regularizations and augmentations. Plus extensive transfer learning experiments. Check out our paper if you want to learn how to train ViT models. We are sure there are lots of interesting insights to be drawn from our collection of trained ViTs. So we release them all: >50'000 models! JAX repo: . JAX colab: Or also available in pytorch through timm: . Also check out this great thread with the summary of our insights: Work done with stellar Andreas Steiner, @XiaohuaZhai, @wightmanr, @kyosu and @giffmana.",http://arxiv.org/abs/2106.10270,"Vision Transformers (ViT) have been shown to attain highly competitive performance for a wide range of vision applications, such as image classification, object detection and semantic image segmentation. In comparison to convolutional neural networks, the Vision Transformer's weaker inductive bias is generally found to cause an increased reliance on model regularization or data augmentation (``AugReg'' for short) when training on smaller training datasets. We conduct a systematic empirical study in order to better understand the interplay between the amount of training data, AugReg, model size and compute budget. As one result of this study we find that the combination of increased compute and AugReg can yield models with the same performance as models trained on an order of magnitude more training data: we train ViT models of various sizes on the public ImageNet-21k dataset which either match or outperform their counterparts trained on the larger, but not publicly available JFT-300M dataset. ","How to train your ViT? Data, Augmentation, and Regularization in Vision
Transformers",5,"['New paper 𧔠: How to train your ViT? It is common to train vision transformers on ImageNet-1k (~1.3m images) for 300 epochs. We show that you are better off investing the same compute budget for training on ImageNet-21k (~13m images) for 30 epochs. ', 'But this is just one out of many highlights. We also conduct extremely thorough study on the interplay of model size, dataset size, regularizations and augmentations. Plus extensive transfer learning experiments. Check out our paper if you want to learn how to train ViT models. https://t.co/fYX1M2XIny', ""We are sure there are lots of interesting insights to be drawn from our collection of trained ViTs. So we release them all: >50'000 models!\n\nJAX repo: https://t.co/9SocJQaQtb.\nJAX colab: https://t.co/NwIS8nPXHD\n\nOr also available in pytorch through timm: https://t.co/HZie2tyfhQ."", 'Also check out this great thread with the summary of our insights: https://t.co/wrGNnZFuab', 'Work done with stellar Andreas Steiner, @XiaohuaZhai, @wightmanr, @kyosu and @giffmana.']",21,06,940
455,112,1273229571475812352,1080138653807058945,Bertrand Charpentier,"Assuming the knowledge of anomalous data at train time is unrealistic. In our new PostNet paper (), we use Dirichlet distributions and NF to produce uncertainty-aware predictions without training on OOD data. With @DanielZuegner and @guennemann ",https://arxiv.org/abs/2006.09239,"Accurate estimation of aleatoric and epistemic uncertainty is crucial to build safe and reliable systems. Traditional approaches, such as dropout and ensemble methods, estimate uncertainty by sampling probability predictions from different submodels, which leads to slow uncertainty estimation at inference time. Recent works address this drawback by directly predicting parameters of prior distributions over the probability predictions with a neural network. While this approach has demonstrated accurate uncertainty estimation, it requires defining arbitrary target parameters for in-distribution data and makes the unrealistic assumption that out-of-distribution (OOD) data is known at training time. In this work we propose the Posterior Network (PostNet), which uses Normalizing Flows to predict an individual closed-form posterior distribution over predicted probabilites for any input sample. The posterior distributions learned by PostNet accurately reflect uncertainty for in- and out-of-distribution data -- without requiring access to OOD data at training time. PostNet achieves state-of-the art results in OOD detection and in uncertainty calibration under dataset shifts. ","Posterior Network: Uncertainty Estimation without OOD Samples via
Density-Based Pseudo-Counts",1,"['Assuming the knowledge of anomalous data at train time is unrealistic. In our new PostNet paper (), we use Dirichlet distributions and NF to produce uncertainty-aware predictions without training on OOD data.\n\nWith @DanielZuegner and @guennemann ']",20,06,257
456,16,1474390633141968899,1345813205440995333,Ben Freed,"Check out our new paper! We automatically decompose large MA problems to be solved efficiently with #reinforcementlearning while maintaining cooperation đŠŸ.(with @_AdityaKapoor_, @ianabraha, @jeffschneider, Jeff Schneider, and @howiechoset) #deeplearning ",http://arxiv.org/abs/2112.12740,"One of the preeminent obstacles to scaling multi-agent reinforcement learning to large numbers of agents is assigning credit to individual agents' actions. In this paper, we address this credit assignment problem with an approach that we call \textit{partial reward decoupling} (PRD), which attempts to decompose large cooperative multi-agent RL problems into decoupled subproblems involving subsets of agents, thereby simplifying credit assignment. We empirically demonstrate that decomposing the RL problem using PRD in an actor-critic algorithm results in lower variance policy gradient estimates, which improves data efficiency, learning stability, and asymptotic performance across a wide array of multi-agent RL tasks, compared to various other actor-critic approaches. Additionally, we relate our approach to counterfactual multi-agent policy gradient (COMA), a state-of-the-art MARL algorithm, and empirically show that our approach outperforms COMA by making better use of information in agents' reward streams, and by enabling recent advances in advantage estimation to be used. ",Learning Cooperative Multi-Agent Policies with Partial Reward Decoupling,1,"['Check out our new paper! We automatically decompose large MA problems to be solved efficiently with #reinforcementlearning while maintaining cooperation đŠŸ.(with @_AdityaKapoor_, @ianabraha, @jeffschneider, Jeff Schneider, and @howiechoset) #deeplearning\n\n']",21,12,260
457,58,1518761665847775233,1019760963569049601,Almog Yalinewich,"New paper on the arxiv, in which we propose a model for fast radio bursts, in which fast spinning neutron stars with low magnetic fields are the progenitors, rather than magnetars (which are the progenitors in most other models) @di_goldene_pave oops, should be equation 19. Thanks for catching it! The emission mechanism for the counterpart is the same as for the radio emission, the only difference is that the shock radius is smaller, and the Lorentz factor is higher, so the photons get boosted to a higher energy.",https://arxiv.org/abs/2204.11663,"Recent observations of coherent radiation from the Crab pulsar (Bij et al 2021) suggest the emission is driven by an ultra - relativistic ($\gamma \sim 10^4$), cold plasma flow. A relativistically expanding plasma shell can compress the ambient magnetic field, like a moving mirror, and thus produce coherent radiation whose wavelength is shorter than that of the ambient medium by $\gamma^2$. This mechanism has been studied in the past by Colgate and Noerdelinger (1971), in the context of radio loud supernova explosions. In this work we propose that a similar mechanism drives the coherent emission in fast radio bursts. The high Lorenz factors dramatically lower the implied energy and magnetic field requirements, allowing the spin down energy of regular (or even recycled), fast spinning pulsars, rather than slow spinning magnetars, to explain FRBs. We show that this model can explain the frequency and the time evolution of observed FRBs, as well as their duration, energetics and absence of panchromatic counterparts. We also predict that the peak frequency of sub pulses decline with observation time as $\omega_{\rm obs} \propto t_{\rm obs}^{-1/2}$. Unfortunately, with current capabilities it is not possible to constrain the shape of the curve $\omega_{\rm obs} \left(t_{\rm obs} \right)$. Finally, we find that a variation of this model can explain weaker radio transients, such as the one observed from a galactic magnetar. In this variant, the shock wave produces low frequency photons which are then Compton scattered to the GHz range. ",The Moving Mirror model for Fast Radio Bursts,2,"['New paper on the arxiv, in which we propose a model for fast radio bursts, in which fast spinning neutron stars with low magnetic fields are the progenitors, rather than magnetars (which are the progenitors in most other models)\n', '@di_goldene_pave oops, should be equation 19. Thanks for catching it!\nThe emission mechanism for the counterpart is the same as for the radio emission, the only difference is that the shock radius is smaller, and the Lorentz factor is higher, so the photons get boosted to a higher energy.']",22,04,525
458,15,879318419325190144,1545756036,Mike Boylan-Kolchin,"New paper from V. Robles on cores in dwarf galaxies: sensitive to feedback in CDM, robust to feedback in SIDM. Although, (also out today) argues cores may be observational artifacts. Can't imagine that will be controversial.... @caprastro I like it! Having a few more comparable systems would be reassuring. Hard for me to believe that cores are all erroneous measurements. You? @caprastro Good to know, thanks. Do they have reasonably precise age estimates (say, w/i 2 Gyr)? Formation time seems to be important for core argument",https://arxiv.org/abs/1706.07514v1,"We compare a suite of four simulated dwarf galaxies formed in 10$^{10} M_{\odot}$ haloes of collisionless Cold Dark Matter (CDM) with galaxies simulated in the same haloes with an identical galaxy formation model but a non-zero cross-section for dark matter self-interactions. These cosmological zoom-in simulations are part of the Feedback In Realistic Environments (FIRE) project and utilize the FIRE-2 model for hydrodynamics and galaxy formation physics. We find the stellar masses of the galaxies formed in Self-Interacting Dark Matter (SIDM) with $\sigma/m= 1\, cm^2/g$ are very similar to those in CDM (spanning $M_{\star} \approx 10^{5.7 - 7.0} M_{\odot}$) and all runs lie on a similar stellar mass -- size relation. The logarithmic dark matter density slope ($\alpha=d\log \rho / d\log r$) in the central $250-500$ pc remains steeper than $\alpha= -0.8$ for the CDM-Hydro simulations with stellar mass $M_{\star} \sim 10^{6.6} M_{\odot}$ and core-like in the most massive galaxy. In contrast, every SIDM hydrodynamic simulation yields a flatter profile, with $\alpha >-0.4$. Moreover, the central density profiles predicted in SIDM runs without baryons are similar to the SIDM runs that include FIRE-2 baryonic physics. Thus, SIDM appears to be much more robust to the inclusion of (potentially uncertain) baryonic physics than CDM on this mass scale, suggesting SIDM will be easier to falsify than CDM using low-mass galaxies. Our FIRE simulations predict that galaxies less massive than $M_{\star} < 3 \times 10^6 M_{\odot}$ provide potentially ideal targets for discriminating models, with SIDM producing substantial cores in such tiny galaxies and CDM producing cusps. ","] SIDM on FIRE: Hydrodynamical Self-Interacting Dark Matter simulations of
low-mass dwarf galaxies",4,"['New paper from V. Robles on cores in dwarf galaxies: sensitive to feedback in CDM, robust to feedback in SIDM. ', ""Although, https://t.co/13I0AGcDYJ (also out today) argues cores may be observational artifacts. Can't imagine that will be controversial...."", '@caprastro I like it! Having a few more comparable systems would be reassuring. Hard for me to believe that cores are all erroneous measurements. You?', '@caprastro Good to know, thanks. Do they have reasonably precise age estimates (say, w/i 2 Gyr)? Formation time seems to be important for core argument']",17,06,551
459,46,1471723686923227138,929973145,Dr David Sobral đ«đ,"New paper just accepted in ApJđĄ:đ«The LEGA-C of nature and nurture in stellar populations of galaxies at z~0.6-1.0đ«: D4000 and H-delta reveal different assembly histories for quiescent galaxies in different environments: đ Galaxy evolution is driven by a variety of physical processes that proceed at different rates for different dark matter haloes and environments. A record of this evolution is preserved in galaxy stellar populations, which we can access using absorption-line spectroscopy đ:đ In this paper we explore the outstanding LEGA-C survey (DR3; see ) to investigate the role of the environment and stellar mass on stellar populations (using D4000 and H-delta) at z~0.6-1.0 in the COSMOS field, from poor fields to some rich clusters đ We separate galaxies in terms of star-forming and quiescent, and also centrals and satellites, and leverage the statistical power and depth of LEGA-C to investigate the roles of stellar mass and environment in setting the quiescent fraction, 7 Gyrs ago: We reveal significant gradients in D4000 and H-delta equivalent width (EW) distributions over the stellar mass vs environment 2D spaces at z~0.6-1.0. D4000 and H-delta EWs primarily depend on mass, but they also *depend on environment at fixed stellar mass* for massive galaxies: By splitting the sample into star-forming galaxies and quiescent galaxies we reveal that the significant environmental trends of D4000 and H-delta EW when controlling for stellar mass are *fully* driven by quiescent galaxies đČđ Perhaps surprisingly: đ€ regardless of being centrals or satellites, star-forming galaxies reveal D4000 and H-delta EWs which depend strongly on their stellar mass and are completely independent of the environment at 0.6<z<1.0: Interestingly, the *environmental trends seen for satellite galaxies are fully driven by the trends that hold only for quiescent galaxies*, combined with the strong environmental dependency of the quiescent fraction at fixed stellar mass đ What next? LEGA-C allows to go beyond just 2 indicators. By modelling the full star formation histories of each galaxy we will be able to gain a much deeper insight, so stay tuned and check out other recent LEGA-C papers (+ download the public catalogue ) ",https://arxiv.org/abs/2112.08372,"Galaxy evolution is driven by a variety of physical processes which are predicted to proceed at different rates for different dark matter haloes and environments across cosmic times. A record of this evolution is preserved in galaxy stellar populations, which we can access using absorption-line spectroscopy. Here we explore the large LEGA-C survey (DR3) to investigate the role of the environment and stellar mass on stellar populations at z~0.6-1.0 in the COSMOS field. Leveraging the statistical power and depth of LEGA-C, we reveal significant gradients in D4000 and H-delta equivalent widths (EWs) distributions over the stellar mass vs environment 2D spaces for the massive galaxy population (M>10^10 M$_{\odot}$) at z~0.6-1.0. D4000 and H-delta EWs primarily depend on stellar mass, but they also depend on environment at fixed stellar mass. By splitting the sample into centrals and satellites, and in terms of star-forming galaxies and quiescent galaxies, we reveal that the significant environmental trends of D4000 and H-delta EW when controlling for stellar mass are driven by quiescent galaxies. 
Regardless of being centrals or satellites, star-forming galaxies reveal D4000 and H-delta EWs which depend strongly on their stellar mass and are completely independent of the environment at 0.6 đ ', 'Galaxy evolution is driven by a variety of physical processes that proceed at different rates for different dark matter haloes and environments. A record of this evolution is preserved in galaxy stellar populations, which we can access using absorption-line spectroscopy đ:đ https://t.co/3UQyD2BxE5', 'In this paper we explore the outstanding LEGA-C survey (DR3; see https://t.co/VN78t0MsMl) to investigate the role of the environment and stellar mass on stellar populations (using D4000 and H-delta) at z~0.6-1.0 in the COSMOS field, from poor fields to some rich clusters đ https://t.co/8Cq2Mm4l3s', 'We separate galaxies in terms of star-forming and quiescent, and also centrals and satellites, and leverage the statistical power and depth of LEGA-C to investigate the roles of stellar mass and environment in setting the quiescent fraction, 7 Gyrs ago: https://t.co/RMJXYA4peM', 'We reveal significant gradients in D4000 and H-delta equivalent width (EW) distributions over the stellar mass vs environment 2D spaces at z~0.6-1.0. D4000 and H-delta EWs primarily depend on mass, but they also *depend on environment at fixed stellar mass* for massive galaxies: https://t.co/oijL6AA2xY', 'By splitting the sample into star-forming galaxies and quiescent galaxies we reveal that the significant environmental trends of D4000 and H-delta EW when controlling for stellar mass are *fully* driven by quiescent galaxies đČđ https://t.co/Cfw7wIam9u', 'Perhaps surprisingly: đ€ regardless of being centrals or satellites, star-forming galaxies reveal D4000 and H-delta EWs which depend strongly on their stellar mass and are completely independent of the environment at 0.6<z<1.0: https://t.co/fvn1vVlY3X', 'Interestingly, the *environmental trends seen for satellite galaxies are fully driven by the trends that hold only for quiescent galaxies*, combined with the strong environmental dependency of the quiescent fraction at fixed stellar mass đ https://t.co/F7lwhXnYL4', 'What next? LEGA-C allows to go beyond just 2 indicators. By modelling the full star formation histories of each galaxy we will be able to gain a much deeper insight, so stay tuned and check out other recent LEGA-C papers (+ download the public catalogue https://t.co/VN78t0MsMl) https://t.co/4FOxqXiyv0']",21,12,2320
460,113,1273128407392571394,2566842619,Michal Ć ustr,"Search has played a fundamental role in computer game research since the very beginning. But what is sound search in imperfect information games? Our new theoretical paper ! Unlike in perfect info games, players may need to randomize their strategies in order to play optimally. Think of Poker - if you always bet with strong cards or fold with weak ones, your opponent will figure this out during repeated play and learn to exploit you. In two-player zero-sum games we typically look for strong strategies with guarantees against worst-case adversaries: Nash equilibria (or rather their epsilon-approximation). In ""Sound Search in Imperfect Information Games"" we argue that the fixed-strategy definitions of exploitability and epsilon-Nash equilibria are however ill suited to measure the worst-case performance of an **online** (search) algorithm. In fact, the issues with evaluating the worst-case performance are quite subtle and they are very easy to overlook! This has led to a number of discussions with co-authors :) One of the basic problems is that the play strategy may be only locally consistent: it corresponds to some epsilon-equilibrium for a state, but ""stitching"" different strategies together across all states may not yield epsilon-equilibrium anymore! We thus formalize epsilon-soundness, a concept that connects the worst-case performance of an online algorithm to the performance of an (offline) epsilon-Nash equilibrium. We introduce also a consistency framework -- a hierarchy that connects the behavior of an online algorithm to a Nash equilibrium. Our multiple levels of consistency describe in what sense an online algorithm plays ""just like a fixed Nash equilibrium''. Our definition of soundness and the consistency hierarchy finally provide appropriate tools to analyze online algorithms in imp. info. games. We inspect some of the previous online algorithms in a new light, bringing new insights into their worst case performance guarantees. Finally, I'd like to express my gratitude to @Lifrordi, M. Moravcik, N. Burch, @sharky6000 and @MichaelHBowling for the collaboration! Thank you all for your input, comments and discussions, it is very much appreciated.",https://arxiv.org/abs/2006.08740,"Search has played a fundamental role in computer game research since the very beginning. And while online search has been commonly used in perfect information games such as Chess and Go, online search methods for imperfect information games have only been introduced relatively recently. This paper addresses the question of what is a sound online algorithm in an imperfect information setting of two-player zero-sum games. We argue that the~fixed-strategy~definitions of exploitability and $\epsilon$-Nash equilibria are ill-suited to measure an online algorithm's worst-case performance. We thus formalize $\epsilon$-soundness, a concept that connects the worst-case performance of an online algorithm to the performance of an $\epsilon$-Nash equilibrium. As $\epsilon$-soundness can be difficult to compute in general, we introduce a consistency framework -- a hierarchy that connects an online algorithm's behavior to a Nash equilibrium. These multiple levels of consistency describe in what sense an online algorithm plays ""just like a fixed Nash equilibrium"". 
These notions further illustrate the difference between perfect and imperfect information settings, as the same consistency guarantees have different worst-case online performance in perfect and imperfect information games. The definitions of soundness and the consistency hierarchy finally provide appropriate tools to analyze online algorithms in repeated imperfect information games. We thus inspect some of the previous online algorithms in a new light, bringing new insights into their worst-case performance guarantees. ",Sound Algorithms in Imperfect Information Games,10,"['Search has played a fundamental role in computer game research since the very beginning. But what is sound search in imperfect information games? \n\nOur new theoretical paper ! ', 'Unlike in perfect info games, players may need to randomize their strategies in order to play optimally. Think of Poker - if you always bet with strong cards or fold with weak ones, your opponent will figure this out during repeated play and learn to exploit you. https://t.co/VnpBNhZmg7', 'In two-player zero-sum games we typically look for strong strategies with guarantees against worst-case adversaries: Nash equilibria (or rather their epsilon-approximation).', 'In ""Sound Search in Imperfect Information Games"" we argue that the fixed-strategy definitions of exploitability and epsilon-Nash equilibria are however ill suited to measure the worst-case performance of an **online** (search) algorithm.', 'In fact, the issues with evaluating the worst-case performance are quite subtle and they are very easy to overlook! This has led to a number of discussions with co-authors :)', 'One of the basic problems is that the play strategy may be only locally consistent: it corresponds to some epsilon-equilibrium for a state, but ""stitching"" different strategies together across all states may not yield epsilon-equilibrium anymore!', 'We thus formalize epsilon-soundness, a concept that connects the worst-case performance of an online algorithm to the performance of an (offline) epsilon-Nash equilibrium.', 'We introduce also a consistency framework -- a hierarchy that connects the behavior of an online algorithm to a Nash equilibrium. Our multiple levels of consistency describe in what sense an online algorithm plays ""just like a fixed Nash equilibrium\'\'.', 'Our definition of soundness and the consistency hierarchy finally provide appropriate tools to analyze online algorithms in imp. info. games. We inspect some of the previous online algorithms in a new light, bringing new insights into their worst case performance guarantees.', ""Finally, I'd like to express my gratitude to @Lifrordi, M. Moravcik, N. Burch, @sharky6000 and @MichaelHBowling for the collaboration! Thank you all for your input, comments and discussions, it is very much appreciated.""]",20,06,2214
461,108,1219961900324327430,36396172,Diego R. Amancio,"Our new preprint is out: ""Modeling Economic Networks with Firm-to-Firm Wire Transfers"" ""We study a novel economic network comprised of wire transfers (electronic payment transactions) among the universe of firms in Brazil (6.2 million firms). """,https://arxiv.org/abs/2001.06889,"We study a novel economic network (supply chain) comprised of wire transfers (electronic payment transactions) among the universe of firms in Brazil (6.2 million firms). We construct a directed and weighted network in which vertices represent cities and edges connote pairwise economic dependence between cities. Cities (vertices) represent the collection of all firms in that location and links denote intercity wire transfers. We find a high degree of economic integration among cities in the trade network, which is consistent with the high degree of specialization found across Brazilian cities. We are able to identify which cities have a dominant role in the entire supply chain process using centrality network measures. We find that the trade network has a disassortative mixing pattern, which is consistent with the power-law shape of the firm size distribution in Brazil. After the Brazilian recession in 2014, we find that the disassortativity becomes even stronger as a result of the death of many small firms and the consequent concentration of economic flows on large firms. Our results suggest that recessions have a large impact on the trade network with meaningful and heterogeneous economic consequences across municipalities. We run econometric exercises and find that courts efficiency plays a dual role. From the customer perspective, it plays an important role in reducing contractual frictions as it increases economic transactions between different cities. From the supplier perspective, cities that are central suppliers to the supply chain seem to use courts inefficiency as a lawsuit barrier from their customers. ",Modeling Supply-Chain Networks with Firm-to-Firm Wire Transfers,1,"['Our new preprint is out: ""Modeling Economic Networks with Firm-to-Firm Wire Transfers"" \n\n""We study a novel economic network comprised of wire transfers (electronic payment transactions) among the universe of firms in Brazil (6.2 million firms). ""']",20,01,251
462,8,1476629180766998540,384900803,Shantanu Basu,"New paper from our #research group, an analytic model for why young stars undergo episodic bursts of mass accumulation. Led by our outstanding PhD candidate Indrani Das! Nice way to close out the year. @westernuPhysAst @westernuScience #astronomy ",https://arxiv.org/abs/2112.13856,"We develop a semi-analytic formalism for the determination of the evolution of the stellar mass accretion rate for specified density and velocity profiles that emerge from the runaway collapse of a prestellar cloud core. In the early phase, when the infall of matter from the surrounding envelope is substantial, the star accumulates mass primarily because of envelope-induced gravitational instability in a protostellar disc. In this phase, we model the envelope mass accretion rate from the isothermal free-fall collapse of a molecular cloud core. The disc gains mass from the envelope, and also transports matter to the star via a disc accretion mechanism that includes episodic gravitational instability and mass accretion bursts according to the Toomre $Q$-criterion. In the early phase the envelope accretion is dominant, whereas in the late phase the disc accretion is dominant. In the disc accretion phase, mass is accreted on to the star due to gravitational torques within the spiral structures in the disc, in a manner that analytic theory suggests has a mass accretion rate $\propto t^{-6/5}$. Our model provides a self-consistent evolution of the mass accretion rate by joining the spherical envelope accretion with the disc accretion and accounts for the presence of episodic accretion bursts at appropriate times. We show using a simple example that the burst mode is essential to explain the long-standing 'luminosity problem' of young stellar objects. The bursts are needed to provide a good match to the observed distribution of bolometric luminosities. In contrast, a smoothly time-dependent mass accretion rate, whether monotonically increasing or decreasing, is unable to do so. Our framework reproduces key elements of detailed numerical simulations of disc accretion and can aid in developing intuition about the basic physics as well as in comparing theory with observations. ","A semi-analytic model for the temporal evolution of the episodic
disc-to-star accretion rate during star formation",1,"['New paper from our #research group, an analytic model for why young stars undergo episodic bursts of mass accumulation. Led by our outstanding PhD candidate Indrani Das! Nice way to close out the year. @westernuPhysAst @westernuScience #astronomy ']",21,12,253
463,32,1397908669501632519,1220766363549126656,Alain Rossier,"New paper ""Scaling Properties of Deep Residual Networks"" has been accepted for publication as a conference paper at ICML 2021! @alainsamjr @icmlconf Collaboration with @instadeepai 1/n As you may know, residual network (ResNet) is a rockstar among Deep Learning architectures. It leads to huge improvements over feedforward networks for computer vision and speech recognition. 2/n The forward rule of a ResNet (top) looks oddly familiar to the Euler scheme of a differential equation (bottom). This insight culminated in the now famous Neural ODE model @RickyTQChen that uses various ODE schemes to train their model. 3/n For those of you familiar with numerical scheme of ODEs, the link between the two equations only holds under the particular conditions below... and they may not be satisfied in practice. 4/n We introduce a more general framework for the trained weights of ResNets that admits the Neural ODE assumptions as a special case. We test the framework on synthetic and standardized datasets (coming at you, CIFAR 10). 5/n We found that depending on the activation function, the shape of the trained weights for an absurdly deep network (L ~ 10'000) looks drastically different! left: sigma=tanh; right: sigma=relu 6/n For those of you who know about stochastic calculus, the image on the right looks pretty much like white noise. So let's integrate it... We get a process that looks oddly familiar with a diffusion process (think: Brownian motion)! 7/n In our paper, we lay down two general assumptions that describe the behaviour of trained weights of ResNets. We prove the convergence of their limiting equations, which can be either the Neural ODE, a linear ODE or a linear-quadratic SDE (the interesting stuff below). 8/n Whatâs the point of all this? Well, knowing the scaling limit of trained weights will help you choose your initialization strategy, as well as directly train the scaling limit in the Neural ODE fashion. 9/n Is this the end of the story? Of course not! Our results hold for fully-connected networks, but we observed that convolutional networks have different scaling behaviour (sparse limit?)... stay tuned for the next episode! 10/10, like this thread",https://arxiv.org/abs/2105.12245,"Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation or neither of these. These findings cast doubts on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit. ",Scaling Properties of Deep Residual Networks,10,"['New paper ""Scaling Properties of Deep Residual Networks"" has been accepted for publication as a conference paper at ICML 2021! 
@alainsamjr @icmlconf \nCollaboration with @instadeepai \n\n\n1/n ', 'As you may know, residual network (ResNet) is a rockstar among Deep Learning architectures. It leads to huge improvements over feedforward networks for computer vision and speech recognition.\n\n2/n https://t.co/B5yKkhLeDh', 'The forward rule of a ResNet (top) looks oddly familiar to the Euler scheme of a differential equation (bottom). This insight culminated in the now famous Neural ODE model @RickyTQChen that uses various ODE schemes to train their model.\n\n3/n https://t.co/bbMbF8Gawp', 'For those of you familiar with numerical scheme of ODEs, the link between the two equations only holds under the particular conditions below... and they may not be satisfied in practice.\n\n4/n https://t.co/Sx6VEtn26E', 'We introduce a more general framework for the trained weights of ResNets that admits the Neural ODE assumptions as a special case. We test the framework on synthetic and standardized datasets (coming at you, CIFAR 10).\n\n5/n', ""We found that depending on the activation function, the shape of the trained weights for an absurdly deep network (L ~ 10'000) looks drastically different!\nleft: sigma=tanh; right: sigma=relu\n\n6/n https://t.co/otS7NZlW2R"", ""For those of you who know about stochastic calculus, the image on the right looks pretty much like white noise. So let's integrate it... We get a process that looks oddly familiar with a diffusion process (think: Brownian motion)!\n\n7/n https://t.co/2OAta45u0R"", 'In our paper, we lay down two general assumptions that describe the behaviour of trained weights of ResNets. We prove the convergence of their limiting equations, which can be either the Neural ODE, a linear ODE or a linear-quadratic SDE (the interesting stuff below).\n\n8/n https://t.co/Lg0W74Xs86', 'Whatâs the point of all this? Well, knowing the scaling limit of trained weights will help you choose your initialization strategy, as well as directly train the scaling limit in the Neural ODE fashion.\n\n9/n', 'Is this the end of the story? Of course not! Our results hold for fully-connected networks, but we observed that convolutional networks have different scaling behaviour (sparse limit?)... stay tuned for the next episode!\nhttps://t.co/9PXGsuObLp\n\n10/10, like this thread']",21,05,2254
464,146,1317020465072930817,445739626,Gozde Unal,"An earlier paper from Alican Mertan's MSc thesis work ""Relative Depth Estimation as a Ranking Problem"", where a listwise loss is adapted to relative depth estimation, also proposed a new metric that considers pixel depth ranking accuracy. @ @siu2020medipol",http://arxiv.org/abs/2010.06944,"We present a formulation of the relative depth estimation from a single image problem, as a ranking problem. By reformulating the problem this way, we were able to utilize literature on the ranking problem, and apply the existing knowledge to achieve better results. To this end, we have introduced a listwise ranking loss borrowed from ranking literature, weighted ListMLE, to the relative depth estimation problem. We have also brought a new metric which considers pixel depth ranking accuracy, on which our method is stronger. ",Relative Depth Estimation as a Ranking Problem,1,"['An earlier paper from Alican Mertan\'s MSc thesis work ""Relative Depth Estimation as a Ranking Problem"", where a listwise loss is adapted to relative depth estimation, also proposed a new metric that considers pixel depth ranking accuracy. @ @siu2020medipol']",20,10,263
465,111,1052984056231559168,838292815,Ofir Nachum,"The Laplacian approach is extremely useful in discrete graphs (e.g. mixing times, min-cuts, etc.). How can we apply similar ideas to RL, where eigendecomposition is not easy and the graph is only known via sampling? Find out how in our recent paper: ",https://arxiv.org/abs/1810.04586,"The smallest eigenvectors of the graph Laplacian are well-known to provide a succinct representation of the geometry of a weighted graph. In reinforcement learning (RL), where the weighted graph may be interpreted as the state transition process induced by a behavior policy acting on the environment, approximating the eigenvectors of the Laplacian provides a promising approach to state representation learning. However, existing methods for performing this approximation are ill-suited in general RL settings for two main reasons: First, they are computationally expensive, often requiring operations on large matrices. Second, these methods lack adequate justification beyond simple, tabular, finite-state settings. In this paper, we present a fully general and scalable method for approximating the eigenvectors of the Laplacian in a model-free RL context. We systematically evaluate our approach and empirically show that it generalizes beyond the tabular, finite-state setting. Even in tabular, finite-state settings, its ability to approximate the eigenvectors outperforms previous proposals. Finally, we show the potential benefits of using a Laplacian representation learned using our method in goal-achieving RL tasks, providing evidence that our technique can be used to significantly improve the performance of an RL agent. ","The Laplacian in RL: Learning Representations with Efficient
Approximations",1,"['The Laplacian approach is extremely useful in discrete graphs (e.g. mixing times, min-cuts, etc.). How can we apply similar ideas to RL, where eigendecomposition is not easy and the graph is only known via sampling? Find out how in our recent paper: ']",18,10,256
466,191,1257783060214530048,38844142,kartik goyal,"In our #acl2020nlp paper, we propose a generative model to analyze glyph shapes in early-modern printing. The interpretable variables account for spatial noise while the neurally parametrized uninterpretable latent variable explains other variation. 1/6 This model learns the underlying glyph templates and allows to cluster on the basis of subtle differences in glyph shapes in presence of multiple confounding sources of variance like inking. 2/6 In order to ensure that the expressive neurally parametrized variable does not learn to explain the spatial adjustment noise (translation, shear, rotation, scale) captured by the interpretable variables, it is crucial to restrict the inference network for q(z). 3/6 Our model discovers different fonts used in early modern documents in a completely unsupervised manner! These are the discovered glyph shapes for Fs and Rs in Leviathan (1651?) 4/6 The interpretable variables also provide explicit control. We used the estimated offsets, rotation/shear angles and scaling factors to project the noisy images to a fixed size frame resulting in aligned/comparable input images. 5/6 This is joint work with fantastic collaborators @BergKirkpatrick, @ChrisVVarren, @redpony and Max G'Sell. Code coming soon! 6/6",https://arxiv.org/abs/2005.01646,"We propose a deep and interpretable probabilistic generative model to analyze glyph shapes in printed Early Modern documents. We focus on clustering extracted glyph images into underlying templates in the presence of multiple confounding sources of variance. Our approach introduces a neural editor model that first generates well-understood printing phenomena like spatial perturbations from template parameters via interpertable latent variables, and then modifies the result by generating a non-interpretable latent vector responsible for inking variations, jitter, noise from the archiving process, and other unforeseen phenomena associated with Early Modern printing. Critically, by introducing an inference network whose input is restricted to the visual residual between the observation and the interpretably-modified template, we are able to control and isolate what the vector-valued latent variable captures. We show that our approach outperforms rigid interpretable clustering baselines (Ocular) and overly-flexible deep generative models (VAE) alike on the task of completely unsupervised discovery of typefaces in mixed-font documents. ","A Probabilistic Generative Model for Typographical Analysis of Early
Modern Printing",6,"['In our #acl2020nlp paper, we propose a generative model to analyze glyph shapes in early-modern printing. The interpretable variables account for spatial noise while the neurally parametrized uninterpretable latent variable explains other variation. 1/6 ', 'This model learns the underlying glyph templates and allows to cluster on the basis of subtle differences in glyph shapes in presence of multiple confounding sources of variance like inking. 2/6 https://t.co/DYqNiQvRWr', 'In order to ensure that the expressive neurally parametrized variable does not learn to explain the spatial adjustment noise (translation, shear, rotation, scale) captured by the interpretable variables, it is crucial to restrict the inference network for q(z). 3/6 https://t.co/43ShyS8BqS', 'Our model discovers different fonts used in early modern documents in a completely unsupervised manner! These are the discovered glyph shapes for Fs and Rs in Leviathan (1651?) 4/6 https://t.co/KTbKblska3', 'The interpretable variables also provide explicit control. We used the estimated offsets, rotation/shear angles and scaling factors to project the noisy images to a fixed size frame resulting in aligned/comparable input images. 5/6 https://t.co/PAMjeykCVj', ""This is joint work with fantastic collaborators @BergKirkpatrick, @ChrisVVarren, @redpony and Max G'Sell. Code coming soon! 6/6""]",20,05,1297
467,241,1280702492314087424,2369610869,David Setton,"My first publication, a paper I worked on with Jenny Greene, just went up tonight. We looked at the incidence of AGN activity in post-starburst galaxies and found that the youngest galaxies are the most likely to host [OIII] AGN It looks like this could be tied to the gas content; another paper coming out soon finds that younger post-starburst also have lots of gas that could feed their supermassive black holes. It also could be that we're catching the tail end of whatever activity quenched these PSBs Whatever it ends up being, really glad to have worked on this with Jenny and I hope y'all enjoy reading it. More to come from SQuIGGLE very soon @dkeidar The cliff's notes are that we're looking at galaxies which recently shut off their star formation and trying to understand why, and we found a link where the galaxies that shut off their star formation most recently also have active black holes at their centers @dkeidar That's close; the black holes are there in all these massive galaxies, but it's the event that caused them to start being fed at high rates that may have shut off the star formation, and the gas feeding them now could not be forming new stars because of the energy from that @dkeidar It's really tough to link causality though because the timescales are so short (relative to the age of galaxies as a whole) and this data doesn't tell the whole story. That's what we're trying to get at in the future: what do these look like spatially resolved, in the radio, etc. @dkeidar Gladly will provide them :) @dkeidar The data for this work is publicly available as part of the Sloan Digital Sky Survey, which has taken on the order of millions of spectra of bright objects in the sky. Our sub sample has existed for ~4 years and we're on the cusp of a few other pubs using it and ancillary data",https://arxiv.org/abs/2007.02967,"We study the incidence of nuclear activity in a large sample of massive post-starburst galaxies at z~0.7 selected from the Sloan Digital Sky Survey, and identify active galactic nuclei based on radio continuum and optical emission lines. Over our mass range of 10^10.6-10^11.5 Msun, the incidence of radio activity is weakly dependent on stellar mass and independent of stellar age, while radio luminosity depends strongly on stellar mass. Optical nuclear activity incidence depends most strongly on the Dn4000 line index, a proxy for stellar age, with an active fraction that is ~ten times higher in the youngest versus oldest post-starburst galaxies. Since a similar trend is seen between age and molecular gas fractions, we argue that, like in local galaxies, the age trend reflects a peak in available fueling rather than feedback from the central black hole on the surrounding galaxy. ","The Role of Active Galactic Nuclei in the Quenching of Massive Galaxies
in the SQuiGGLE Survey",8,"['My first publication, a paper I worked on with Jenny Greene, just went up tonight. We looked at the incidence of AGN activity in post-starburst galaxies and found that the youngest galaxies are the most likely to host [OIII] AGN ', ""It looks like this could be tied to the gas content; another paper coming out soon finds that younger post-starburst also have lots of gas that could feed their supermassive black holes. It also could be that we're catching the tail end of whatever activity quenched these PSBs"", ""Whatever it ends up being, really glad to have worked on this with Jenny and I hope y'all enjoy reading it. More to come from SQuIGGLE very soon"", ""@dkeidar The cliff's notes are that we're looking at galaxies which recently shut off their star formation and trying to understand why, and we found a link where the galaxies that shut off their star formation most recently also have active black holes at their centers"", ""@dkeidar That's close; the black holes are there in all these massive galaxies, but it's the event that caused them to start being fed at high rates that may have shut off the star formation, and the gas feeding them now could not be forming new stars because of the energy from that"", ""@dkeidar It's really tough to link causality though because the timescales are so short (relative to the age of galaxies as a whole) and this data doesn't tell the whole story. That's what we're trying to get at in the future: what do these look like spatially resolved, in the radio, etc."", '@dkeidar Gladly will provide them :)', ""@dkeidar The data for this work is publicly available as part of the Sloan Digital Sky Survey, which has taken on the order of millions of spectra of bright objects in the sky. Our sub sample has existed for ~4 years and we're on the cusp of a few other pubs using it and ancillary data""]",20,07,1836
468,7,736213513484554241,70831441,Soumith Chintala,"""Discovering Causal Signals in Images"". Given an image, can you tell which objects cause others? Our new paper. wdyt? @RahelJhirad why use simulated images when you dont need to... @RahelJhirad i think using videos is an easier proposition. The arrow of time is a strong signal. @RahelJhirad Either ways, training on image-specific data is not needed, as shown in our paper.",http://arxiv.org/abs/1605.08179,"This paper establishes the existence of observable footprints that reveal the ""causal dispositions"" of the object categories appearing in collections of images. We achieve this goal in two steps. First, we take a learning approach to observational causal discovery, and build a classifier that achieves state-of-the-art performance on finding the causal direction between pairs of random variables, given samples from their joint distribution. Second, we use our causal direction classifier to effectively distinguish between features of objects and features of their contexts in collections of static images. Our experiments demonstrate the existence of a relation between the direction of causality and the difference between objects and their contexts, and by the same token, the existence of observable signals that reveal the causal dispositions of objects. ",Discovering Causal Signals in Images,4,"['""Discovering Causal Signals in Images"". Given an image, can you tell which objects cause others? Our new paper. wdyt?', '@RahelJhirad why use simulated images when you dont need to...', '@RahelJhirad i think using videos is an easier proposition. The arrow of time is a strong signal.', '@RahelJhirad Either ways, training on image-specific data is not needed, as shown in our paper.']",16,05,380
469,43,1176229083707072518,69282116,Fotios Petropoulos,"New paper: ""Déjà vu: Forecasting with similarity"" A model-free way to forecasting using cross-learning. When history repeats itself, why relying on statistical models that assume specific data generation processes? @YanfeiKang @f3ngli @fsu_ntua",https://arxiv.org/abs/1909.00221,"Accurate forecasts are vital for supporting the decisions of modern companies. Forecasters typically select the most appropriate statistical model for each time series. However, statistical models usually presume some data generation process while making strong assumptions about the errors. In this paper, we present a novel data-centric approach -- `forecasting with similarity', which tackles model uncertainty in a model-free manner. Existing similarity-based methods focus on identifying similar patterns within the series, i.e., `self-similarity'. In contrast, we propose searching for similar patterns from a reference set, i.e., `cross-similarity'. Instead of extrapolating, the future paths of the similar series are aggregated to obtain the forecasts of the target series. Building on the cross-learning concept, our approach allows the application of similarity-based forecasting on series with limited lengths. We evaluate the approach using a rich collection of real data and show that it yields competitive accuracy in both points forecasts and prediction intervals. ","D\'ej\`a vu: A data-centric forecasting approach through time series
cross-similarity",1,"['New paper: ""Déjà vu: Forecasting with similarity""\nA model-free way to forecasting using cross-learning. When history repeats itself, why relying on statistical models that assume specific data generation processes?\n @YanfeiKang @f3ngli @fsu_ntua']",19,09,251
470,117,1192440088610902016,1073390737868365824,Yoni Brande,"My first first-author paper is up! If you're into JWST direct imaging and nearby cool stars, check out how we did some simulations to figure out the best way to find Jovian planets with MIRI and JWST! And a huge thanks to .@mrtommyb, .@elsisrad, .@AstroEricL, and Josh Schlieder (no Twitter) for being the best advisor/collaborators I could've asked for on this project! @mrtommyb @elsisrad @AstroEricL @JoshuaSchlieder welp, I forgot his @. that's a good start to the day",https://arxiv.org/abs/1911.02022,"The upcoming launch of the James Webb Space Telescope (JWST) will dramatically increase our understanding of exoplanets, particularly through direct imaging. Microlensing and radial velocity surveys indicate that some M-dwarfs host long period giant planets. Some of these planets will likely be just a few parsecs away and a few AU from their host stars, a parameter space that cannot be probed by existing high-contrast imagers. We studied whether the coronagraphs on the Mid-Infrared Instrument on JWST can detect Jovian-type planets around nearby M-dwarfs. For a sample of 27 very nearby M-dwarfs, we simulated a sample of Saturn--Jupiter-mass planets with three atmospheric configurations, three orbital separations, observed in three different filters. We found that the f1550c $15.5\mu$m filter is best suited for detecting Jupiter-like planets. Jupiter-like planets with patchy cloud cover, 2 AU from their star, are detectable at $15.5\mu$m around 14 stars in our sample, while Jupiters with clearer atmospheres are detectable around all stars in the sample. Saturns were most detectable at 10.65 and $11.4\mu$m (f1065c and f1140c filters), but only with cloud-free atmospheres and within 3 pc (6 stars). Surveying all 27 stars would take $<170$ hours of JWST integration time, or just a few hours for a shorter survey of the most favorable targets. There is one potentially detectable known planet in our sample -- GJ~832~b. Observations aimed at detecting this planet should occur in 2024--2026, when the planet is maximally separated from the star. ","The Feasibility of Directly Imaging Nearby Cold Jovian Planets with
MIRI/JWST",3,"[""My first first-author paper is up! If you're into JWST direct imaging and nearby cool stars, check out how we did some simulations to figure out the best way to find Jovian planets with MIRI and JWST!\n\n"", ""And a huge thanks to .@mrtommyb, .@elsisrad, .@AstroEricL, and Josh Schlieder (no Twitter) for being the best advisor/collaborators I could've asked for on this project!"", ""@mrtommyb @elsisrad @AstroEricL @JoshuaSchlieder welp, I forgot his @. that's a good start to the day""]",19,11,479
471,0,1282746555963854849,4633095775,Ravi Tej Akella,"1/ Exciting news! Introducing two new policy gradient methods: (i) Deep Bayesian Quadrature Policy Gradient (DBQPG), and (ii) Uncertainty Aware Policy Gradient (UAPG). Joint work with @kazizzad, Mohammad Ghavamzadeh, @AnimaAnandkumar, and @yisongyue Paper: 2/ While there exist numerous policy gradient (PG) algorithms, most of them use the same PG estimator: Monte-Carlo (MC) method. In DBQPG, we replace MC with Bayesian quadrature (BQ) for estimating the PG. Project Link: Code: 3/ In comparison to MC, DBQPG provides (i) more accurate gradient estimates with a significantly lower variance, (ii) consistent improvement in the sample complexity and average return, and (iii) the uncertainty in policy gradient estimation. 4/ Further, we propose a new policy gradient method, UAPG, that uses the estimation uncertainty of the policy gradient to compute reliable policy updates with robust step-sizes.",https://arxiv.org/abs/2006.15637,"We study the problem of obtaining accurate policy gradient estimates using a finite number of samples. Monte-Carlo methods have been the default choice for policy gradient estimation, despite suffering from high variance in the gradient estimates. On the other hand, more sample efficient alternatives like Bayesian quadrature methods have received little attention due to their high computational complexity. In this work, we propose deep Bayesian quadrature policy gradient (DBQPG), a computationally efficient high-dimensional generalization of Bayesian quadrature, for policy gradient estimation. We show that DBQPG can substitute Monte-Carlo estimation in policy gradient methods, and demonstrate its effectiveness on a set of continuous control benchmarks. In comparison to Monte-Carlo estimation, DBQPG provides (i) more accurate gradient estimates with a significantly lower variance, (ii) a consistent improvement in the sample complexity and average return for several deep policy gradient algorithms, and, (iii) the uncertainty in gradient estimation that can be incorporated to further improve the performance. ",Deep Bayesian Quadrature Policy Optimization,4,"['1/ Exciting news! Introducing two new policy gradient methods: (i) Deep Bayesian Quadrature Policy Gradient (DBQPG), and (ii) Uncertainty Aware Policy Gradient (UAPG). Joint work with @kazizzad, Mohammad Ghavamzadeh, @AnimaAnandkumar, and @yisongyue\nPaper: ', '2/ While there exist numerous policy gradient (PG) algorithms, most of them use the same PG estimator: Monte-Carlo (MC) method. In DBQPG, we replace MC with Bayesian quadrature (BQ) for estimating the PG.\nProject Link: https://t.co/OyJ5eDihge\nCode: https://t.co/M2GBDQ2jYO', '3/ In comparison to MC, DBQPG provides (i) more accurate gradient estimates with a significantly lower variance, (ii) consistent improvement in the sample complexity and average return, and (iii) the uncertainty in policy gradient estimation.', '4/ Further, we propose a new policy gradient method, UAPG, that uses the estimation uncertainty of the policy gradient to compute reliable policy updates with robust step-sizes.']",20,06,923
472,284,1321444476250918912,978500233368760320,Biao Zhang,"[1/2] Happy to share our #WMT2020 paper ""Fast Interleaved Bidirectional Sequence Generation"". We propose IBDecoder that accelerates decoding by ~2x-6x with almost no quality loss. Paper: Code: w/ @iatitov and @RicoSennrich [2/2] IBDecoder combines bidirectional generation with semi-autoregressive modeling, where we show words from the left and right directions are loosely dependent. IBDecoder is compatible with shallow decoder students. With some quality loss, IBDecoder yields 4x-11x speedup! ",https://arxiv.org/abs/2010.14481,"Independence assumptions during sequence generation can speed up inference, but parallel generation of highly inter-dependent tokens comes at a cost in quality. Instead of assuming independence between neighbouring tokens (semi-autoregressive decoding, SA), we take inspiration from bidirectional sequence generation and introduce a decoder that generates target words from the left-to-right and right-to-left directions simultaneously. We show that we can easily convert a standard architecture for unidirectional decoding into a bidirectional decoder by simply interleaving the two directions and adapting the word positions and self-attention masks. Our interleaved bidirectional decoder (IBDecoder) retains the model simplicity and training efficiency of the standard Transformer, and on five machine translation tasks and two document summarization tasks, achieves a decoding speedup of ~2X compared to autoregressive decoding with comparable quality. Notably, it outperforms left-to-right SA because the independence assumptions in IBDecoder are more felicitous. To achieve even higher speedups, we explore hybrid models where we either simultaneously predict multiple neighbouring tokens per direction, or perform multi-directional decoding by partitioning the target sequence. These methods achieve speedups to 4X-11X across different tasks at the cost of <1 BLEU or <0.5 ROUGE (on average). Source code is released at this https URL ",Fast Interleaved Bidirectional Sequence Generation,2,"['[1/2] Happy to share our #WMT2020 paper ""Fast Interleaved Bidirectional Sequence Generation"". We propose IBDecoder that accelerates decoding by ~2x-6x with almost no quality loss. \n\nPaper: \nCode: \n\nw/ @iatitov and @RicoSennrich ', '[2/2] IBDecoder combines bidirectional generation with semi-autoregressive modeling, where we show words from the left and right directions are loosely dependent. IBDecoder is compatible with shallow decoder students. With some quality loss, IBDecoder yields 4x-11x speedup! https://t.co/Oj1qS3AvNb']",20,10,526
473,22,1265563243378085888,2352149191,Mimmo Nardiello,"My 10th paper as 1st author and maybe one of the hardest works I have done so far. Many new candidate worlds orbiting stars in open clusters!đȘ Paper #10 come primo autore, forse uno dei piu` impegnativi. Tanti nuovi candidati pianeti in ammassi aperti!đ This is also my first work as 1st author since when I work at @LAM_Marseille as @CNES fellow! đ @AdrienCoffinet Yes! Pathos 1 was discovered in paper I (Nardiello et al. 2019). An updated list of PATHOS candidates is now available here: ",https://arxiv.org/abs/2005.12281,"The scope of the project ""A PSF-based Approach to TESS High Quality data Of Stellar clusters"" (PATHOS) is the extraction and analysis of high-precision light curves of stars in stellar clusters and young associations for the identification of candidate exoplanets and variable stars. The cutting-edge tools used in this project allow us to measure the real flux of stars in dense fields, minimising the effects due to contamination by neighbour sources. We extracted about 200 000 light curves of stars in 645 open clusters located in the southern ecliptic hemisphere and observed by TESS during the first year of its mission. We searched for transiting signals and we found 33 objects of interest, 11 of them are strong candidate exoplanets. Because of the limited S/N, we did not find any Earth or super-Earth. We identified two Neptune-size planets orbiting stars with $R_{\star}<1.5\,R_{\odot}$, implying a frequency $f_{\star}=1.34 \pm 0.95\,\%$, consistent with the frequency around field stars. The 7 Jupiter candidates around stars with $R_{\star}<\,1.5R_{\odot}$ imply a frequency $f_{\star}=0.19\pm 0.07\,\%$, smaller than in the field. A more complete estimate of the survey completeness and false positive rate is needed to confirm these results. Light curves used in this work will be made available to the astronomical community on the Mikulski Archive for Space Telescope under the project PATHOS. ","A PSF-based Approach to TESS High quality data Of Stellar clusters
(PATHOS) -- II. Search for exoplanets in open clusters of the southern
ecliptic hemisphere and their frequency",3,"['My 10th paper as 1st author and maybe one of the hardest works I have done so far. Many new candidate worlds orbiting stars in open clusters!đȘ\nPaper #10 come primo autore, forse uno dei piu` impegnativi. Tanti nuovi candidati pianeti in ammassi aperti!đ\n', 'This is also my first work as 1st author since when I work at @LAM_Marseille as @CNES fellow! đ', '@AdrienCoffinet Yes! Pathos 1 was discovered in paper I (Nardiello et al. 2019). \nAn updated list of PATHOS candidates is now available here:\nhttps://t.co/tGwboKyTfg']",20,05,504
474,11,1323298445097054208,112053784,Kaley Brauer,"new paper to kick off #supernovaember! Where'd that gold in your jewelry come from? Many people think neutron stars but we're here to say don't discount collapsars (esp when talking about the oldest gold in the universe)! paper in a picture: blatantly stealing #supernovaember hashtag from @sanjanacurtis & @YoniAstro cause the timing was far too good @sanjanacurtis @alexanderpji @annafrebel @mrdrout hahah IKR everyone so quick to forget about CCSN, the disrespect @YoniAstro @sanjanacurtis you're not wrong, it's a big boom @AstroBarker @alexanderpji @annafrebel @mrdrout Called out by Twitter @expwnential @alexanderpji @annafrebel @mrdrout also it's a really rough time and gotta celebrate the little wins when they come, so here's my family when we had a mini publication party! feat. my brother who doesn't drink toasting with oreos @syndamn chocolate creme oreo! definitely good but for those wishing their oreo was more chocolatey, fudge covered classic is the superior choice @QuantumEmanuel @alexanderpji @annafrebel @mrdrout thanks Emanuel!!!",http://arxiv.org/abs/2010.15837,"It is unclear if neutron star mergers can explain the observed r-process abundances of metal-poor stars. Collapsars, defined here as rotating massive stars whose collapse results in a rapidly accreting disk around a black hole that can launch jets, are a promising alternative. We find that we can produce a self-consistent model in which a population of collapsars with stochastic europium yields synthesizes all of the r-process material in metal-poor ([Fe/H] < -2.5) stars. Our model reproduces the observed scatter and evolution of scatter of [Eu/Fe] abundances. We find that if collapsars are the dominant r-process site for metal-poor stars, r-process synthesis may be linked to supernovae that produce long gamma-ray bursts. Our results also allow for the possibility that core-collapse supernovae beyond those that launch gamma-ray bursts also produce r-process material (e.g., potentially a subset of Type Ic-BL supernovae). Furthermore, we identify collapsar jet properties (isotropic energy, engine luminosity, or engine time) which may trace r-process yield and verify that the amount of r-process yield produced per collapsar in our model (~0.07 Msun) is consistent with other independent estimates. In the future, achieving 0.05 dex precision on distribution scatter or a reliable selection function would further constrain our probe of r-process production. Our model would also hold for another prompt r-process site with a power-law yield, and work is needed to determine if, for example, fast-merging neutron stars can also explain abundance scatter. ","Collapsar R-Process Yields Can Reproduce [Eu/Fe] Abundance Scatter in
Metal-Poor Stars",9,"[""new paper to kick off #supernovaember!\n\nWhere'd that gold in your jewelry come from? Many people think neutron stars but we're here to say don't discount collapsars (esp when talking about the oldest gold in the universe)!\n\n\npaper in a picture: "", 'blatantly stealing #supernovaember hashtag from @sanjanacurtis & @YoniAstro cause the timing was far too good', '@sanjanacurtis @alexanderpji @annafrebel @mrdrout hahah IKR everyone so quick to forget about CCSN, the disrespect', ""@YoniAstro @sanjanacurtis you're not wrong, it's a big boom"", '@AstroBarker @alexanderpji @annafrebel @mrdrout Called out by Twitter', '@expwnential @alexanderpji @annafrebel @mrdrout https://t.co/H2mN3aILy4', ""also it's a really rough time and gotta celebrate the little wins when they come, so here's my family when we had a mini publication party! feat. my brother who doesn't drink toasting with oreos https://t.co/3LCtMqMvg0"", '@syndamn chocolate creme oreo! definitely good but for those wishing their oreo was more chocolatey, fudge covered classic is the superior choice', '@QuantumEmanuel @alexanderpji @annafrebel @mrdrout thanks Emanuel!!!']",20,10,1107
475,31,707820004796973056,26716739,Mattias Villani,"Our new paper 'Block-Wise Pseudo-Marginal Metropolis-Hastings' @chris_naesseth yes, blocking the particles/subsample indicators and updating a single block jointly with the original model parameters. @chris_naesseth it is another way of correlating the particles. Less general, but has some interesting advantages.",http://arxiv.org/abs/1603.02485,"The pseudo-marginal (PM) approach is increasingly used for Bayesian inference in statistical models, where the likelihood is intractable but can be estimated unbiasedly. %Examples include random effect models, state-space models and data subsampling in big-data settings. Deligiannidis et al. (2016) show how the PM approach can be made much more efficient by correlating the underlying Monte Carlo (MC) random numbers used to form the estimate of the likelihood at the current and proposed values of the unknown parameters. Their approach greatly speeds up the standard PM algorithm, as it requires a much smaller number of samples or particles to form the optimal likelihood estimate. Our paper presents an alternative implementation of the correlated PM approach, called the block PM, which divides the underlying random numbers into blocks so that the likelihood estimates for the proposed and current values of the parameters only differ by the random numbers in one block. We show that this implementation of the correlated PM can be much more efficient for some specific problems than the implementation in Deligiannidis et al. (2016); for example when the likelihood is estimated by subsampling or the likelihood is a product of terms each of which is given by an integral which can be estimated unbiasedly by randomised quasi-Monte Carlo. Our article provides methodology and guidelines for efficiently implementing the block PM. A second advantage of the the block PM is that it provides a direct way to control the correlation between the logarithms of the estimates of the likelihood at the current and proposed values of the parameters than the implementation in Deligiannidis et al. (2016). We obtain methods and guidelines for selecting the optimal number of samples based on idealized but realistic assumptions. ",The Block Pseudo-Marginal Sampler,3,"[""Our new paper 'Block-Wise Pseudo-Marginal Metropolis-Hastings' "", '@chris_naesseth yes, blocking the particles/subsample indicators and updating a single block jointly with the original model parameters.', '@chris_naesseth it is another way of correlating the particles. Less general, but has some interesting advantages.']",16,03,321
476,92,1491699446970789890,1140222123006472194,Kasper Elm Heintz,A blast from the past! đ„ New paper lead by Andrea Rossi reporting the afterglow properties of the z~6.3 GRB detected last year. Amazing to think that this light has been traveling from a time when the Universe was less than a billion years old. Stay tuned for the next papers in the line reporting the abundances and the IGM neutral fraction in and around this GRB đ„,https://arxiv.org/abs/2202.04544,"We present the discovery of the very energetic GRB 210905A at the high redshift z=6.312 and its luminous X-ray and optical afterglow. We obtained photometric and spectroscopic follow-up in the optical and near-infrared (NIR), covering both the prompt and afterglow emission from a few minutes up to 7.5 Ms after burst. With an isotropic gamma-ray energy of Eiso=1.27x10^54erg, GRB 210905A lies in the top ~7% GRBs in terms of energy released. Its afterglow is among the most luminous ever observed and, in particular, it is one of the most luminous in the optical at t<0.5 d, in the rest frame. The afterglow starts with a shallow evolution that can be explained by energy injection, and is followed by a steeper decay, while the spectral energy distribution is in agreement with slow cooling in a constant-density environment within the standard fireball theory. A jet break at 39+-21 d has been observed in the X-ray light curve; however, it is hidden in the H-band, potentially due to a constant contribution from an unknown component, most likely a foreground intervening galaxy and/or the host galaxy. We derived a half-opening angle of 7.9+-1.6 degrees, the highest ever measured for a z>~6 burst but within the range covered by closer events. The resulting collimation-corrected gamma-ray energy of 10^52erg is also among the highest ever measured. The moderately large half-opening angle argues against recent claims of an inverse dependence of the half-opening angle on the redshift. The total jet energy is likely too large for a standard magnetar, and suggests that the central engine of this burst was a newly formed black hole. Despite the outstanding energetics and luminosity of both GRB 210905A and its afterglow, we demonstrate that they are consistent within 2swith those of less distant bursts, indicating that the powering mechanisms and progenitors do not evolve significantly with redshift. ",A blast from the infant Universe: the very high-z GRB 210905A,2,"['A blast from the past! đ„ New paper lead by Andrea Rossi reporting the afterglow properties of the z~6.3 GRB detected last year. Amazing to think that this light has been traveling from a time when the Universe was less than a billion years old. \n\n', 'Stay tuned for the next papers in the line reporting the abundances and the IGM neutral fraction in and around this GRB đ„']",22,02,374
477,2,914155676871651328,3301521293,Carlos Vega,"I'm glad to share our new paper: ""Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis."" which I'll present during the HPCC 2017 conference in Bangkok, Thailand this December. ",https://arxiv.org/abs/1709.06127,"Traditional data centers are designed with a rigid architecture of fit-for-purpose servers that provision resources beyond the average workload in order to deal with occasional peaks of data. Heterogeneous data centers are pushing towards more cost-efficient architectures with better resource provisioning. In this paper we study the feasibility of using disaggregated architectures for intensive data applications, in contrast to the monolithic approach of server-oriented architectures. Particularly, we have tested a proactive network analysis system in which the workload demands are highly variable. In the context of the dReDBox disaggregated architecture, the results show that the overhead caused by using remote memory resources is significant, between 66\% and 80\%, but we have also observed that the memory usage is one order of magnitude higher for the stress case with respect to average workloads. Therefore, dimensioning memory for the worst case in conventional systems will result in a notable waste of resources. Finally, we found that, for the selected use case, parallelism is limited by memory. Therefore, using a disaggregated architecture will allow for increased parallelism, which, at the same time, will mitigate the overhead caused by remote memory. ","Diluting the Scalability Boundaries: Exploring the Use of Disaggregated
Architectures for High-Level Network Data Analysis",1,"['I\'m glad to share our new paper: ""Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis."" which I\'ll present during the HPCC 2017 conference in Bangkok, Thailand this December.\n']",17,09,252
478,53,1321274937026531329,2930047588,Andrew Vanderburg,"Check out this great new paper from Lizhou Sha (@mentisoasis) confirming and characterizing two new Saturn-mass exoplanets from the @TESSatMIT and @KeplerGO missions! First thing's first, here are the family portraits. EPIC 246193072 b orbits its star every 12 days. It was discovered by K2 observations and confirmed with precise radial velocities from PFS, HARPS, and FEROS: TOI 954 b is a hotter planet, orbiting its star every 3.7 days. It was discovered in the TESS full frame images and confirmed via radial velocity observations from CHIRON, CORALIE, HARPS, PFS, and MINERVA-Australis: These two planets are in an interesting mass range and help us probe some of the questions surrounding giant planets. A major question in the field is why some of the hottest Jupiter-mass exoplanets are inflated, or larger than we would expect based on their mass alone. One way of probing inflation is to see whether planets can be re-inflated by additional heating when their host stars evolve into giants and leave the main sequence. @SKGrunblatt has done a lot of interesting work studying Jupiters that seem to be reinflated. TOI 954 b adds an interesting twist to this tale of planet inflation. Despite orbiting an evolving star, and being close to the reinflation regime, the planet does not seem to be larger than we would expect. So maybe reinflation is more efficient for Jupiters than Saturns. It's of course hard to draw any firm conclusions from just these two planets, but adding them to the exoplanet population will hopefully let us figure out the physics that causes hot Jupiters to be so huge. Finally, @mentisoasis did this work while working at the MIT TESS Science Office, but he has now just started as a grad student at the Univ. of Wisconsin-Madison. We're super excited to have him here!",https://arxiv.org/abs/2010.14436,"We report the discovery of two short-period Saturn-mass planets, one transiting the G subgiant TOI-954 (TIC 44792534, $ V = 10.343 $, $ T = 9.78 $) observed in TESS sectors 4 and 5, and one transiting the G dwarf K2-329 (EPIC 246193072, $ V = 12.70 $, $ K = 10.67 $) observed in K2 campaigns 12 and 19. We confirm and characterize these two planets with a variety of ground-based archival and follow-up observations, including photometry, reconnaissance spectroscopy, precise radial velocity, and high-resolution imaging. Combining all available data, we find that TOI-954 b has a radius of $0.852_{-0.062}^{+0.053} \, R_{\mathrm{J}}$ and a mass of $0.174_{-0.017}^{+0.018} \, M_{\mathrm{J}}$ and is in a 3.68 day orbit, while K2-329 b has a radius of $0.774_{-0.024}^{+0.026} \, R_{\mathrm{J}}$ and a mass of $0.260_{-0.022}^{+0.020} \, M_{\mathrm{J}}$ and is in a 12.46 day orbit. As TOI-954 b is 30 times more irradiated than K2-329 b but more or less the same size, these two planets provide an opportunity to test whether irradiation leads to inflation of Saturn-mass planets and contribute to future comparative studies that explore Saturn-mass planets at contrasting points in their lifetimes. ","TOI-954 b and K2-329 b: Short-Period Saturn-Mass Planets that Test
whether Irradiation Leads to Inflation",8,"['Check out this great new paper from Lizhou Sha (@mentisoasis) confirming and characterizing two new Saturn-mass exoplanets from the @TESSatMIT and @KeplerGO missions! ', ""First thing's first, here are the family portraits. EPIC 246193072 b orbits its star every 12 days. It was discovered by K2 observations and confirmed with precise radial velocities from PFS, HARPS, and FEROS: https://t.co/IdbnwI7FUF"", 'TOI 954 b is a hotter planet, orbiting its star every 3.7 days. It was discovered in the TESS full frame images and confirmed via radial velocity observations from CHIRON, CORALIE, HARPS, PFS, and MINERVA-Australis: https://t.co/b0WDTn0p3p', 'These two planets are in an interesting mass range and help us probe some of the questions surrounding giant planets. A major question in the field is why some of the hottest Jupiter-mass exoplanets are inflated, or larger than we would expect based on their mass alone.', 'One way of probing inflation is to see whether planets can be re-inflated by additional heating when their host stars evolve into giants and leave the main sequence. @SKGrunblatt has done a lot of interesting work studying Jupiters that seem to be reinflated.', 'TOI 954 b adds an interesting twist to this tale of planet inflation. Despite orbiting an evolving star, and being close to the reinflation regime, the planet does not seem to be larger than we would expect. So maybe reinflation is more efficient for Jupiters than Saturns.', ""It's of course hard to draw any firm conclusions from just these two planets, but adding them to the exoplanet population will hopefully let us figure out the physics that causes hot Jupiters to be so huge."", ""Finally, @mentisoasis did this work while working at the MIT TESS Science Office, but he has now just started as a grad student at the Univ. of Wisconsin-Madison. We're super excited to have him here!""]",20,10,1826
479,198,1488950496727429121,882511862524465152,Aleksander Madry,"Can we cast ML predictions as simple functions of individual training inputs? Yes! w/ @andrew_ilyas @smsampark @logan_engstrom @gpoleclerc, we introduce datamodels (), a framework to study how data + algs -> predictions. Blog: (1/6) We trained *hundreds of thousands* of models on random subsets of computer vision datasets using our library FFCV (). We then used this data to fit *linear* models that can successfully predict model outputs. (2/6) We then use datamodels to: (1) Predict data counterfactuals (i.e., what if I remove subset R from the train set?) and find that you can flip model predictions for *over 50%* of test examples on CIFAR-10 by removing only 200 (target-specific) training images (0.4% of total) (3/6) (2) Identify similar training examples to a given test example and use this to find (a non-trivial amount of) train-test leakage in both CIFAR-10 and FMoW datasets. [Some (of many) examples images from the same scene are below.] (4/6) (3) Use datamodels as *feature representation* and employ these embeddings to perform clustering that pinpoints *model-driven* data subpopulations (see a PCA-driven example below) and unlock a suite of graph algorithms for analyzing complex datasets. (5/6) Overall, datamodels are a versatile tool for *model-driven* understanding of the data. The fact that a simple linear model can predict outputs of end-to-end model training suggests there is much to be learned about generalization of DNNs here too. (See Sec. 7 of the paper.) (6/6)",http://arxiv.org/abs/2202.00622,"We present a conceptual framework, datamodeling, for analyzing the behavior of a model class in terms of the training data. For any fixed ""target"" example $x$, training set $S$, and learning algorithm, a datamodel is a parameterized function $2^S \to \mathbb{R}$ that for any subset of $S' \subset S$ -- using only information about which examples of $S$ are contained in $S'$ -- predicts the outcome of training a model on $S'$ and evaluating on $x$. Despite the potential complexity of the underlying process being approximated (e.g., end-to-end training and evaluation of deep neural networks), we show that even simple linear datamodels can successfully predict model outputs. We then demonstrate that datamodels give rise to a variety of applications, such as: accurately predicting the effect of dataset counterfactuals; identifying brittle predictions; finding semantically similar examples; quantifying train-test leakage; and embedding data into a well-behaved and feature-rich representation space. Data for this paper (including pre-computed datamodels as well as raw predictions from four million trained deep neural networks) is available at this https URL . ",Datamodels: Predicting Predictions from Training Data,6,"['Can we cast ML predictions as simple functions of individual training inputs? Yes! w/ @andrew_ilyas @smsampark @logan_engstrom @gpoleclerc, we introduce datamodels (), a framework to study how data + algs -> predictions. Blog: (1/6) ', 'We trained *hundreds of thousands* of models on random subsets of computer vision datasets using our library FFCV (https://t.co/QWUdL50g9i). We then used this data to fit *linear* models that can successfully predict model outputs. (2/6) https://t.co/ITedgWiCQi', 'We then use datamodels to: (1) Predict data counterfactuals (i.e., what if I remove subset R from the train set?) 
and find that you can flip model predictions for *over 50%* of test examples on CIFAR-10 by removing only 200 (target-specific) training images (0.4% of total) (3/6) https://t.co/SbkaAGJffl', '(2) Identify similar training examples to a given test example and use this to find (a non-trivial amount of) train-test leakage in both CIFAR-10 and FMoW datasets. [Some (of many) examples images from the same scene are below.] (4/6) https://t.co/919Ls0xlyJ', '(3) Use datamodels as *feature representation* and employ these embeddings to perform clustering that pinpoints *model-driven* data subpopulations (see a PCA-driven example below) and unlock a suite of graph algorithms for analyzing complex datasets. (5/6) https://t.co/Grbfy3ghc5', 'Overall, datamodels are a versatile tool for *model-driven* understanding of the data. The fact that a simple linear model can predict outputs of end-to-end model training suggests there is much to be learned about generalization of DNNs here too. (See Sec. 7 of the paper.) (6/6)']",22,02,1557
480,136,1227303297880657921,2733642475,Soheil Feizi,"Check out our new work on ""Curse of Dimensionality on Randomized Smoothing"". This is a joint work with @tomgoldsteincs and @umdcs students Aounon Kumar and Alex Levine. Paper: We have three main results (see the sub-tweets): (1) We show that the robustness radius of almost any i.i.d. smoothing distribution to defend against L_p attacks decreases as 1/(d^(1/2-1/p)). Note that unlike the standard case of L_2, for p>2, this decreases with the input dimension d. (2) Using an isotropic Gaussian smoothing as in Cohen @deepcohen , Kolter @zicokolte et al. is almost as good as any other i.i.d. smoothing distributions for p>2 within a constant factor of the robustness radius. The gap becomes tighter by using generalized Gaussian dists (3) We show that for smoothing over an L_{\infty} ball, this dependency to the input dimension d becomes worst as 1/d^(1-1/p) ",https://arxiv.org/abs/2002.03239,"Randomized smoothing, using just a simple isotropic Gaussian distribution, has been shown to produce good robustness guarantees against $\ell_2$-norm bounded adversaries. In this work, we show that extending the smoothing technique to defend against other attack models can be challenging, especially in the high-dimensional regime. In particular, for a vast class of i.i.d.~smoothing distributions, we prove that the largest $\ell_p$-radius that can be certified decreases as $O(1/d^{\frac{1}{2} - \frac{1}{p}})$ with dimension $d$ for $p > 2$. Notably, for $p \geq 2$, this dependence on $d$ is no better than that of the $\ell_p$-radius that can be certified using isotropic Gaussian smoothing, essentially putting a matching lower bound on the robustness radius. When restricted to {\it generalized} Gaussian smoothing, these two bounds can be shown to be within a constant factor of each other in an asymptotic sense, establishing that Gaussian smoothing provides the best possible results, up to a constant factor, when $p \geq 2$. We present experimental results on CIFAR to validate our theory. For other smoothing distributions, such as, a uniform distribution within an $\ell_1$ or an $\ell_\infty$-norm ball, we show upper bounds of the form $O(1 / d)$ and $O(1 / d^{1 - \frac{1}{p}})$ respectively, which have an even worse dependence on $d$. ","Curse of Dimensionality on Randomized Smoothing for Certifiable
Robustness",4,"['Check out our new work on ""Curse of Dimensionality on Randomized Smoothing"". This is a joint work with\xa0@tomgoldsteincs and\xa0@umdcs students\xa0Aounon Kumar and Alex Levine. \nPaper: \nWe have three main results (see the sub-tweets): ', '(1) We show that the robustness radius of almost any i.i.d. smoothing distribution to defend against L_p attacks decreases as 1/(d^(1/2-1/p)). Note that unlike the standard case of L_2, for p>2, this decreases with the input dimension d. https://t.co/Ssdu366X0y', '(2) Using an isotropic Gaussian smoothing as in\xa0Cohen @deepcohen , Kolter \n@zicokolte et al.\xa0is almost as good as any other\xa0i.i.d. smoothing distributions for p>2 within a constant factor of the robustness radius. The gap becomes tighter by using generalized Gaussian dists https://t.co/HwBsTyRL8c', '(3) We show that for smoothing over an L_{\\infty} ball, this dependency to the input dimension d becomes worst as 1/d^(1-1/p) https://t.co/ih2ipb4CMB']",20,02,902
481,57,982956416410505218,131879500,John Ilee,"CO overtone emission is a really useful tracer of small-scale discs around massive YSOs, but why do we only see it in ~25% of spectra? Manfred Mann said it best - we’re just blinded by the light. New paper out now: #Astronomy #Arxiv #Astroph ",https://arxiv.org/abs/1804.01934,"To date, there is no explanation as to why disc-tracing CO first overtone (or `bandhead') emission is not a ubiquitous feature in low- to medium-resolution spectra of massive young stellar objects, but instead is only detected toward approximately 25 per cent of their spectra. In this paper, we investigate the hypothesis that only certain mass accretion rates result in detectable bandhead emission in the near infrared spectra of MYSOs. Using an analytic disc model combined with an LTE model of the CO emission, we find that high accretion rates ($\gtrsim 10^{-4}\,{\rm M}_{\odot}{\mathrm{yr}}^{-1}$) result in large dust sublimation radii, a larger contribution to the $K$-band continuum from hot dust at the dust sublimation radius, and therefore correspondingly lower CO emission with respect to the continuum. On the other hand, low accretion rates ($\lesssim10^{-6}\,{\rm M}_{\odot}{\mathrm{yr}}^{-1}$) result in smaller dust sublimation radii, a correspondingly smaller emitting area of CO, and thus also lower CO emission with respect to the continuum. In general, moderate accretion rates produce the most prominent, and therefore detectable, CO first overtone emission. We compare our findings to a recent near-infrared spectroscopic survey of MYSOs, finding results consistent with our hypothesis. We conclude that the detection rate of CO bandhead emission in the spectra of MYSOs could be the result of MYSOs exhibiting a range of mass accretion rates, perhaps due to the variable accretion suggested by recent multi-epoch observations of these objects. ","Blinded by the light: on the relationship between CO first overtone
emission and mass accretion rate in massive young stellar objects",1,"['CO overtone emission is a really useful tracer of small-scale discs around massive YSOs, but why do we only see it in ~25% of spectra? \n\nManfred Mann said it best - we’re just blinded by the light. \n\nNew paper out now: \n\n#Astronomy #Arxiv #Astroph ']",18,04,259
482,36,1364295810717011974,373525906,Weijie Su,"New paper: In *Federated f-Differential Privacy* (), we proposed a new privacy notion tailored to the setting where the clients ally in the attack. This privacy concept is adapted from f-differential privacy. w/ Qinqing Zheng, @ShuxiaoC, and Qi Long.",https://arxiv.org/abs/2102.11158,"Federated learning (FL) is a training paradigm where the clients collaboratively learn models by repeatedly sharing information without compromising much on the privacy of their local sensitive data. In this paper, we introduce federated $f$-differential privacy, a new notion specifically tailored to the federated setting, based on the framework of Gaussian differential privacy. Federated $f$-differential privacy operates on record level: it provides the privacy guarantee on each individual record of one client's data against adversaries. We then propose a generic private federated learning framework {PriFedSync} that accommodates a large family of state-of-the-art FL algorithms, which provably achieves federated $f$-differential privacy. Finally, we empirically demonstrate the trade-off between privacy guarantee and prediction performance for models trained by {PriFedSync} in computer vision tasks. ",Federated $f$-Differential Privacy,1,"['New paper: In *Federated f-Differential Privacy* (), we proposed a new privacy notion tailored to the setting where the clients ally in the attack. This privacy concept is adapted from f-differential privacy. w/ Qinqing Zheng, @ShuxiaoC, and Qi Long.']",21,02,256
483,102,1182010216314880000,3743366715,Aaron Mueller,"new #emnlp2019 paper with @aryamccarthy, @amuuueller, David Yarowsky, and others --- we quantify what it means to be a basic color term cross-lingually, finding that Berlin & Kay (1969) were *mostly* right, if a little Indo-Euro-centric: @aryamccarthy we also find that we can quantitatively recover the exact order of color term acquisition proposed in B&K using their criteria! Interestingly, none of the criteria alone are enough to recover these; we need all of them ",https://arxiv.org/abs/1910.01531,"There is an extensive history of scholarship into what constitutes a ""basic"" color term, as well as a broadly attested acquisition sequence of basic color terms across many languages, as articulated in the seminal work of Berlin and Kay (1969). This paper employs a set of diverse measures on massively cross-linguistic data to operationalize and critique the Berlin and Kay color term hypotheses. Collectively, the 14 empirically-grounded computational linguistic metrics we design---as well as their aggregation---correlate strongly with both the Berlin and Kay basic/secondary color term partition (gamma=0.96) and their hypothesized universal acquisition sequence. The measures and result provide further empirical evidence from computational linguistics in support of their claims, as well as additional nuance: they suggest treating the partition as a spectrum instead of a dichotomy. ",Modeling Color Terminology Across Thousands of Languages,2,"['new #emnlp2019 paper with @aryamccarthy, @amuuueller, David Yarowsky, and others --- we quantify what it means to be a basic color term cross-lingually, finding that Berlin & Kay (1969) were *mostly* right, if a little Indo-Euro-centric: ', '@aryamccarthy we also find that we can quantitatively recover the exact order of color term acquisition proposed in B&K using their criteria! Interestingly, none of the criteria alone are enough to recover these; we need all of them https://t.co/WPZsHaEpZf']",19,10,491
484,18,1443537952626331655,3087935529,Drew Stommes,"In a new working paper with P. M. Aronow and Fredrik Sävje, we reanalyze all studies published in the top journals in political science that use a regression discontinuity (RD) design. [1/4] We show that the literature demonstrates some pathological behavior consistent with selective reporting of findings. The figure below demonstrates that findings cluster just at or just above the t-statistic threshold of 1.96 (i.e. a p-value of less than or equal to 0.05). [2/4] Reanalysis of these studies using modern, automated methods [see: ] shows that, while the point estimates are relatively stable, uncertainty has been been systematically understated. [3/4] Retrospective power analyses demonstrate that most studies were underpowered to detect all but large effect sizes. We conclude that many published findings using the RD design are exaggerated if not altogether spurious. [4/4] @guygrossman We included filtering by reviewers/editors in the phrase 'selective reporting.' Perhaps 'selective reporting or publishing' would have been clearer. That said, we do explicitly note publication filtering and selective reporting in the body of the paper.",https://arxiv.org/abs/2109.14526,"The regression discontinuity (RD) design offers identification of causal effects under weak assumptions, earning it the position as a standard method in modern political science research. But identification does not necessarily imply that the causal effects can be estimated accurately with limited data. In this paper, we highlight that estimation is particularly challenging with the RD design and investigate how these challenges manifest themselves in the empirical literature. We collect all RD-based findings published in top political science journals from 2009--2018. The findings exhibit pathological features; estimates tend to bunch just above the conventional level of statistical significance. A reanalysis of all studies with available data suggests that researcher's discretion is not a major driver of these pathological features, but researchers tend to use inappropriate methods for inference, rendering standard errors artificially small. A retrospective power analysis reveals that most of these studies were underpowered to detect all but large effects. The issues we uncover, combined with well-documented selection pressures in academic publishing, cause concern that many published findings using the RD design are exaggerated, if not entirely spurious. ","On the reliability of published findings using the regression
discontinuity design in political science",5,"['In a new working paper with P. M. Aronow and Fredrik Sävje, we reanalyze all studies published in the top journals in political science that use a regression discontinuity (RD) design. [1/4]', 'We show that the literature demonstrates some pathological behavior consistent with selective reporting of findings. The figure below demonstrates that findings cluster just at or just above the t-statistic threshold of 1.96 (i.e. a p-value of less than or equal to 0.05). [2/4] https://t.co/AiONuxYIEu', 'Reanalysis of these studies using modern, automated methods [see: https://t.co/EPIW73zx1W] shows that, while the point estimates are relatively stable, uncertainty has been been systematically understated. [3/4]', 'Retrospective power analyses demonstrate that most studies were underpowered to detect all but large effect sizes. We conclude that many published findings using the RD design are exaggerated if not altogether spurious. [4/4]', ""@guygrossman We included filtering by reviewers/editors in the phrase 'selective reporting.' Perhaps 'selective reporting or publishing' would have been clearer. That said, we do explicitly note publication filtering and selective reporting in the body of the paper.""]",21,09,1171
485,101,1305508367083991043,76711005,"Antonio Pedro Ramos, PhD",New working paper: Explaining the Decline of Child Mortality in 44 Developing Countries: A Bayesian Extension of Oaxaca Decomposition Methods We investigate the decline of infant mortality in 42 low and middle income countries (LMIC) using detailed micro data from 84 Demographic and Health Surveys. We estimate infant mortality risk for each infant in our data and develop a novel extension of Oaxaca decomposition + to understand the sources of these changes. We find that the decline in infant mortality is due to a declining propensity for parents with given characteristics to experience the death of an infant rather than due to changes in the distributions of these characteristics + over time. Our results suggest that technical progress and policy health interventions in the form of public goods are the main drivers of the the recent decline in infant mortality in LMIC. #OaxacaDecomposition #epitwitter #poptwitter #DemographicsMatter #Stats,https://arxiv.org/abs/2009.05417,We investigate the decline of infant mortality in 42 low and middle income countries (LMIC) using detailed micro data from 84 Demographic and Health Surveys. We estimate infant mortality risk for each infant in our data and develop a novel extension of Oaxaca decomposition to understand the sources of these changes. We find that the decline in infant mortality is due to a declining propensity for parents with given characteristics to experience the death of an infant rather than due to changes in the distributions of these characteristics over time. Our results suggest that technical progress and policy health interventions in the form of public goods are the main drivers of the the recent decline in infant mortality in LMIC. ,"Explaining the Decline of Child Mortality in 44 Developing Countries: A
Bayesian Extension of Oaxaca Decomposition Methods",5,"['New working paper: Explaining the Decline of Child Mortality in 44 Developing Countries: A Bayesian Extension of Oaxaca Decomposition Methods ', 'We investigate the decline of infant mortality in 42 low and middle income countries (LMIC) using detailed micro data from 84 Demographic and Health Surveys. We estimate infant mortality risk for each infant in our data and develop a novel extension of Oaxaca decomposition +', 'to understand the sources of these changes. We find that the decline in infant mortality is due to a declining propensity for parents with given characteristics to experience the death of an infant rather than due to changes in the distributions of these characteristics +', 'over time. Our results suggest that technical progress and policy health interventions in the form of public goods are the main drivers of the the recent decline in infant mortality in LMIC.', '#OaxacaDecomposition #epitwitter #poptwitter #DemographicsMatter #Stats']",20,09,960
486,152,1309170590901829635,1116002690604130305,Juliette Becker,"The K2-266 system has a unique geometry, in which an ultra-short period (USP) planet resides significantly misaligned to a system of tightly packed inner planets (STIP). In our new paper (), we present two ways this geometry can form. Although we don’t know which hypothesis is correct for K2-266, they’re both possibilities for this system and other systems with similar geometries! One way for this to happen is to have an additional unseen planet in the system. Depending on its exact orbital parameters, such a companion can cause an initially coplanar USP-STIP system to become misaligned. A second option to get this geometry is for the system to assemble while the star is still somewhat young and has a significant quadrupole moment. This, combined with a slight stellar obliquity with respect to the planet-forming disk, can also cause the USP-STIP misalignment. For more details, please check out either the paper () or a short summary I wrote on my website (). Thanks to all my coauthors on this work, @kbatygin, Dan Fabrycky, Fred Adams, @astroplum, Andrew Vanderburg, and @Astro_JRod @ExoCytherean @kbatygin @astroplum @Astro_JRod Thanks! I'll be interested to hear what you think!",https://arxiv.org/abs/2009.10745,"Ultra-short period planets provide a window into the inner edge of the parameter space occupied by planetary orbits. In one particularly intriguing class of multi-planet systems, the ultra-short period planet is flanked by short-period companions, and the outer planets occupy a discernibly distinct dynamical state. In the observational database, this phenomenon is represented by a small number of stars hosting systems of tightly packed co-planar planets as well as an ultra-short period planet, whose orbit of is misaligned relative to the mutual plane of the former. In this work, we explore two different mechanisms that can produce an ultra-short period planet that is misaligned with the rest of its compact planetary system: natural decoupling between the inner and outer system via the stellar quadrupole moment, and decoupling forced by an external companion with finely-tuned orbital parameters. These two processes operate with different timescales, and can thus occur simultaneously. In this work, we use the K2-266 system as an illustrative example to elucidate the dynamics of these two processes, and highlight the types of constraints that may arise regarding the dynamical histories of systems hosting ultra-short period planets. ","The Origin of Systems of Tightly Packed Inner Planets with Misaligned,
Ultra-Short-Period Companions",6,"['The K2-266 system has a unique geometry, in which an ultra-short period (USP) planet resides significantly misaligned to a system of tightly packed inner planets (STIP). In our new paper (), we present two ways this geometry can form. ', 'Although we don’t know which hypothesis is correct for K2-266, they’re both possibilities for this system and other systems with similar geometries!', 'One way for this to happen is to have an additional unseen planet in the system. Depending on its exact orbital parameters, such a companion can cause an initially coplanar USP-STIP system to become misaligned. https://t.co/9IpUxkNa8D', 'A second option to get this geometry is for the system to assemble while the star is still somewhat young and has a significant quadrupole moment. This, combined with a slight stellar obliquity with respect to the planet-forming disk, can also cause the USP-STIP misalignment.', 'For more details, please check out either the paper (https://t.co/5JE6tkcy44) or a short summary I wrote on my website (https://t.co/976xrQ8ZRU). \nThanks to all my coauthors on this work, @kbatygin, Dan Fabrycky, Fred Adams, @astroplum, Andrew Vanderburg, and @Astro_JRod', ""@ExoCytherean @kbatygin @astroplum @Astro_JRod Thanks! I'll be interested to hear what you think!""]",20,09,1226
487,139,1350064428348297216,1349798626483187713,Ivan Esteban,"Paper out! In , Jordi Salvado & myself ask ourselves: why do we always assume ideal gases in cosmology? We study how long-range interactions change the energy density and equation of state (eos). And, what happens if neutrinos self-interact? This plot shows eos as a function of temperature for an interacting system: 1) It is ultrarelativistic (w=1/3) for temperatures well below the particle mass. For neutrinos, this means they would be relativistic longer! 2) Notice the dramatic difference wrt ideal gas [dashed]! When applied to self-interacting neutrinos, 1) There is *no* relevant cosmological neutrino mass bound. #KATRIN could detect something soon! 2) Future surveys as EUCLID could fail to detect neutrino masses. We have also made our cosmo code public! ",https://arxiv.org/abs/2101.05804,"Cosmology is well suited to study the effects of long range interactions due to the large densities in the early Universe. In this article, we explore how the energy density and equation of state of a fermion system diverge from the commonly assumed ideal gas form under the presence of scalar long range interactions with a range much smaller than cosmological scales. In this scenario, ""small""-scale physics can impact our largest-scale observations. As a benchmark, we apply the formalism to self-interacting neutrinos, performing an analysis to present and future cosmological data. Our results show that the current cosmological neutrino mass bound is fully avoided in the presence of a long range interaction, opening the possibility for a laboratory neutrino mass detection in the near future. We also demonstrate an interesting complementarity between neutrino laboratory experiments and the future EUCLID survey. ",Long Range Interactions in Cosmology: Implications for Neutrinos,3,"['Paper out!\nIn , Jordi Salvado & myself ask ourselves: why do we always assume ideal gases in cosmology? \nWe study how long-range interactions change the energy density and equation of state (eos). And, what happens if neutrinos self-interact? ', 'This plot shows eos as a function of temperature for an interacting system:\n1) It is ultrarelativistic (w=1/3) for temperatures well below the particle mass. For neutrinos, this means they would be relativistic longer!\n2) Notice the dramatic difference wrt ideal gas [dashed]! https://t.co/THqijhjsxX', 'When applied to self-interacting neutrinos,\n1) There is *no* relevant cosmological neutrino mass bound. #KATRIN could detect something soon!\n2) Future surveys as EUCLID could fail to detect neutrino masses.\n\nWe have also made our cosmo code public! https://t.co/Oi9oVsApuj']",21,01,793
488,98,1359004813942525953,206818334,Ivan Oseledets,1/n Analyzing GAN convergence is hard. We derived exact rates and oscillations providing complete analysis from Poincare constant and weighted Laplacian. Check our new paper w @vforvalya1 and Artem Babenko. This is the first time the convergence is analyzed in the functional space (not for a fixed NN approx). I really enjoyed working on this paper: the concepts match each other very well! Check how it helps to understand data augs for GAN. We link local GAN training convergence to the fundamental properties of the density.,https://arxiv.org/abs/2102.04448,"Recent work demonstrated the benefits of studying continuous-time dynamics governing the GAN training. However, this dynamics is analyzed in the model parameter space, which results in finite-dimensional dynamical systems. We propose a novel perspective where we study the local dynamics of adversarial training in the general functional space and show how it can be represented as a system of partial differential equations. Thus, the convergence properties can be inferred from the eigenvalues of the resulting differential operator. We show that these eigenvalues can be efficiently estimated from the target dataset before training. Our perspective reveals several insights on the practical tricks commonly used to stabilize GANs, such as gradient penalty, data augmentation, and advanced integration schemes. As an immediate practical benefit, we demonstrate how one can a priori select an optimal data augmentation strategy for a particular generation task. ",Functional Space Analysis of Local GAN Convergence,3,"['1/n Analyzing GAN convergence is hard. We derived exact rates and oscillations providing complete analysis from Poincare constant and weighted Laplacian. Check our new paper w @vforvalya1 and Artem Babenko.', 'This is the first time the convergence is analyzed in the functional space (not for a fixed NN approx). I really enjoyed working on this paper: the concepts match each other very well! Check how it helps to understand data augs for GAN.', 'We link local GAN training convergence to the fundamental properties of the density.']",21,02,535
489,4,1036617608160862208,2427297348,Ashish Mehta,"Our new paper ""Learning End-to-end Autonomous Driving using Guided Auxiliary Supervision"", outlines a Multi-Task Learning from Demonstration framework for end-to-end autonomous driving in urban environments. With @AdithyaSub86 and Anbumani Subramanian. ",https://arxiv.org/abs/1808.10393,"Learning to drive faithfully in highly stochastic urban settings remains an open problem. To that end, we propose a Multi-task Learning from Demonstration (MT-LfD) framework which uses supervised auxiliary task prediction to guide the main task of predicting the driving commands. Our framework involves an end-to-end trainable network for imitating the expert demonstrator's driving commands. The network intermediately predicts visual affordances and action primitives through direct supervision which provide the aforementioned auxiliary supervised guidance. We demonstrate that such joint learning and supervised guidance facilitates hierarchical task decomposition, assisting the agent to learn faster, achieve better driving performance and increases transparency of the otherwise black-box end-to-end network. We run our experiments to validate the MT-LfD framework in CARLA, an open-source urban driving simulator. We introduce multiple non-player agents in CARLA and induce temporal noise in them for realistic stochasticity. ","Learning End-to-end Autonomous Driving using Guided Auxiliary
Supervision",1,"['Our new paper ""Learning End-to-end Autonomous Driving using Guided Auxiliary Supervision"", outlines a Multi-Task Learning from Demonstration framework for end-to-end autonomous driving in urban environments. \nWith @AdithyaSub86 and Anbumani Subramanian.\n\n']",18,08,259
490,65,1275384152808992780,799177412013592577,Rianne van den Berg,"New paper! IDF++: analyzing and improving integer discrete flows for lossless compression, with @agritsenko @m__dehghani @CasperKaae @TimSalimans paper: Our contributions: 1. We show that flows for discrete random variables are more flexible than previously thought. 2. We analyze the loss landscape & gradient bias of integer discrete flows (@emiel_hoogeboom, @jornpeters, @wellingmax). 3. And we introduce modifications to the integer discrete flows architecture that improve its performance on lossless compression --> IDF++. @wojczarnecki @agritsenko @m__dehghani @CasperKaae @TimSalimans Thanks! Nicolas Renaud () helped out and drew the figure in Blender ().",https://arxiv.org/abs/2006.12459,"In this paper we analyse and improve integer discrete flows for lossless compression. Integer discrete flows are a recently proposed class of models that learn invertible transformations for integer-valued random variables. Their discrete nature makes them particularly suitable for lossless compression with entropy coding schemes. We start by investigating a recent theoretical claim that states that invertible flows for discrete random variables are less flexible than their continuous counterparts. We demonstrate with a proof that this claim does not hold for integer discrete flows due to the embedding of data with finite support into the countably infinite integer lattice. Furthermore, we zoom in on the effect of gradient bias due to the straight-through estimator in integer discrete flows, and demonstrate that its influence is highly dependent on architecture choices and less prominent than previously thought. Finally, we show how different architecture modifications improve the performance of this model class for lossless compression, and that they also enable more efficient compression: a model with half the number of flow layers performs on par with or better than the original integer discrete flow model. ","IDF++: Analyzing and Improving Integer Discrete Flows for Lossless
Compression",5,"['New paper! IDF++: analyzing and improving integer discrete flows for lossless compression, with @agritsenko @m__dehghani @CasperKaae @TimSalimans \n\npaper: ', 'Our contributions:\n1. We show that flows for discrete random variables are more flexible than previously thought. https://t.co/4tNKWzXWsS', '2. We analyze the loss landscape & gradient bias of integer discrete flows (@emiel_hoogeboom, @jornpeters, @wellingmax). https://t.co/PgFL2fEbx8', '3. And we introduce modifications to the integer discrete flows architecture that improve its performance on lossless compression --> IDF++. https://t.co/BOXXbyQghK', '@wojczarnecki @agritsenko @m__dehghani @CasperKaae @TimSalimans Thanks! Nicolas Renaud (https://t.co/sgal4B3dgW) helped out and drew the figure in Blender (https://t.co/7QQXR3kZAv).']",20,06,716
491,134,1190267490598711297,1169373448193236992,Pam Vervoort,Hey... wait a minute... this is not a paleoclimate paper! Venturing into new scientific territories and exploring how sensitive Earth's orbital cycles are to the presence of a giant planet like Jupiter. @JontiHorner @ExoCytherean @thealmashow ,https://arxiv.org/abs/1910.14250,"A wealth of Earth-sized exoplanets will be discovered in the coming years, proving a large pool of candidates from which the targets for the search for life beyond the Solar system will be chosen. The target selection process will require the leveraging of all available information in order to maximise the robustness of the target list and make the most productive use of follow-up resources. Here, we present the results of a suite of $n$-body simulations that demonstrate the degree to which the orbital architecture of the Solar system impacts the variability of Earth's orbital elements. By varying the orbit of Jupiter and keeping the initial orbits of the other planets constant, we demonstrate how subtle changes in Solar system architecture could alter the Earth's orbital evolution -- a key factor in the Milankovitch cycles that alter the amount and distribution of solar insolation, thereby driving periodic climate change on our planet. The amplitudes and frequencies of Earth's modern orbital cycles fall in the middle of the range seen in our runs for all parameters considered -- neither unusually fast nor slow, nor large nor small. This finding runs counter to the `Rare Earth' hypothesis, which suggests that conditions on Earth are so unusual that life elsewhere is essentially impossible. Our results highlight how dynamical simulations of newly discovered exoplanetary systems could be used as an additional means to assess the potential targets of biosignature searches, and thereby help focus the search for life to the most promising targets. ",Quantifying the Influence of Jupiter on the Earth's Orbital Cycles,1,"[""Hey... wait a minute... this is not a paleoclimate paper! Venturing into new scientific territories and exploring how sensitive Earth's orbital cycles are to the presence of a giant planet like Jupiter. @JontiHorner @ExoCytherean @thealmashow \n\n""]",19,10,253
492,76,1440222180550197249,204261944,Matthew Kenworthy,"New paper day! @ajbohn discusses ""Unveiling wide-orbit companions to K-type stars in Sco-Cen with Gaia EDR3"" where he selected common proper motion companions (including stars from the YSES survey that has discovered 3 directly imaged exoplanets) /1 @ajbohn Alex noted that one of the brown dwarfs detected in the YSES survey was bright enough to be detected in the GAIA catalogue - which made us wonder about more distant comoving companions, so he searched and found many more! The astrometric precision of GAIA enabled us to test /2 @ajbohn whether they were bound by comparing the projected velocity between the star and the candidate companion with the maximum bound velocity, enabling us to reject unbound companions. The GAIA photometry also allowed us to estimate the mass of the companions too. /3 @ajbohn Looking at 480 K-type stars, there are 163 potential companions to 142 stars including 21 candidate triple systems, and about a dozen brown dwarf candidates. One BD was seen by SPHERE and Gaia, and the SED from SPHERE agrees with the GAIA photometric mass estimate. /4 @ajbohn It was great to see this paper get accepted before @ajbohn's thesis defence tomorrow, and I'm very proud of all of Alex's work on this and the other papers.",https://arxiv.org/abs/2109.09185,"Abbreviated. We aim to identify new low-mass companions to young stars using the astrometric measurements provided by the Gaia space mission and complementary VLT/SPHERE data. We identify companion candidates from a sample of K-type, pre-main sequence stars in the Scorpius Centaurus association using the early version of the third data release of the Gaia space mission. Based on the provided positions, proper motions, and magnitudes, we identify all objects within a predefined radius whose differential proper motions are consistent with a gravitationally bound system. We derive companion masses through comparison with evolutionary tracks. For seven identified companion candidates we use additional data collected with VLT/SPHERE and VLT/NACO to assess the accuracy of the properties of the companions based on Gaia photometry alone. We identify 110 comoving companions that have a companionship likelihood of more than $95\,\%$. We identify ten especially intriguing companions that have masses in the brown dwarf regime down to $20\,M_\mathrm{Jup}$. Our high-contrast imaging data confirm both astrometry and photometric masses derived from Gaia alone. We discover a new brown dwarf companion, TYC 8252-533-1 B, with a projected separation of approximately $570\,\mathrm{au}$ from its Sun-like primary. SED modeling provides a companion mass of $52^{+17}_{-11}\,M_\mathrm{Jup}$. We show that the Gaia database can identify low-mass companions at wide separations from their host stars. For K-type Sco-Cen members Gaia can detect sub-stellar objects at projected separations larger than $300\,\mathrm{au}$ and is sensitivity limited beyond $1,000\,\mathrm{au}$ with a lower mass limit down to $20\,M_\mathrm{Jup}$. A similar analysis of other star-forming regions could significantly enlarge the sample size of such objects and test formation and evolution theories of planetary systems. ","Unveiling wide-orbit companions to K-type stars in Sco-Cen with Gaia
EDR3",5,"['New paper day! @ajbohn discusses ""Unveiling wide-orbit companions to K-type stars in Sco-Cen with Gaia EDR3"" where he selected common proper motion companions (including stars from the YSES survey that has discovered 3 directly imaged exoplanets) /1 ', '@ajbohn Alex noted that one of the brown dwarfs detected in the YSES survey was bright enough to be detected in the GAIA catalogue - which made us wonder about more distant comoving companions, so he searched and found many more! The astrometric precision of GAIA enabled us to test /2 https://t.co/PXpqn3IUwo', '@ajbohn whether they were bound by comparing the projected velocity between the star and the candidate companion with the maximum bound velocity, enabling us to reject unbound companions. The GAIA photometry also allowed us to estimate the mass of the companions too. /3 https://t.co/boXDZEJ7Yt', '@ajbohn Looking at 480 K-type stars, there are 163 potential companions to 142 stars including 21 candidate triple systems, and about a dozen brown dwarf candidates. One BD was seen by SPHERE and Gaia, and the SED from SPHERE agrees with the GAIA photometric mass estimate. /4 https://t.co/uPCnPk5IOm', ""@ajbohn It was great to see this paper get accepted before @ajbohn's thesis defence tomorrow, and I'm very proud of all of Alex's work on this and the other papers.""]",21,09,1283
493,127,1431284942605066248,366380609,Evan Rosenman,"New paper from me, @rina_friedberg, and @BaiocchiMike: ""Robust Designs for Prospective Randomized Trials Surveying Sensitive Topics"". This work emerged out of our experience analyzing a cluster-randomized trial of an empowerment training program deployed to adolescent girls in Nairobi, Kenya. The treatment was intended to reduce the incidence of gender-based violence. When surveying sensitive topics, reporting biases -- e.g. the possibility of underreporting troubling outcomes -- pose a threat to causal inference. We approach the problem under the potential outcomes framework, assuming a binary outcome. We suppose reporting behavior is fixed given the choice of survey and show the joint distribution of ""reporting classes"" (e.g. underreporter, overreporter, truth-teller) and ""response classes"" (e.g. outcome increases, decreases, stays the same) determines the bias exactly. Then, we propose a sensitivity model and an optimization procedure to determine the required sample size for achieving a desired power level, given the worst-case configuration of misreporters. This is a challenging area! Insights from social scientists + local stakeholders are crucial to design the best survey instruments. Rigorous practices must be followed to preserve comfort and anonymity. Then, statisticians can design procedures to help address residual biases. We hope folks find these results interesting and welcome any feedback!",https://arxiv.org/abs/2108.08944,"We consider the problem of designing a prospective randomized trial in which the outcome data will be self-reported, and will involve sensitive topics. Our interest is in misreporting behavior, and how respondents' tendency to under- or overreport a binary outcome might affect the power of the experiment. We model the problem by assuming each individual in our study is a member of one ""reporting class"": a truth-teller, underreporter, overreporter, or false-teller. We show that the joint distribution of reporting classes and ""response classes"" (characterizing individuals' response to the treatment) will exactly define the bias and variance of the causal estimate in our experiment. Then, we propose a novel procedure for deriving sample sizes under the worst-case power corresponding to a given level of misreporting. Our problem is motivated by prior experience implementing a randomized controlled trial of a sexual violence prevention program among adolescent girls in Nairobi, Kenya. ","Robust Designs for Prospective Randomized Trials Surveying Sensitive
Topics",7,"['New paper from me, @rina_friedberg, and @BaiocchiMike: ""Robust Designs for Prospective Randomized Trials Surveying Sensitive Topics"". ', 'This work emerged out of our experience analyzing a cluster-randomized trial of an empowerment training program deployed to adolescent girls in Nairobi, Kenya. The treatment was intended to reduce the incidence of gender-based violence.', 'When surveying sensitive topics, reporting biases -- e.g. the possibility of underreporting troubling outcomes -- pose a threat to causal inference. We approach the problem under the potential outcomes framework, assuming a binary outcome.', 'We suppose reporting behavior is fixed given the choice of survey and show the joint distribution of ""reporting classes"" (e.g. underreporter, overreporter, truth-teller) and ""response classes"" (e.g. outcome increases, decreases, stays the same) determines the bias exactly. https://t.co/saUrrRg92u', 'Then, we propose a sensitivity model and an optimization procedure to determine the required sample size for achieving a desired power level, given the worst-case configuration of misreporters. https://t.co/mCsRmEWxcK', 'This is a challenging area! Insights from social scientists + local stakeholders are crucial to design the best survey instruments. Rigorous practices must be followed to preserve comfort and anonymity. Then, statisticians can design procedures to help address residual biases.', 'We hope folks find these results interesting and welcome any feedback!']",21,08,1448
494,13,968523266997415936,29251447,Tuan Do,"In our new paper, we found that some stars at the Galactic center have very unusual abundances compared to elsewhere in the Milky Way. This indicates a potentially very different chemical enrichment history or supernova rates. It's also the first paper I've written with @jlu_astro and @QuinnKono together!",https://arxiv.org/abs/1802.08270,"We present adaptive-optics assisted near-infrared high-spectral resolution observations of late-type giants in the nuclear star cluster of the Milky Way. The metallicity and elemental abundance measurements of these stars offer us an opportunity to understand the formation and evolution of the nuclear star cluster. In addition, their proximity to the supermassive black hole ($\sim 0.5$ pc) offers a unique probe of the star formation and chemical enrichment in this extreme environment. We observed two stars identified by medium spectral-resolution observations as potentially having very high metallicities. We use spectral-template fitting with the PHOENIX grid and Bayesian inference to simultaneously constrain the overall metallicity, [M/H], alpha-element abundance [$\alpha$/Fe], effective temperature, and surface gravity of these stars. We find that one of the stars has very high metallicity ([M/H] $> 0.6$) and the other is slightly above solar metallicity. Both Galactic center stars have lines from scandium (Sc), vanadium (V), and yttrium (Y) that are much stronger than allowed by the PHOENIX grid. We find, using the spectral synthesis code Spectroscopy Made Easy, that [Sc/Fe] may be an order of magnitude above solar. For comparison, we also observed an empirical calibrator in NGC6791, the highest metallicity cluster known ([M/H] $\sim 0.4$). Most lines are well matched between the calibrator and the Galactic center stars, except for Sc, V, and Y, which confirms that their abundances must be anomalously high in these stars. These unusual abundances, which may be a unique signature of nuclear star clusters, offer an opportunity to test models of chemical enrichment in this region. ","Super-Solar Metallicity Stars in the Galactic Center Nuclear Star
Cluster: Unusual Sc, V, and Y Abundances",2,"['In our new paper, we found that some stars at the Galactic center have very unusual abundances compared to elsewhere in the Milky Way. This indicates a potentially very different chemical enrichment history or supernova rates. \n\n', ""It's also the first paper I've written with @jlu_astro and @QuinnKono together!""]",18,02,314