text (string lengths 64 to 6.93k) |
---|
in Python",3,"[""Interested in learning the design principles and technical decisions that went into PyTorch's new `torch.fx` program transformation framework? Learn all that and more from our new paper on arXiv <LINK>"", ""@Dr_flerken @PyTorch Can you elaborate? I just whipped up a small transform in terms of the `nn.Transformer` module: https://t.co/c7ZDbsIQOa. That works. Would be glad to look into whatever issue you're having"", '@RaineyCode Hi @RaineyCode, let us know any abbreviations/terms that are not easy to understand from the paper and we can help!']",21,12,522 |
3888,189,1275959421399597062,800790709599014912,Francis Williams,Our new paper: Kernels arising from wide nets are a powerful representation for encoding 3D surfaces. They producing qualitatively and quantitatively superior results to both traditional methods like Poisson Reconstruction and neural-net based methods. <LINK> <LINK>,https://arxiv.org/abs/2006.13782,"We present Neural Splines, a technique for 3D surface reconstruction that is based on random feature kernels arising from infinitely-wide shallow ReLU networks. Our method achieves state-of-the-art results, outperforming recent neural network-based techniques and widely used Poisson Surface Reconstruction (which, as we demonstrate, can also be viewed as a type of kernel method). Because our approach is based on a simple kernel formulation, it is easy to analyze and can be accelerated by general techniques designed for kernel-based learning. We provide explicit analytical expressions for our kernel and argue that our formulation can be seen as a generalization of cubic spline interpolation to higher dimensions. In particular, the RKHS norm associated with Neural Splines biases toward smooth interpolants. ",Neural Splines: Fitting 3D Surfaces with Infinitely-Wide Neural Networks,1,['Our new paper: Kernels arising from wide nets are a powerful representation for encoding 3D surfaces. They producing qualitatively and quantitatively superior results to both traditional methods like Poisson Reconstruction and neural-net based methods. <LINK> <LINK>'],20,06,266 |
3889,10,1266263305003216904,77990334,Jari Saramäki,"And now for something completely different: speciation. Our new review paper, accepted to Phil Trans R Soc B, with @InaSatok @simonhmartin @HeikkiHelantera @jonna_kulmuni [TL;DR for network scientists: networks are involved & there is a lot to discover]. <LINK>",http://arxiv.org/abs/2005.13790,"All genes interact with other genes, and their additive effects and epistatic interactions affect an organism's phenotype and fitness. Recent theoretical and empirical work has advanced our understanding of the role of multi-locus interactions in speciation. However, relating different models to one another and to empirical observations is challenging. This review focuses on multi-locus interactions that lead to reproductive isolation (RI) through reduced hybrid fitness. We first review theoretical approaches and show how recent work incorporating a mechanistic understanding of multi-locus interactions recapitulates earlier models, but also makes novel predictions concerning the build-up of RI. These include high variance in the build-up rate of RI among taxa, the emergence of strong incompatibilities producing localised barriers to introgression, and an effect of population size on the build-up of RI. We then review recent experimental approaches to detect multi-locus interactions underlying RI using genomic data. We argue that future studies would benefit from overlapping methods like Ancestry Disequilibrium scans, genome scans of differentiation and analyses of hybrid gene expression. Finally, we highlight a need for further overlap between theoretical and empirical work, and approaches that predict what kind of patterns multi-locus interactions resulting in incompatibilities will leave in genome-wide polymorphism data. ",Multi-locus interactions and the build-up of reproductive isolation,1,"['And now for something completely different: speciation. Our new review paper, accepted to Phil Trans R Soc B, with @InaSatok @simonhmartin @HeikkiHelantera @jonna_kulmuni [TL;DR for network scientists: networks are involved & there is a lot to discover]. <LINK>']",20,05,261 |
3890,118,1447588040487182339,1138012858988617728,Hannes Stärk,"Our new paper is out!⚛️ 3D Infomax improves GNNs for Molecular Property Prediction: <LINK> With @GabriCorso @sacdallago @guennemann @pl219_Cambridge and @dom_beaini @TOSSOUPrudencio from @valence_ai 🤗 Use 3D to improve for molecules with unknown 3D! 👇 1/4 <LINK> I'll present the paper tomorrow in a talk at the @Cambridge_CL AI Seminar: <LINK> at 2:15pm CEST In short: We teach a GNN to reason about the 3D geometry of molecules given only their 2D graphs which improves the GNN's molecular property predictions by ~22%! 2/4 <LINK> Step 1: Use 3D Infomax pre-training to teach a 2D Net to generate latent 3D information (using molecules with known 3D) Step 2: Fine-tune to predict properties of molecules with unknown 3D: the 2D Net still generates implicit 3D and uses it to improve predictions! 3/4 <LINK> For most molecules, there are multiple likely 3D arrangements of the atoms. We found that leveraging more of them can further improve predictions which 3D Infomax achieves. For some pre-training datasets using multiple 3D structures per molecule is essential to improve! 4/4 <LINK> @simonbatzner @GabriCorso @sacdallago @guennemann @pl219_Cambridge @dom_beaini @TOSSOUPrudencio @valence_ai Dankeschön @simonbatzner, it truly makes me happy to hear that coming from you! 🤗 (I might be somewhat of a fan of your work) @abhik1368 @GabriCorso @sacdallago @guennemann @pl219_Cambridge @dom_beaini @TOSSOUPrudencio @valence_ai I don't know what classifies as field-based (explanation appreciated), but still: Used as additional features for the pre-trained GNN: Yes Used during pre-training as features carrying 3D information instead of the conformers: I doubt it works as good, but we have not tried it.",https://arxiv.org/abs/2110.04126,"Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impacts. Including 3D molecular structure as input to learned models improves their performance for many molecular tasks. However, this information is infeasible to compute at the scale required by several real-world applications. We propose pre-training a model to reason about the geometry of molecules given only their 2D molecular graphs. Using methods from self-supervised learning, we maximize the mutual information between 3D summary vectors and the representations of a Graph Neural Network (GNN) such that they contain latent 3D information. During fine-tuning on molecules with unknown geometry, the GNN still generates implicit 3D information and can use it to improve downstream tasks. We show that 3D pre-training provides significant improvements for a wide range of properties, such as a 22% average MAE reduction on eight quantum mechanical properties. Moreover, the learned representations can be effectively transferred between datasets in different molecular spaces. ",3D Infomax improves GNNs for Molecular Property Prediction,6,"['Our new paper is out!⚛️\n3D Infomax improves GNNs for Molecular Property Prediction: <LINK>\n\nWith @GabriCorso @sacdallago @guennemann @pl219_Cambridge and @dom_beaini @TOSSOUPrudencio from @valence_ai 🤗\n\nUse 3D to improve for molecules with unknown 3D! \n👇\n1/4 <LINK>', ""I'll present the paper tomorrow in a talk at the @Cambridge_CL AI Seminar: https://t.co/cx8MJcb2VF at 2:15pm CEST\n\nIn short:\nWe teach a GNN to reason about the 3D geometry of molecules given only their 2D graphs which improves the GNN's molecular property predictions by ~22%!\n2/4 https://t.co/ZpUyH4i9Aj"", 'Step 1: Use 3D Infomax pre-training to teach a 2D Net to generate latent 3D information (using molecules with known 3D)\n\nStep 2: Fine-tune to predict properties of molecules with unknown 3D: the 2D Net still generates implicit 3D and uses it to improve predictions!\n3/4 https://t.co/qBLAVstdGL', 'For most molecules, there are multiple likely 3D arrangements of the atoms. We found that leveraging more of them can further improve predictions which 3D Infomax achieves.\nFor some pre-training datasets using multiple 3D structures per molecule is essential to improve!\n4/4 https://t.co/L3Bh3BLrzh', '@simonbatzner @GabriCorso @sacdallago @guennemann @pl219_Cambridge @dom_beaini @TOSSOUPrudencio @valence_ai Dankeschön @simonbatzner, it truly makes me happy to hear that coming from you! 🤗\n(I might be somewhat of a fan of your work)', ""@abhik1368 @GabriCorso @sacdallago @guennemann @pl219_Cambridge @dom_beaini @TOSSOUPrudencio @valence_ai I don't know what classifies as field-based (explanation appreciated), but still:\n\nUsed as additional features for the pre-trained GNN: Yes\n\nUsed during pre-training as features carrying 3D information instead of the conformers: I doubt it works as good, but we have not tried it.""]",21,10,1708 |
3891,83,1237204046185811968,701797730750898176,Triet Le,"Glad to share that our paper PUMiner: Mining #security posts from Q&A websites using PU learning with @davidhin1, @CroftRoland and Prof. @alibabar @CREST_center accepted in @msrconf. A new #ML way to retrieve #informationsecurity with no negative class. <LINK>",http://arxiv.org/abs/2003.03741,"Security is an increasing concern in software development. Developer Question and Answer (Q&A) websites provide a large amount of security discussion. Existing studies have used human-defined rules to mine security discussions, but these works still miss many posts, which may lead to an incomplete analysis of the security practices reported on Q&A websites. Traditional supervised Machine Learning methods can automate the mining process; however, the required negative (non-security) class is too expensive to obtain. We propose a novel learning framework, PUMiner, to automatically mine security posts from Q&A websites. PUMiner builds a context-aware embedding model to extract features of the posts, and then develops a two-stage PU model to identify security content using the labelled Positive and Unlabelled posts. We evaluate PUMiner on more than 17.2 million posts on Stack Overflow and 52,611 posts on Security StackExchange. We show that PUMiner is effective with the validation performance of at least 0.85 across all model configurations. Moreover, Matthews Correlation Coefficient (MCC) of PUMiner is 0.906, 0.534 and 0.084 points higher than one-class SVM, positive-similarity filtering, and one-stage PU models on unseen testing posts, respectively. PUMiner also performs well with an MCC of 0.745 for scenarios where string matching totally fails. Even when the ratio of the labelled positive posts to the unlabelled ones is only 1:100, PUMiner still achieves a strong MCC of 0.65, which is 160% better than fully-supervised learning. Using PUMiner, we provide the largest and up-to-date security content on Q&A websites for practitioners and researchers. ","PUMiner: Mining Security Posts from Developer Question and Answer Websites with PU Learning",1,"['Glad to share that our paper PUMiner: Mining #security posts from Q&A websites using PU learning with @davidhin1, @CroftRoland and Prof. @alibabar @CREST_center accepted in @msrconf. A new #ML way to retrieve #informationsecurity with no negative class. <LINK>']",20,03,260 |
3892,96,1057558437159206912,869154646694264832,Hugh Salimbeni,"Delighted to share our new paper, Gaussian Process Conditional Density Estimation <LINK>, appearing at NIPS. With @vdutor, @mpd37 and @jameshensman. If you’re a VAE person, you can see this as a conditional VAE with a Gaussian process for the decoder. If you’re a GP person, you can see this as a hybrid between (mutioutput) regression and the Bayesian-GPLVM. We combine observed inputs with latent variables via concatenation. On the GP side, we develop the GPLVM model in a number of ways: natural gradients (which are essential in the minibatch setting), correlated outputs (e.g. for a priori pixel correlations) and probabilistic linear projections of the inputs On the VAE side, we show being Bayesian about the mapping mitigates overfitting and improves test performance, especially in the low data regimes. It was great fun working on this project with the excellent people at @PROWLER_IO ! Particular shout out to @vdutor 👏👏👏",https://arxiv.org/abs/1810.12750,"Conditional Density Estimation (CDE) models deal with estimating conditional distributions. The conditions imposed on the distribution are the inputs of the model. CDE is a challenging task as there is a fundamental trade-off between model complexity, representational capacity and overfitting. In this work, we propose to extend the model's input with latent variables and use Gaussian processes (GP) to map this augmented input onto samples from the conditional distribution. Our Bayesian approach allows for the modeling of small datasets, but we also provide the machinery for it to be applied to big data using stochastic variational inference. Our approach can be used to model densities even in sparse data regions, and allows for sharing learned structure between conditions. We illustrate the effectiveness and wide-reaching applicability of our model on a variety of real-world problems, such as spatio-temporal density estimation of taxi drop-offs, non-Gaussian noise modeling, and few-shot learning on omniglot images. ",Gaussian Process Conditional Density Estimation,6,"['Delighted to share our new paper, Gaussian Process Conditional Density Estimation <LINK>, appearing at NIPS. With @vdutor, @mpd37 and @jameshensman.', 'If you’re a VAE person, you can see this as a conditional VAE with a Gaussian process for the decoder.', 'If you’re a GP person, you can see this as a hybrid between (mutioutput) regression and the Bayesian-GPLVM. We combine observed inputs with latent variables via concatenation.', 'On the GP side, we develop the GPLVM model in a number of ways: natural gradients (which are essential in the minibatch setting), correlated outputs (e.g. for a priori pixel correlations) and probabilistic linear projections of the inputs', 'On the VAE side, we show being Bayesian about the mapping mitigates overfitting and improves test performance, especially in the low data regimes.', 'It was great fun working on this project with the excellent people at @PROWLER_IO ! Particular shout out to @vdutor 👏👏👏']",18,10,933 |
3893,7,1235194511170719744,448941996,Indro Spinelli,New (first for me) paper on graph neural network with Simone Scardapane and Aurelio Uncini. <LINK> Our goal was to find the best number of propagation steps for each node in the graph. #geometricdeeplearning #graphneuralnetworks Edit: @s_scardapane has joined twitter!,https://arxiv.org/abs/2002.10306,"Graph convolutional networks (GCNs) are a family of neural network models that perform inference on graph data by interleaving vertex-wise operations and message-passing exchanges across nodes. Concerning the latter, two key questions arise: (i) how to design a differentiable exchange protocol (e.g., a 1-hop Laplacian smoothing in the original GCN), and (ii) how to characterize the trade-off in complexity with respect to the local updates. In this paper, we show that state-of-the-art results can be achieved by adapting the number of communication steps independently at every node. In particular, we endow each node with a halting unit (inspired by Graves' adaptive computation time) that after every exchange decides whether to continue communicating or not. We show that the proposed adaptive propagation GCN (AP-GCN) achieves superior or similar results to the best proposed models so far on a number of benchmarks, while requiring a small overhead in terms of additional parameters. We also investigate a regularization term to enforce an explicit trade-off between communication and accuracy. The code for the AP-GCN experiments is released as an open-source library. ",Adaptive Propagation Graph Convolutional Network,2,"['New (first for me) paper on graph neural network with Simone Scardapane and Aurelio Uncini.\n\n<LINK>\n\nOur goal was to find the best number of propagation steps for each node in the graph. #geometricdeeplearning #graphneuralnetworks', 'Edit: @s_scardapane has joined twitter!']",20,02,268 |
3894,272,1412922727145435136,504239269,Vu Nguyen,1⃣Tuning RL hypers is expensive 2⃣Do you want to optimize them without retraining the whole agent each time? 3⃣Hypers can be continuous or mixed categorical-continuous? We propose to optimize mixed categorical-continuous hypers on-the-fly for AutoRL 👉<LINK> <LINK>,https://arxiv.org/abs/2106.15883,"Despite a series of recent successes in reinforcement learning (RL), many RL algorithms remain sensitive to hyperparameters. As such, there has recently been interest in the field of AutoRL, which seeks to automate design decisions to create more general algorithms. Recent work suggests that population based approaches may be effective AutoRL algorithms, by learning hyperparameter schedules on the fly. In particular, the PB2 algorithm is able to achieve strong performance in RL tasks by formulating online hyperparameter optimization as time varying GP-bandit problem, while also providing theoretical guarantees. However, PB2 is only designed to work for continuous hyperparameters, which severely limits its utility in practice. In this paper we introduce a new (provably) efficient hierarchical approach for optimizing both continuous and categorical variables, using a new time-varying bandit algorithm specifically designed for the population based training regime. We evaluate our approach on the challenging Procgen benchmark, where we show that explicitly modelling dependence between data augmentation and other hyperparameters improves generalization. ","Tuning Mixed Input Hyperparameters on the Fly for Efficient Population Based AutoRL",1,['1⃣Tuning RL hypers is expensive \n2⃣Do you want to optimize them without retraining the whole agent each time? \n3⃣Hypers can be continuous or mixed categorical-continuous?\n\nWe propose to optimize mixed categorical-continuous hypers on-the-fly for AutoRL\n\n👉<LINK> <LINK>'],21,06,264 |
3895,15,1199160199887507456,114562472,Prof. Emily Levesque 🤓✨🔭📚,"New paper from the @UW Massive Stars group accepted to ApJ! Led by @KathrynNeugent with @MassiveStarGuy, this one derives the luminosity function of red supergiants in M31 & uses it to test the mass loss rates used in current models. Check it out on arXiv! <LINK>",https://arxiv.org/abs/1911.10638,"The mass-loss rates of red supergiant stars (RSGs) are poorly constrained by direct measurements, and yet the subsequent evolution of these stars depends critically on how much mass is lost during the RSG phase. In 2012 the Geneva evolutionary group updated their mass-loss prescription for RSGs with the result that a 20 solar mass star now loses 10x more mass during the RSG phase than in the older models. Thus, higher mass RSGs evolve back through a second yellow supergiant phase rather than exploding as Type II-P supernovae, in accord with recent observations (the so-called ""RSG Problem""). Still, even much larger mass-loss rates during the RSG phase cannot be ruled out by direct measurements of their current dust-production rates, as such mass-loss is episodic. Here we test the models by deriving a luminosity function for RSGs in the nearby spiral galaxy M31 which is sensitive to the total mass loss during the RSG phase. We carefully separate RSGs from asymptotic giant branch stars in the color-magnitude diagram following the recent method exploited by Yang and collaborators in their Small Magellanic Cloud studies. Comparing our resulting luminosity function to that predicted by the evolutionary models shows that the new prescription for RSG mass-loss does an excellent job of matching the observations, and we can readily rule out significantly larger values. ",The Luminosity Function of Red Supergiants in M31,1,"['New paper from the @UW Massive Stars group accepted to ApJ! Led by @KathrynNeugent with @MassiveStarGuy, this one derives the luminosity function of red supergiants in M31 & uses it to test the mass loss rates used in current models. Check it out on arXiv! <LINK>']",19,11,263 |
3896,113,1169145914872778752,179893276,Sinan Kalkan,"New paper with @KemLoksz, @camcanbaris and @eakbas2 on Imbalance Problems in Object Detection, which provides a taxonomy, identifies open challenges as well as introduces new imbalance problems. A must read for people working in object detection. <LINK>",https://arxiv.org/abs/1909.00169,"In this paper, we present a comprehensive review of the imbalance problems in object detection. To analyze the problems in a systematic manner, we introduce a problem-based taxonomy. Following this taxonomy, we discuss each problem in depth and present a unifying yet critical perspective on the solutions in the literature. In addition, we identify major open issues regarding the existing imbalance problems as well as imbalance problems that have not been discussed before. Moreover, in order to keep our review up to date, we provide an accompanying webpage which catalogs papers addressing imbalance problems, according to our problem-based taxonomy. Researchers can track newer studies on this webpage available at: this https URL . ",Imbalance Problems in Object Detection: A Review,1,"['New paper with @KemLoksz, @camcanbaris and @eakbas2 on Imbalance Problems in Object Detection, which provides a taxonomy, identifies open challenges as well as introduces new imbalance problems. A must read for people working in object detection. <LINK>']",19,09,253 |
3897,47,1029723584942329857,17797390,Sharad Goel,"In a new review paper, @scorbettdavies and I summarize the recent literature on fair machine learning, describe critical limitations in the foundation of fair ML, and identify promising directions to advance the field. We'd love to hear your comments! <LINK>",https://arxiv.org/abs/1808.00023,"The nascent field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last several years, three formal definitions of fairness have gained prominence: (1) anti-classification, meaning that protected attributes---like race, gender, and their proxies---are not explicitly used to make decisions; (2) classification parity, meaning that common measures of predictive performance (e.g., false positive and false negative rates) are equal across groups defined by the protected attributes; and (3) calibration, meaning that conditional on risk estimates, outcomes are independent of protected attributes. Here we show that all three of these fairness definitions suffer from significant statistical limitations. Requiring anti-classification or classification parity can, perversely, harm the very groups they were designed to protect; and calibration, though generally desirable, provides little guarantee that decisions are equitable. In contrast to these formal fairness criteria, we argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce. Such a strategy, while not universally applicable, often aligns well with policy objectives; notably, this strategy will typically violate both anti-classification and classification parity. In practice, it requires significant effort to construct suitable risk estimates. One must carefully define and measure the targets of prediction to avoid retrenching biases in the data. But, importantly, one cannot generally address these difficulties by requiring that algorithms satisfy popular mathematical formalizations of fairness. By highlighting these challenges in the foundation of fair machine learning, we hope to help researchers and practitioners productively advance the area. ","The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning",1,"[""In a new review paper, @scorbettdavies and I summarize the recent literature on fair machine learning, describe critical limitations in the foundation of fair ML, and identify promising directions to advance the field. We'd love to hear your comments! <LINK>""]",18,08,258 |
3898,37,984842946242461696,157110062,Shirin Nilizadeh,"In our new @icwsm paper, we found hate instigators target more popular and high profile Twitter users, and that participating in hate speech is associated with greater online visibility. <LINK> with @mai_elsherief, Dana Nguyen, @gio, and Elizabeth Belding The personality analysis showed both groups have eccentric personality facets that differ from the general Twitter population. <LINK> with @mai_elsherief, Dana Nguyen, @gio and Elizabeth Belding",http://arxiv.org/abs/1804.04649,"While social media has become an empowering agent to individual voices and freedom of expression, it also facilitates anti-social behaviors including online harassment, cyberbullying, and hate speech. In this paper, we present the first comparative study of hate speech instigators and target users on Twitter. Through a multi-step classification process, we curate a comprehensive hate speech dataset capturing various types of hate. We study the distinctive characteristics of hate instigators and targets in terms of their profile self-presentation, activities, and online visibility. We find that hate instigators target more popular and high profile Twitter users, and that participating in hate speech can result in greater online visibility. We conduct a personality analysis of hate instigators and targets and show that both groups have eccentric personality facets that differ from the general Twitter population. Our results advance the state of the art of understanding online hate speech engagement. ",Peer to Peer Hate: Hate Speech Instigators and Their Targets,2,"['In our new @icwsm paper, we found hate instigators target more popular and high profile Twitter users, and that participating in hate speech is associated with greater online visibility. <LINK> with @mai_elsherief, Dana Nguyen, @gio, and Elizabeth Belding', 'The personality analysis showed both groups have eccentric personality facets that differ from the general Twitter population. https://t.co/tZZSzjfkSA with @mai_elsherief, Dana Nguyen, @gio and Elizabeth Belding']",18,04,450 |
3899,116,1489529011462156288,3358932880,Joschka Roffe,"New paper on the arXiv: Bias-tailored quantum LDPC codes, w/ Larry Cohen, @armanda_oq, @daryuschandra and @earltcampbell <LINK> 🧵👇(1/n) Often, in QEC theory, we like to study things in terms of depolarising noise where px=py=pz. However, in the real world, it will usually be the case that the strength of one error types dominates over the others. (2/n) Last year Ataides et al. showed that a modified form of the surface code, the XZZX code, can take advantage of biased-noise to significantly improve code performance <LINK> (3/n) In this project, we set out to extend the idea of bias-tailoring to more general quantum LDPC codes (which promise much lower overheads than the surface code). To do this, we first looked to unpick exactly what allows the XZZX code to respond so well to biased noise... (4/n) Essentially, this boils down to two modifications over the surface code: 1) A redefinition of the stabilisers that ensures the XZZX code decouples to sets of repetition codes in the infinite-bias limit; 2) Twisted boundary checks that lead to longer logical operators... (5/n) We discovered that both characteristics of the XZZX code -- stabiliser redefinition and boundary twists -- arise naturally from a modified form of Pantaleev and Kalachev's lifted product (of recent asymptotically good d~n scaling fame)... (6/n) This modified form, which we've coined the bias-tailored lifted product, is obtained from the standard lifted product via a Clifford transformation on a subset of the qubits. The bias-tailored lifted product can be applied to any pair of quasi-cyclic LDPC codes to... (7/n) construct quantum LDPC codes that inherit the high-bias performance of the XZZX code. Pictured is a decoding plot of a bias-tailored lifted product code with parameters [[416,16,d~20]] under increasing X-bias... (8/n) <LINK> The code construction scripts and decoding simulation routines can be found on the Github repo for this project. <LINK>",https://arxiv.org/abs/2202.01702,"Bias-tailoring allows quantum error correction codes to exploit qubit noise asymmetry. Recently, it was shown that a modified form of the surface code, the XZZX code, exhibits considerably improved performance under biased noise. In this work, we demonstrate that quantum low density parity check codes can be similarly bias-tailored. We introduce a bias-tailored lifted product code construction that provides the framework to expand bias-tailoring methods beyond the family of 2D topological codes. We present examples of bias-tailored lifted product codes based on classical quasi-cyclic codes and numerically assess their performance using a belief propagation plus ordered statistics decoder. Our Monte Carlo simulations, performed under asymmetric noise, show that bias-tailored codes achieve several orders of magnitude improvement in their error suppression relative to depolarising noise. ",Bias-tailored quantum LDPC codes,9,"['New paper on the arXiv: Bias-tailored quantum LDPC codes, w/ Larry Cohen, @armanda_oq, @daryuschandra and @earltcampbell <LINK> 🧵👇(1/n)', 'Often, in QEC theory, we like to study things in terms of depolarising noise where px=py=pz. However, in the real world, it will usually be the case that the strength of one error types dominates over the others. (2/n)', 'Last year Ataides et al. \nshowed that a modified form of the surface code, the XZZX code, can take advantage of biased-noise to significantly improve code performance https://t.co/tKeA2ktjO3 (3/n)', 'In this project, we set out to extend the idea of bias-tailoring to more general quantum LDPC codes (which promise much lower overheads than the surface code). To do this, we first looked to unpick exactly what allows the XZZX code to respond so well to biased noise... (4/n)', 'Essentially, this boils down to two modifications over the surface code: 1) A redefinition of the stabilisers that ensures the XZZX code decouples to sets of repetition codes in the infinite-bias limit; 2) Twisted boundary checks that lead to longer logical operators... (5/n)', ""We discovered that both characteristics of the XZZX code -- stabiliser redefinition and boundary twists -- arise naturally from a modified form of Pantaleev and Kalachev's lifted product (of recent asymptotically good d~n scaling fame)... (6/n)"", ""This modified form, which we've coined the bias-tailored lifted product, is obtained from the standard lifted product via a Clifford transformation on a subset of the qubits. The bias-tailored lifted product can be applied to any pair of quasi-cyclic LDPC codes to... (7/n)"", 'construct quantum LDPC codes that inherit the high-bias performance of the XZZX code. Pictured is a decoding plot of a bias-tailored lifted product code with parameters [[416,16,d~20]] under increasing X-bias... (8/n) https://t.co/ZvL02A91hL', 'The code construction scripts and decoding simulation routines can be found on the Github repo for this project.\nhttps://t.co/xNJw8sUyYH']",22,02,1950 |
3900,180,1399901662857285633,15327263,Carl-Johan Haster,"New paper on the ArXiv today! Led by Tom Callister @FlatironCCA, and with Ken Ng from @MIT_Physics, @sasomao and @farrwill. <LINK> We looked for correlations between the parameters describing the population of Binary Black Holes observed by @LIGO and @ego_virgo. And we found a correlation! Looking at the ratio between the black hole masses, and comparing this against the effective inspiral black hole spin of the binary, we found that more unequal mass binaries have larger effective spins... Given the standard description of binary black holes, how they're thought to be formed in the Universe, and how we're able to observe the gravitational waves they emit we're not necessarily surprised that there are correlations between these two parameters. We are however somewhat surprised about the direction of the correlation. So ping to all Black Hole fans to start thinking of exciting ways to explain what we've found! I should also say that we've been careful to check that the correlation really comes from the data (and isn't ""just caused"" by some bug/oversight in our analysis), and for all we can say the correlation is real and sufficiently statistically significant to be interesting! And as usual, we're awaiting more Binary Black Hole observations to see how well our model holds up agains more observational data! @CraigCahillane <LINK>",https://arxiv.org/abs/2106.00521,"Hierarchical analysis of the binary black hole (BBH) detections by the Advanced LIGO and Virgo detectors has offered an increasingly clear picture of their mass, spin, and redshift distributions. Fully understanding the formation and evolution of BBH mergers will require not just the characterization of these marginal distributions, though, but the discovery of any correlations that exist between the properties of BBHs. Here, we hierarchically analyze the ensemble of BBHs discovered by the LIGO and Virgo with a model that allows for intrinsic correlations between their mass ratios $q$ and effective inspiral spins $\chi_\mathrm{eff}$. At $98.7\%$ credibility, we find that the mean of the $\chi_\mathrm{eff}$ distribution varies as a function of $q$, such that more unequal-mass BBHs exhibit systematically larger $\chi_\mathrm{eff}$. We find Bayesian odds ratio of $10.5$ in favor of a model that allows for such a correlation over one that does not. Finally, we use simulated signals to verify that our results are robust against degeneracies in the measurements of $q$ and $\chi_\mathrm{eff}$ for individual events. While many proposed astrophysical formation channels predict some degree correlation between spins and mass ratio, these predicted correlations typically act in an opposite sense to the trend we observationally identify in the data. ","Who Ordered That? Unequal-Mass Binary Black Hole Mergers Have Larger Effective Spins",7,"['New paper on the ArXiv today! Led by Tom Callister @FlatironCCA, and with Ken Ng from @MIT_Physics, @sasomao and @farrwill.\n<LINK>\nWe looked for correlations between the parameters describing the population of Binary Black Holes observed by @LIGO and @ego_virgo.', 'And we found a correlation!\nLooking at the ratio between the black hole masses, and comparing this against the effective inspiral black hole spin of the binary, we found that more unequal mass binaries have larger effective spins...', ""Given the standard description of binary black holes, how they're thought to be formed in the Universe, and how we're able to observe the gravitational waves they emit we're not necessarily surprised that there are correlations between these two parameters."", ""We are however somewhat surprised about the direction of the correlation.\nSo ping to all Black Hole fans to start thinking of exciting ways to explain what we've found!"", 'I should also say that we\'ve been careful to check that the correlation really comes from the data (and isn\'t ""just caused"" by some bug/oversight in our analysis), and for all we can say the correlation is real and sufficiently statistically significant to be interesting!', ""And as usual, we're awaiting more Binary Black Hole observations to see how well our model holds up agains more observational data!"", '@CraigCahillane https://t.co/hGWlq7yzt2']",21,06,1350 |
3901,88,1425437801785536513,748850869366468609,"Rishi Paudel, PhD","Our new paper “Simultaneous Multiwavelength Flare Observations of EV Lacertae’ is now available in <LINK>. It has been accepted for publication in the Astrophysical Journal. I am very grateful to all my collaborators for their great support, guidance and help. In this paper, we studied flares on EV Lac using simultaneous multiwavelength data obtained by TESS, Swift, NICER, LCOGT and UH88.",https://arxiv.org/abs/2108.04753,"We present the first results of our ongoing project conducting simultaneous multiwavelength observations of flares on nearby active M dwarfs. We acquired data of the nearby dM3.5e star EV Lac using 5 different observatories: NASA's Transiting Exoplanet Survey Satellite (TESS), NASA's Neil Gehrels Swift Observatory (\textit{Swift}), NASA's Neutron Interior Composition Explorer (NICER), the University of Hawaii 2.2-m telescope (UH88) and the Las Cumbres Observatory Global Telescope (LCOGT) Network. During the $\sim$25 days of TESS observations, we acquired three simultaneous UV/X-ray observations using \textit{Swift} that total $\sim$18 ks, 21 simultaneous epochs totaling $\sim$98 ks of X-ray data using NICER, one observation ($\sim$ 3 hours) with UH88, and one observation ($\sim$ 3 hours) with LCOGT. We identified 56 flares in the TESS light curve with estimated energies in the range log $E_{\rm T}$ (erg) = (30.5 - 33.2), nine flares in the \textit{Swift} UVM2 light curve with estimated energies in the range log $E_{UV}$ (erg) = (29.3 - 31.1), 14 flares in the NICER light curve with estimated minimum energies in the range log $E_{N}$ (erg) = (30.5 - 32.3), and 1 flare in the LCOGT light curve with log $E_{L}$ (erg) = 31.6. We find that the flare frequency distributions (FFDs) of TESS and NICER flares have comparable slopes, $\beta_{T}$ = -0.67$\pm$0.09 and $\beta_{N}$ = -0.65$\pm$0.19, and the FFD of UVOT flares has a shallower slope ($\beta_{U}$ = -0.38$\pm$0.13). Furthermore, we do not find conclusive evidence for either the first ionization potential (FIP) or the inverse FIP effect during coronal flares on EV Lac. ",Simultaneous Multiwavelength Flare Observations of EV Lacertae,2,"['Our new paper “Simultaneous Multiwavelength Flare Observations of EV Lacertae’ is now available in <LINK>. It has been accepted for publication in the Astrophysical Journal. I am very grateful to all my collaborators for their great support, guidance and help.', 'In this paper, we studied flares on EV Lac using simultaneous multiwavelength data obtained by TESS, Swift, NICER, LCOGT and UH88.']",21,08,391 |
3902,3,1247595780573200389,384900803,Shantanu Basu,"Thrilled abt our new paper on star formation. The solutions with a counter-rotating disk, tiny disk, or nonexistent disk (direct collapse) are unique outcomes that are realized in collapse from magnetically-dominated clouds. <LINK> @westernuPhysAst @KyushuUniv_EN",https://arxiv.org/abs/2003.03078,"The accretion phase of star formation is investigated in magnetically-dominated clouds that have an initial subcritical mass-to-flux ratio. We employ nonideal magnetohydrodynamic simulations that include ambipolar diffusion and ohmic dissipation. During the early prestellar phase the mass-to-flux ratio rises toward the critical value for collapse, and during this time the angular momentum of the cloud core is reduced significantly by magnetic braking. Once a protostar is formed in the core, the accretion phase is characterized by the presence of a small amount of angular momentum but a large amount of magnetic flux in the near-protostellar environment. The low angular momentum leads to a very small (or even nonexistent) disk and weak outflow, while the large magnetic flux can lead to an interchange instability that rapidly removes flux from the central region. The effective magnetic braking in the early collapse phase can even lead to a counter-rotating disk and outflow, in which the rotation direction of the disk and outflow is opposite to that of the infalling envelope. The solutions with a counter-rotating disk, tiny disk, or nonexistent disk (direct collapse) are unique outcomes that are realized in collapse from magnetically-dominated clouds with an initial subcritical mass-to-flux ratio. ","Different Modes of Star Formation II: Gas Accretion Phase of Initially Subcritical Star-Forming Clouds",1,"['Thrilled abt our new paper on star formation. The solutions with a counter-rotating disk, tiny disk, or nonexistent disk (direct collapse) are unique outcomes that are realized in collapse from magnetically-dominated clouds. <LINK> @westernuPhysAst @KyushuUniv_EN']",20,03,263 |
3903,107,1503547971815813121,994778954241380352,Nick Mayhall,2nd paper on pulse-level VQE on the arxiv! <LINK> Ayush and Chenxu explore the minimal state preparation time using quantum optimal control - discover a new way to speed up state preparation! @AyushAsthana92 @see_quantum @oinamslair @VT_Science @Chenxu_liu_Phd (just in time for Ayush to talk about it tomorrow at #apsmarch ),https://arxiv.org/abs/2203.06818,"Quantum simulation on NISQ devices is severely limited by short coherence times. A variational pulse-shaping algorithm known as ctrl-VQE was recently proposed to address this issue by eliminating the need for parameterized quantum circuits, which lead to long state preparation times. Here, we find the shortest possible pulses for ctrl-VQE to prepare target molecular wavefunctions for a given device Hamiltonian describing coupled transmon qubits. We find that the time-optimal pulses that do this have a bang-bang form consistent with Pontryagin's maximum principle. We further investigate how the minimal state preparation time is impacted by truncating the transmons to two versus more levels. We find that leakage outside the computational subspace (something that is usually considered problematic) speeds up the state preparation, further reducing device coherence-time demands. This speedup is due to an enlarged solution space of target wavefunctions and to the appearance of additional channels connecting initial and target states. ","Minimizing state preparation times in pulse-level variational molecular simulations",2,"['2nd paper on pulse-level VQE on the arxiv! <LINK> Ayush and Chenxu explore the minimal state preparation time using quantum optimal control - discover a new way to speed up state preparation! @AyushAsthana92 @see_quantum @oinamslair @VT_Science @Chenxu_liu_Phd', '(just in time for Ayush to talk about it tomorrow at #apsmarch )']",22,03,325 |
3904,88,1113994344091070465,367297219,Melanie Mitchell,"New paper from my research group: ""Revisiting Visual Grounding"". <LINK>. We investigate whether certain visual grounding methods are actually learning what has been claimed. To appear in Proceedings of Workshop on Shortcomings in Vision and Language, NAACL-2019",https://arxiv.org/abs/1904.02225,"We revisit a particular visual grounding method: the ""Image Retrieval Using Scene Graphs"" (IRSG) system of Johnson et al. (2015). Our experiments indicate that the system does not effectively use its learned object-relationship models. We also look closely at the IRSG dataset, as well as the widely used Visual Relationship Dataset (VRD) that is adapted from it. We find that these datasets exhibit biases that allow methods that ignore relationships to perform relatively well. We also describe several other problems with the IRSG dataset, and report on experiments using a subset of the dataset in which the biases and other problems are removed. Our studies contribute to a more general effort: that of better understanding what machine learning methods that combine language and vision actually learn and what popular datasets actually test. ",Revisiting Visual Grounding,1,"['New paper from my research group: ""Revisiting Visual Grounding"". <LINK>. We investigate whether certain visual grounding methods are actually learning what has been claimed. To appear in Proceedings of Workshop on Shortcomings in Vision and Language, NAACL-2019']",19,04,261 |
3905,200,1314010985297047553,1251964945899683841,Faisal Ladhak,"Our EMNLP Findings paper presenting WikiLingua — a new multilingual abstractive summarization dataset — is now on arXiv. It contains 770K article/summary pairs in 17 languages, parallel with English. Paper: <LINK> Dataset: <LINK> #NLProc This was joint work with @esindurmusnlp, @clairecardie and Kathy McKeown. We hope this will encourage further research in cross-lingual summarization, as well as summarization in languages other than English.",https://arxiv.org/abs/2010.03093,"We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of crosslingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct crosslingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference. ","WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",2,"['Our EMNLP Findings paper presenting WikiLingua — a new multilingual abstractive summarization dataset — is now on arXiv. \n\nIt contains 770K article/summary pairs in 17 languages, parallel with English. \n\nPaper: <LINK>\n\nDataset: <LINK>\n\n#NLProc', 'This was joint work with @esindurmusnlp, @clairecardie and Kathy McKeown. We hope this will encourage further research in cross-lingual summarization, as well as summarization in languages other than English.']",20,10,448 |
3906,249,1369266144964730885,1145298986548551680,Seán Kavanagh,First solo-theory paper of my PhD with @lonepair and @scanlond81! We find non-radiative recombination at cadmium vacancies reduces PV efficiency by over 5% in untreated CdTe – a result of metastability and anharmonicity in defect PESs 💻☀️🔌🌍 <LINK> Using the powerful CarrierCapture.jl package (<LINK>) and TLC metric (<LINK>) developed by @RealSunghyunKim 🔥 🔥,http://arxiv.org/abs/2103.00984,"CdTe is a key thin-film photovoltaic technology. Non-radiative electron-hole recombination reduces the solar conversion efficiency from an ideal value of 32% to a current champion performance of 22%. The cadmium vacancy (V_Cd) is a prominent acceptor species in p-type CdTe; however, debate continues regarding its structural and electronic behavior. Using ab initio defect techniques, we calculate a negative-U double-acceptor level for V_Cd, while reproducing the V_Cd^-1 hole-polaron, reconciling theoretical predictions with experimental observations. We find the cadmium vacancy facilitates rapid charge-carrier recombination, reducing maximum power-conversion efficiency by over 5% for untreated CdTe -- a consequence of tellurium dimerization, metastable structural arrangements, and anharmonic potential energy surfaces for carrier capture. ",Rapid Recombination by Cadmium Vacancies in CdTe,2,"['First solo-theory paper of my PhD with @lonepair and @scanlond81! We find non-radiative recombination at cadmium vacancies reduces PV efficiency by over 5% in untreated CdTe – a result of metastability and anharmonicity in defect PESs 💻☀️🔌🌍\n<LINK>', 'Using the powerful CarrierCapture.jl package (https://t.co/i6P3zsqudB) and TLC metric (https://t.co/y8lqsR5mjv) developed by @RealSunghyunKim 🔥 🔥']",21,03,359 |
3907,120,1121231185475096576,728042672695484416,Nimit Sohoni,"Tired of running out of memory? We study four techniques for reducing the memory required to train neural networks, while still preserving accuracy as much as possible. Techniques: (1) Sparsity (2) Half precision (3) Microbatching (4) Checkpointing <LINK> Before/after charts of the total memory usage of training: <LINK>",https://arxiv.org/abs/1904.10631,"Memory is increasingly often the bottleneck when training neural network models. Despite this, techniques to lower the overall memory requirements of training have been less widely studied compared to the extensive literature on reducing the memory requirements of inference. In this paper we study a fundamental question: How much memory is actually needed to train a neural network? To answer this question, we profile the overall memory usage of training on two representative deep learning benchmarks -- the WideResNet model for image classification and the DynamicConv Transformer model for machine translation -- and comprehensively evaluate four standard techniques for reducing the training memory requirements: (1) imposing sparsity on the model, (2) using low precision, (3) microbatching, and (4) gradient checkpointing. We explore how each of these techniques in isolation affects both the peak memory usage of training and the quality of the end model, and explore the memory, accuracy, and computation tradeoffs incurred when combining these techniques. Using appropriate combinations of these techniques, we show that it is possible to the reduce the memory required to train a WideResNet-28-2 on CIFAR-10 by up to 60.7x with a 0.4% loss in accuracy, and reduce the memory required to train a DynamicConv model on IWSLT'14 German to English translation by up to 8.7x with a BLEU score drop of 0.15. ",Low-Memory Neural Network Training: A Technical Report,2,"['Tired of running out of memory? We study four techniques for reducing the memory required to train neural networks, while still preserving accuracy as much as possible. Techniques:\n(1) Sparsity\n(2) Half precision\n(3) Microbatching\n(4) Checkpointing\n\n<LINK>', 'Before/after charts of the total memory usage of training: https://t.co/4PzGZjLeg1']",19,04,321 |
3908,76,1080830457112014853,702241209276829697,Cecilia Garraffo 💚,Cold and breezy but much milder than Proxima b and TRAPPIST-1 planets. Check our new paper on the space weather conditions of @BranardsStar_b. Orbital distance matters more than magnetic activity level! <LINK> @AstroRaikoh @cosmodrake @SofiaMoschou <LINK> @predictionmonk @AstroRaikoh @cosmodrake @SofiaMoschou If you like snow... ;),https://arxiv.org/abs/1901.00219,"A physically realistic stellar wind model based on Alfv\'en wave dissipation has been used to simulate the wind from Barnard's Star and to estimate the conditions at the location of its recently discovered planetary companion. Such models require knowledge of the stellar surface magnetic field that is currently unknown for Barnard's Star. We circumvent this by considering the observed field distributions of three different stars that constitute admissible magnetic proxies of this object. Under these considerations, Barnard's Star b experiences less intense wind pressure than the much more close-in planet Proxima~b and the planets of the TRAPPIST-1 system. The milder wind conditions are more a result of its much greater orbital distance rather than in differences in the surface magnetic field strengths of the host stars. The dynamic pressure experienced by the planet is comparable to present-day Earth values, but it can undergo variations by factors of several during current sheet crossings in each orbit. The magnetospause standoff distance would be $\sim$\,$20 - 40$\,\% smaller than that of the Earth for an equivalent planetary magnetic field strength. ",Breezing through the space environment of Barnard's Star b,2,"['Cold and breezy but much milder than Proxima b and TRAPPIST-1 planets. Check our new paper on the space weather conditions of @BranardsStar_b. Orbital distance matters more than magnetic activity level! <LINK> @AstroRaikoh @cosmodrake @SofiaMoschou <LINK>', '@predictionmonk @AstroRaikoh @cosmodrake @SofiaMoschou If you like snow... ;)']",19,01,333 |
3909,14,1421185613567381504,1865522274,Dmitry Arkhangelsky,"1/5 (Reweighted) two-way regression strikes back in our new paper with Guido Imbens, @lihua_lei_stat, and Xiaoman Luo (<LINK>)! Can (and should) be used whenever data allows us to meaningfully discuss the model for the treatment assignment. 2/5 The two-way regression model is a good approximation (captures a lot of variation) but has undesirable properties whenever treatment varies substantially over units and time. However, precisely this variation provides information about the assignment process. 3/5 We show how to use this information to augment the two-way regression model with unit-specific weights. These are constructed using the assignment process but are not equal to the standard inverse propensities. 4/5 The resulting estimator does not suffer from the bad properties of TWFE and converges to the average treatment effect (over units and time). Limitation: do not allow for dynamic treatment effects but have unrestricted heterogeneity in contemporaneous ones. If the assignment model is correctly specified (and estimable), then the estimator is consistent without any version of parallel trends. More generally, the estimator is double-robust: close to ATE if either the two-way model or the assignment model is close to the truth.",https://arxiv.org/abs/2107.13737,"We propose a new estimator for the average causal effects of a binary treatment with panel data in settings with general treatment patterns. Our approach augments the two-way-fixed-effects specification with the unit-specific weights that arise from a model for the assignment mechanism. We show how to construct these weights in various settings, including situations where units opt into the treatment sequentially. The resulting estimator converges to an average (over units and time) treatment effect under the correct specification of the assignment model. We show that our estimator is more robust than the conventional two-way estimator: it remains consistent if either the assignment mechanism or the two-way regression model is correctly specified and performs better than the two-way-fixed-effect estimator if both are locally misspecified. This strong double robustness property quantifies the benefits from modeling the assignment process and motivates using our estimator in practice. ",Double-Robust Two-Way-Fixed-Effects Regression For Panel Data,5,"['1/5 (Reweighted) two-way regression strikes back in our new paper with Guido Imbens, @lihua_lei_stat, and Xiaoman Luo (<LINK>)! Can (and should) be used whenever data allows us to meaningfully discuss the model for the treatment assignment.', '2/5 The two-way regression model is a good approximation (captures a lot of variation) but has undesirable properties whenever treatment varies substantially over units and time. However, precisely this variation provides information about the assignment process.', '3/5 We show how to use this information to augment the two-way regression model with unit-specific weights. These are constructed using the assignment process but are not equal to the standard inverse propensities.', '4/5 The resulting estimator does not suffer from the bad properties of TWFE and converges to the average treatment effect (over units and time). Limitation: do not allow for dynamic treatment effects but have unrestricted heterogeneity in contemporaneous ones.', 'If the assignment model is correctly specified (and estimable), then the estimator is consistent without any version of parallel trends. \nMore generally, the estimator is double-robust: close to ATE if either the two-way model or the assignment model is close to the truth.']",21,07,1253 |
3910,234,1448329935664279555,2691163094,Devin Incerti,"External controls are intriguing but bias is a concern. Across 14 old studies, we found that trial controls survived longer---even after PS adjustment---than external controls (HR=0.90). We propose a meta-analytic framework to adjust for such bias. <LINK>",https://arxiv.org/abs/2110.03827,"While randomized controlled trials (RCTs) are the gold standard for estimating treatment effects in medical research, there is increasing use of and interest in using real-world data for drug development. One such use case is the construction of external control arms for evaluation of efficacy in single-arm trials, particularly in cases where randomization is either infeasible or unethical. However, it is well known that treated patients in non-randomized studies may not be comparable to control patients -- on either measured or unmeasured variables -- and that the underlying population differences between the two groups may result in biased treatment effect estimates as well as increased variability in estimation. To address these challenges for analyses of time-to-event outcomes, we developed a meta-analytic framework that uses historical reference studies to adjust a log hazard ratio estimate in a new external control study for its additional bias and variability. The set of historical studies is formed by constructing external control arms for historical RCTs, and a meta-analysis compares the trial controls to the external control arms. Importantly, a prospective external control study can be performed independently of the meta-analysis using standard causal inference techniques for observational data. We illustrate our approach with a simulation study and an empirical example based on reference studies for advanced non-small cell lung cancer. In our empirical analysis, external control patients had lower survival than trial controls (hazard ratio: 0.907), but our methodology is able to correct for this bias. An implementation of our approach is available in the R package ecmeta. ",A meta-analytic framework to adjust for bias in external control studies,1,"['External controls are intriguing but bias is a concern. Across 14 old studies, we found that trial controls survived longer---even after PS adjustment---than external controls (HR=0.90). We propose a meta-analytic framework to adjust for such bias.\n\n<LINK>']",21,10,255 |
3911,0,948506662754836481,924462993219432448,Lisa Kaltenegger,"What would Earth look like in our telescope if we orbited another star? See Sarah & my new paper - Spectra of Earth like planets through geological evolution - to use to find them for real! @CSInst @SarahRugheimer #exoplanets (image:MIT,R.Ramirez) <LINK> <LINK>",https://arxiv.org/abs/1712.10027,"Future observations of terrestrial exoplanet atmospheres will occur for planets at different stages of geological evolution. We expect to observe a wide variety of atmospheres and planets with alternative evolutionary paths, with some planets resembling Earth at different epochs. For an Earth-like atmospheric time trajectory, we simulate planets from prebiotic to current atmosphere based on geological data. We use a stellar grid F0V to M8V ($T_\mathrm{eff}$ = 7000$\mskip3mu$K to 2400$\mskip3mu$K) to model four geological epochs of Earth's history corresponding to a prebiotic world (3.9$\mskip3mu$Ga), the rise of oxygen at 2.0$\mskip3mu$Ga and at 0.8$\mskip3mu$Ga, and the modern Earth. We show the VIS - IR spectral features, with a focus on biosignatures through geological time for this grid of Sun-like host stars and the effect of clouds on their spectra. We find that the observability of biosignature gases reduces with increasing cloud cover and increases with planetary age. The observability of the visible O$_2$ feature for lower concentrations will partly depend on clouds, which while slightly reducing the feature increase the overall reflectivity thus the detectable flux of a planet. The depth of the IR ozone feature contributes substantially to the opacity at lower oxygen concentrations especially for the high near-UV stellar environments around F stars. Our results are a grid of model spectra for atmospheres representative of Earth's geological history to inform future observations and instrument design and are publicly available online. ","Spectra of Earth-like Planets Through Geological Evolution Around FGKM Stars",1,"['What would Earth look like in our telescope if we orbited another star? See Sarah & my new paper - Spectra of Earth like planets through geological evolution - to use to find them for real! @CSInst @SarahRugheimer #exoplanets (image:MIT,R.Ramirez) <LINK> <LINK>']",17,12,261 |
3912,171,1410999635968106500,4666231375,Konstantin Batygin,"In a new paper led by Katee Schultz and @ChrisCrossSci, we examine the onset of early dynamical instabilities in short-period Super-Earth systems due to a diminishing stellar J2 as a driving mechanism for planetary inclinations. Details here: <LINK> <LINK>",https://arxiv.org/abs/2107.00044,"A large proportion of transiting planetary systems appear to possess only a single planet as opposed to multiple transiting planets. This excess of singles is indicative of significant mutual inclinations existing within a large number of planetary systems, but the origin of these misalignments is unclear. Moreover, recent observational characterization reveals that mutual inclinations tend to increase with proximity to the host star. These trends are both consistent with the dynamical influence of a strong quadrupolar potential arising from the host star during its early phase of rapid rotation, coupled with a non-zero stellar obliquity. Here, we simulate a population of planetary systems subject to the secular perturbation arising from a tilted, oblate host star as it contracts and spins down subsequent to planet formation. We demonstrate that this mechanism can reproduce the general increase in planet-planet mutual inclinations with proximity to the host star, and delineate a parameter space wherein the host star can drive dynamical instabilities. We suggest that approximately 5-10\% of low-mass Kepler systems are susceptible to this instability mechanism, suggesting that a significant number of single-transiting planets may truly be intrinsically single. We also report a novel connection between instability and stellar obliquity reduction and make predictions that can be tested within upcoming TESS observations. ","The distribution of mutual inclinations arising from the stellar quadrupole moment",1,"['In a new paper led by Katee Schultz and @ChrisCrossSci, we examine the onset of early dynamical instabilities in short-period Super-Earth systems due to a diminishing stellar J2 as a driving mechanism for planetary inclinations. Details here: <LINK> <LINK>']",21,07,256 |
3913,75,1040283816491925505,554869994,Yonatan Belinkov,"Always heard about the split between Classical and Modern Standard Arabic? Interested in periodization of historical languages? Check out our new paper on the history of Arabic: <LINK> Joint work with Alex Magidow, @_albarron_ , Avi Shmidman, and @maximromanov We preprocess the amazing OpenITI corpus (<LINK>), identify text reuse, and perform both automatic and expert-based periodization. We find evidence for familiar periods of change (e.g. the split between Classical and MSA), as well as new ones. <LINK> @marcos_zampieri @_albarron_ @maximromanov It remains to be seen...",https://arxiv.org/abs/1809.03891,"Arabic is a widely-spoken language with a long and rich history, but existing corpora and language technology focus mostly on modern Arabic and its varieties. Therefore, studying the history of the language has so far been mostly limited to manual analyses on a small scale. In this work, we present a large-scale historical corpus of the written Arabic language, spanning 1400 years. We describe our efforts to clean and process this corpus using Arabic NLP tools, including the identification of reused text. We study the history of the Arabic language using a novel automatic periodization algorithm, as well as other techniques. Our findings confirm the established division of written Arabic into Modern Standard and Classical Arabic, and confirm other established periodizations, while suggesting that written Arabic may be divisible into still further periods of development. ","Studying the History of the Arabic Language: Language Technology and a Large-Scale Historical Corpus",3,"['Always heard about the split between Classical and Modern Standard Arabic? Interested in periodization of historical languages? Check out our new paper on the history of Arabic: <LINK>\nJoint work with Alex Magidow, @_albarron_ , Avi Shmidman, and @maximromanov', 'We preprocess the amazing OpenITI corpus (https://t.co/y3gzrKYfxw), identify text reuse, and perform both automatic and expert-based periodization. We find evidence for familiar periods of change (e.g. the split between Classical and MSA), as well as new ones. https://t.co/cxO6Rv6qTy', '@marcos_zampieri @_albarron_ @maximromanov It remains to be seen...']",18,09,579 |
3914,250,1404363953254416386,966686196,Mark Mitchison,"Pleased to announce a nice collab with colleagues at @unicomplutense. We find symmetry-protected boundary currents in a non-equilibrium lattice system without band topology: consequence of Berry curvature + weak coupling. Had fun messing with quiver plots! <LINK> <LINK> The plot shows bosons (red) and fermions (blue) flowing through a system. Currents are localised on boundary. Bosons react to impurities in the middle (black circles) by ""shielding"" them with counter-current. Fermions in topological phase don't react at all (depending on filling)",https://arxiv.org/abs/2106.05988,"We study two-dimensional bosonic and fermionic lattice systems under nonequilibrium conditions corresponding to a sharp gradient of temperature imposed by two thermal baths. In particular, we consider a lattice model with broken time-reversal symmetry that exhibits both topologically trivial and nontrivial phases. Using a nonperturbative Green function approach, we characterize the nonequilibrium current distribution in different parameter regimes. For both bosonic and fermionic systems, we find chiral edge currents that are robust against coupling to reservoirs and to the presence of defects on the boundary or in the bulk. This robustness not only originates from topological effects at zero temperature but, remarkably, also persists as a result of dissipative symmetries in regimes where band topology plays no role. Chirality of the edge currents implies that energy locally flows against the temperature gradient without any external work input. In the fermionic case, there is also a regime with topologically protected boundary currents, which nonetheless do not circulate around all system edges. ",Robust nonequilibrium edge currents with and without band topology,2,"['Pleased to announce a nice collab with colleagues at @unicomplutense. We find symmetry-protected boundary currents in a non-equilibrium lattice system without band topology: consequence of Berry curvature + weak coupling. Had fun messing with quiver plots!\n<LINK> <LINK>', 'The plot shows bosons (red) and fermions (blue) flowing through a system. Currents are localised on boundary. Bosons react to impurities in the middle (black circles) by ""shielding"" them with counter-current. Fermions in topological phase don\'t react at all (depending on filling)']",21,06,551 |
3915,1,1313587138349408263,550979881,Klim Zaporojets,"We are very excited to introduce DWIE: a new document-level entity-centric Information Extraction dataset that covers the following tasks: NER, Coreference Resolution, Relation Extraction, and Entity Linking. Paper: <LINK> GitHub: <LINK> [1/7] DWIE is conceived as an entity-centric dataset that describes (both explicit and implicit) interactions and properties of conceptual entities on the level of the complete document. [2/7] <LINK> The DWIE's document-level annotation approach allows to capture relations between entities whose mentions are located further apart compared to other similar datasets. [3/7] <LINK> We introduce a new ""Soft Entity-Level"" evaluation metric for the NER and Relation Extraction tasks, in line with the entity-centric nature of DWIE. It allows for the measurements to not be dominated by predictions on more frequently mentioned entities. [4/7] We present an end-to-end neural graph-based architecture that allows document-level message passing between mention spans. Our results on DWIE show a consistent improvement when using different neural graph propagation techniques. [5/7] Thanks to all the amazing colleagues and collaborators @ugentnlp, Johannes Deleu, @thomeestr, Chris Develder for this exciting project! [6/7] And if you have read this far: I am looking for collaborations to apply for a PhD research stay. If you are interested, please drop me a line! Preference for Denmark/EU. [7/7]",https://arxiv.org/abs/2009.12626,"This paper presents DWIE, the 'Deutsche Welle corpus for Information Extraction', a newly created multi-task dataset that combines four main Information Extraction (IE) annotation subtasks: (i) Named Entity Recognition (NER), (ii) Coreference Resolution, (iii) Relation Extraction (RE), and (iv) Entity Linking. DWIE is conceived as an entity-centric dataset that describes interactions and properties of conceptual entities on the level of the complete document. This contrasts with currently dominant mention-driven approaches that start from the detection and classification of named entity mentions in individual sentences. Further, DWIE presented two main challenges when building and evaluating IE models for it. First, the use of traditional mention-level evaluation metrics for NER and RE tasks on entity-centric DWIE dataset can result in measurements dominated by predictions on more frequently mentioned entities. We tackle this issue by proposing a new entity-driven metric that takes into account the number of mentions that compose each of the predicted and ground truth entities. Second, the document-level multi-task annotations require the models to transfer information between entity mentions located in different parts of the document, as well as between different tasks, in a joint learning setting. To realize this, we propose to use graph-based neural message passing techniques between document-level mention spans. Our experiments show an improvement of up to 5.5 F1 percentage points when incorporating neural graph propagation into our joint model. This demonstrates DWIE's potential to stimulate further research in graph neural networks for representation learning in multi-task IE. We make DWIE publicly available at this https URL ","DWIE: an entity-centric dataset for multi-task document-level information extraction",7,"['We are very excited to introduce DWIE: a new document-level entity-centric Information Extraction dataset that covers the following tasks: NER, Coreference Resolution, Relation Extraction, and Entity Linking.\n\nPaper: <LINK>\nGitHub: <LINK>\n\n[1/7]', 'DWIE is conceived as an entity-centric dataset that describes (both explicit and implicit) interactions and properties of conceptual entities on the level of the complete document. \n[2/7] https://t.co/hSzY3P3NMT', ""The DWIE's document-level annotation approach allows to capture relations between entities whose mentions are located further apart compared to other similar datasets. \n[3/7] https://t.co/5P4GfB5hAr"", 'We introduce a new ""Soft Entity-Level"" evaluation metric for the NER and Relation Extraction tasks, in line with the entity-centric nature of DWIE. It allows for the measurements to not be dominated by predictions on more frequently mentioned entities.\n[4/7]', 'We present an end-to-end neural graph-based architecture that allows document-level message passing between mention spans. Our results on DWIE show a consistent improvement when using different neural graph propagation techniques. \n[5/7]', 'Thanks to all the amazing colleagues and collaborators @ugentnlp, Johannes Deleu, @thomeestr, Chris Develder for this exciting project! \n[6/7]', 'And if you have read this far: I am looking for collaborations to apply for a PhD research stay. If you are interested, please drop me a line! Preference for Denmark/EU. \n[7/7]']",20,09,1433 |
3916,147,1258555474775064576,35926248,Thomas Paula,"Our paper ""A Proposal for Intelligent Agents with Episodic Memory"" is on Arxiv! We propose an alternative look at the role of episodic memory for intelligent agents. We hope this view can help to spark new discussions and ideas! @dfm794 @JulianoVacaro <LINK>",https://arxiv.org/abs/2005.03182,"In the future we can expect that artificial intelligent agents, once deployed, will be required to learn continually from their experience during their operational lifetime. Such agents will also need to communicate with humans and other agents regarding the content of their experience, in the context of passing along their learnings, for the purpose of explaining their actions in specific circumstances or simply to relate more naturally to humans concerning experiences the agent acquires that are not necessarily related to their assigned tasks. We argue that to support these goals, an agent would benefit from an episodic memory; that is, a memory that encodes the agent's experience in such a way that the agent can relive the experience, communicate about it and use its past experience, inclusive of the agents own past actions, to learn more effective models and policies. In this short paper, we propose one potential approach to provide an AI agent with such capabilities. We draw upon the ever-growing body of work examining the function and operation of the Medial Temporal Lobe (MTL) in mammals to guide us in adding an episodic memory capability to an AI agent composed of artificial neural networks (ANNs). Based on that, we highlight important aspects to be considered in the memory organization and we propose an architecture combining ANNs and standard Computer Science techniques for supporting storage and retrieval of episodic memories. Despite being initial work, we hope this short paper can spark discussions around the creation of intelligent agents with memory or, at least, provide a different point of view on the subject. ",A Proposal for Intelligent Agents with Episodic Memory,1,"['Our paper ""A Proposal for Intelligent Agents with Episodic Memory"" is on Arxiv! We propose an alternative look at the role of episodic memory for intelligent agents. We hope this view can help to spark new discussions and ideas! @dfm794 @JulianoVacaro \n\n<LINK>']",20,05,259 |
3917,229,1436211773229600775,775630411863130112,Sven Buder,"How much did accreted stars contribute to the build-up/shape of the Milky Way? We all found many accreted stars via their distinct orbits thanks to @ESAGaia. In our latest study (<LINK>), we identify/“tag” accreted stars via their chemistry with @galahsurvey. 1/5 <LINK> Galactic archaeology with accreted stars made immense progress thanks to work by P. E. Nissen, @amina_helmi, @BelokurovVasily, @drpayeldas, @AGalactichawk, H. Koppelman, D. Feuillet, @Rohan_Naidu, @anabonaca, G. C. Myeong, and many more (see paper for more refs)! Exciting! 2/5 Countless new techniques have been developed to identify accreted stars. We try to give an overview in our Table A1 and hope it will be useful for anyone interesting in joining our search for accreted stars! 3/5 <LINK> Thanks to the combination of @GaiaESA and @galahsurvey DR3 (<LINK>), we can identify stars through chemistry via their distinct [Na/Fe] vs. [Mg/Mn] abundances (see also Das+2020) and compare with dynamical selections via L_Z vs. sqrt{J_R} (Feuillet+2020). 4/5 <LINK> This allows us to study the dynamical properties (via chemical selected stars) and chemical properties (via dynamically selected stars). For everyone who wants to follow up this study, we provide the code on <LINK> (GALAH+ DR3 data is public!) 5/5 <LINK>",https://arxiv.org/abs/2109.04059,"Since the advent of $Gaia$ astrometry, it is possible to identify massive accreted systems within the Galaxy through their unique dynamical signatures. One such system, $Gaia$-Sausage-Enceladus (GSE), appears to be an early ""building block"" given its virial mass $> 10^{10}\,\mathrm{M_\odot}$ at infall ($z\sim1-3$). In order to separate the progenitor population from the background stars, we investigate its chemical properties with up to 30 element abundances from the GALAH+ Survey Data Release 3 (DR3). To inform our choice of elements for purely chemically selecting accreted stars, we analyse 4164 stars with low-$\alpha$ abundances and halo kinematics. These are most different to the Milky Way stars for abundances of Mg, Si, Na, Al, Mn, Fe, Ni, and Cu. Based on the significance of abundance differences and detection rates, we apply Gaussian mixture models to various element abundance combinations. We find the most populated and least contaminated component, which we confirm to represent GSE, contains 1049 stars selected via [Na/Fe] vs. [Mg/Mn] in GALAH+ DR3. We provide tables of our selections and report the chrono-chemodynamical properties (age, chemistry, and dynamics). Through a previously reported clean dynamical selection of GSE stars, including $30 < \sqrt{J_R~/~\mathrm{kpc\,km\,s^{-1}}} < 55$, we can characterise an unprecedented 24 abundances of this structure with GALAH+ DR3. Our chemical selection allows us to prevent circular reasoning and characterise the dynamical properties of the GSE, for example mean $\sqrt{J_R~/~\mathrm{kpc\,km\,s^{-1}}} = 26_{-14}^{+9}$. We find only $(29\pm1)\%$ of the GSE stars within the clean dynamical selection region. Our methodology will improve future studies of accreted structures and their importance for the formation of the Milky Way. ","The GALAH Survey: Chemical tagging and chrono-chemodynamics of accreted halo stars with GALAH+ DR3 and $Gaia$ eDR3",5,"['How much did accreted stars contribute to the build-up/shape of the Milky Way? We all found many accreted stars via their distinct orbits thanks to @ESAGaia. In our latest study (<LINK>), we identify/“tag” accreted stars via their chemistry with @galahsurvey. 1/5 <LINK>', 'Galactic archaeology with accreted stars made immense progress thanks to work by P. E. Nissen, @amina_helmi, @BelokurovVasily, @drpayeldas, @AGalactichawk, H. Koppelman, D. Feuillet, @Rohan_Naidu, @anabonaca, G. C. Myeong, and many more (see paper for more refs)! Exciting! 2/5', 'Countless new techniques have been developed to identify accreted stars. We try to give an overview in our Table A1 and hope it will be useful for anyone interesting in joining our search for accreted stars! 3/5 https://t.co/rrjyxrLft6', 'Thanks to the combination of @GaiaESA and @galahsurvey DR3 (https://t.co/okDy7tBfr2), we can identify stars through chemistry via their distinct [Na/Fe] vs. [Mg/Mn] abundances (see also Das+2020) and compare with dynamical selections via L_Z vs. sqrt{J_R} (Feuillet+2020). 4/5 https://t.co/tP3tI6NJbm', 'This allows us to study the dynamical properties (via chemical selected stars) and chemical properties (via dynamically selected stars). For everyone who wants to follow up this study, we provide the code on https://t.co/Syrb44GwaN (GALAH+ DR3 data is public!) 5/5 https://t.co/MP3TkQCrS4']",21,09,1289 |
3918,157,1382522020844425217,1082668696202502144,Corey J Nolet,I’m happy to announce the pre-print for our new paper “Semiring Primitives for Sparse Neighborhood Methods on the GPU.” We show that semirings can enable many important neighborhood machine learning methods and provide a novel implementation on the GPU. <LINK>,https://arxiv.org/abs/2104.06357,"High-performance primitives for mathematical operations on sparse vectors must deal with the challenges of skewed degree distributions and limits on memory consumption that are typically not issues in dense operations. We demonstrate that a sparse semiring primitive can be flexible enough to support a wide range of critical distance measures while maintaining performance and memory efficiency on the GPU. We further show that this primitive is a foundational component for enabling many neighborhood-based information retrieval and machine learning algorithms to accept sparse input. To our knowledge, this is the first work aiming to unify the computation of several critical distance measures on the GPU under a single flexible design paradigm and we hope that it provides a good baseline for future research in this area. Our implementation is fully open source and publicly available as part of the RAFT library of GPU-accelerated machine learning primitives (this https URL). ",GPU Semiring Primitives for Sparse Neighborhood Methods,1,['I’m happy to announce the pre-print for our new paper “Semiring Primitives for Sparse Neighborhood Methods on the GPU.” We show that semirings can enable many important neighborhood machine learning methods and provide a novel implementation on the GPU. <LINK>'],21,04,260 |
3919,141,1457668698798497792,608020439,Dr. Gareth Cabourn Davies,"Lots of work from lots of wonderful people in the @LIGO, @ego_virgo @KAGRA_PR collaborations has gone into this, and we get some pretty exciting results! New possible NSBH systems and loads more binary black holes! Read the paper here: <LINK> 🧵👇 <LINK> Its an excellent paper, but I may be biased as part of the team that led the writing of the paper. This team was excellently led by @cplberry of @UofGravity, and with colleagues from around the globe 2/ Its 82 pages and a large percentage of that is the author list to highlight all our amazing colleagues This paper really is a collaborative effort, and everyone contributed to these exciting results BUT I'm going to indulge myself and highlight things I've directly worked on 3/ The @UoPCosmology institute at @portsmouthuni has been vital for this catalog, not just in scientific content, where ICG researchers developed many of the search and detector characterisation methods, but also in organisation and leadership in the collaboration 4/ Me and colleagues at @UoPCosmology organised and performed analyses of the gravitational-wave data using PyCBC searches, and there are quite a few events which would not have been found without our efforts. 5/ The PyCBC searches used analysis developed in my time at @IGFAE_HEP in <LINK> with @TomD_Santiago, @UoPCosmology colleagues and @alexandernitz 6/ The O3b data is now public as well, so if you want to discover huge astrophysical objects crashing into one another as well, give it a go! Data is available through GWOSC, and open source gravitational-wave search and parameter estimation software is available from PyCBC 7/ We love efforts both inside and outside the collaboration to squeeze as much science out of the data as possible, so I look forward to even more results coming out from this data in the coming months 8/ Any volunteers to update <LINK> ? END",https://arxiv.org/abs/2111.03606,"The third Gravitational-wave Transient Catalog (GWTC-3) describes signals detected with Advanced LIGO and Advanced Virgo up to the end of their third observing run. Updating the previous GWTC-2.1, we present candidate gravitational waves from compact binary coalescences during the second half of the third observing run (O3b) between 1 November 2019, 15:00 UTC and 27 March 2020, 17:00 UTC. There are 35 compact binary coalescence candidates identified by at least one of our search algorithms with a probability of astrophysical origin $p_\mathrm{astro} > 0.5$. Of these, 18 were previously reported as low-latency public alerts, and 17 are reported here for the first time. Based upon estimates for the component masses, our O3b candidates with $p_\mathrm{astro} > 0.5$ are consistent with gravitational-wave signals from binary black holes or neutron star-black hole binaries, and we identify none from binary neutron stars. However, from the gravitational-wave data alone, we are not able to measure matter effects that distinguish whether the binary components are neutron stars or black holes. The range of inferred component masses is similar to that found with previous catalogs, but the O3b candidates include the first confident observations of neutron star-black hole binaries. Including the 35 candidates from O3b in addition to those from GWTC-2.1, GWTC-3 contains 90 candidates found by our analysis with $p_\mathrm{astro} > 0.5$ across the first three observing runs. 
These observations of compact binary coalescences present an unprecedented view of the properties of black holes and neutron stars. ","GWTC-3: Compact Binary Coalescences Observed by LIGO and Virgo During the Second Part of the Third Observing Run",9,"['Lots of work from lots of wonderful people in the @LIGO, @ego_virgo @KAGRA_PR collaborations has gone into this, and we get some pretty exciting results!\n\nNew possible NSBH systems and loads more binary black holes!\n\nRead the paper here:\n<LINK>\n\n🧵👇 <LINK>', 'Its an excellent paper, but I may be biased as part of the team that led the writing of the paper. This team was excellently led by @cplberry of @UofGravity, and with colleagues from around the globe 2/', ""Its 82 pages and a large percentage of that is the author list to highlight all our amazing colleagues\n\nThis paper really is a collaborative effort, and everyone contributed to these exciting results\n\nBUT I'm going to indulge myself and highlight things I've directly worked on 3/"", 'The @UoPCosmology institute at @portsmouthuni has been vital for this catalog, not just in scientific content, where ICG researchers developed many of the search and detector characterisation methods, but also in organisation and leadership in the collaboration 4/', 'Me and colleagues at @UoPCosmology organised and performed analyses of the gravitational-wave data using PyCBC searches, and there are quite a few events which would not have been found without our efforts. 5/', 'The PyCBC searches used analysis developed in my time at @IGFAE_HEP in https://t.co/lJ309SFDQa with @TomD_Santiago, @UoPCosmology colleagues and @alexandernitz 6/', 'The O3b data is now public as well, so if you want to discover huge astrophysical objects crashing into one another as well, give it a go!\n\nData is available through GWOSC, and open source gravitational-wave search and parameter estimation software is available from PyCBC 7/', 'We love efforts both inside and outside the collaboration to squeeze as much science out of the data as possible, so I look forward to even more results coming out from this data in the coming months 8/', 'Any volunteers to update https://t.co/3dUVD39f2z ?\n\nEND']",21,11,1871 |
3920,90,1252406595775893505,777916820984475648,Homanga Bharadhwaj,"New L4DC2020 paper: We investigate the problem of planning in Model-based RL through the lens of CEM and Gradient-Descent, analyzing their resp. benefits and pitfalls. paper: <LINK> code: <LINK> w/ Kevin Xie, and @florian_shkurti 1/3 <LINK> We consider both the settings where the underlying dynamics+reward models are known and where they are learned, and based on the analyses propose a *simple* scheme of interleaving CEM and gradient descent update steps for planning. 2/3 <LINK> CEM does not scale with increasing action dim. while g.d. fails when there are multiple obstacles. The *simple* proposed scheme scales well to high action dim., to multiple obstacles, and converges faster than CEM while achieving similar/better asymptotic performance. 3/3 <LINK> @UofTCompSci @UofTRobotics @VectorInst",https://arxiv.org/abs/2004.08763,"Recent works in high-dimensional model-predictive control and model-based reinforcement learning with learned dynamics and reward models have resorted to population-based optimization methods, such as the Cross-Entropy Method (CEM), for planning a sequence of actions. To decide on an action to take, CEM conducts a search for the action sequence with the highest return according to the dynamics model and reward. Action sequences are typically randomly sampled from an unconditional Gaussian distribution and evaluated on the environment. This distribution is iteratively updated towards action sequences with higher returns. However, this planning method can be very inefficient, especially for high-dimensional action spaces. An alternative line of approaches optimize action sequences directly via gradient descent, but are prone to local optima. We propose a method to solve this planning problem by interleaving CEM and gradient descent steps in optimizing the action sequence. Our experiments show faster convergence of the proposed hybrid approach, even for high-dimensional action spaces, avoidance of local minima, and better or equal performance to CEM. Code accompanying the paper is available here this https URL ","Model-Predictive Control via Cross-Entropy and Gradient-Based Optimization",4,"['New L4DC2020 paper: \n\nWe investigate the problem of planning in Model-based RL through the lens of CEM and Gradient-Descent, analyzing their resp. benefits and pitfalls. \n\npaper: <LINK>\ncode: <LINK>\n\nw/ Kevin Xie, and @florian_shkurti \n\n1/3 <LINK>', 'We consider both the settings where the underlying dynamics+reward models are known and where they are learned, and based on the analyses propose a *simple* scheme of interleaving CEM and gradient descent update steps for planning.\n\n2/3 https://t.co/ZYokLv3WUL', 'CEM does not scale with increasing action dim. while g.d. fails when there are multiple obstacles. The *simple* proposed scheme scales well to high action dim., to multiple obstacles, and converges faster than CEM while achieving similar/better asymptotic performance.\n\n3/3 https://t.co/Hr5hnV4u34', '@UofTCompSci @UofTRobotics @VectorInst']",20,04,805 |
3921,75,1203009449838993413,8514822,asim kadav,"Our new paper ""15 Keypoints Is All You Need"" <LINK> is on arXiv. It describes ""KeyTrack"" which has been #1 on the PoseTrack '17 leaderboard since last month. With just 0.43M parameters, it can be trained on 1 GPU in under two hours. #computervision <LINK> Paper also includes comparisons of transformer and convolutions, and at low resolutions, transformers can outperform convolutions for the tracking task.",http://arxiv.org/abs/1912.02323,"Pose tracking is an important problem that requires identifying unique human pose-instances and matching them temporally across different frames of a video. However, existing pose tracking methods are unable to accurately model temporal relationships and require significant computation, often computing the tracks offline. We present an efficient Multi-person Pose Tracking method, KeyTrack, that only relies on keypoint information without using any RGB or optical flow information to track human keypoints in real-time. Keypoints are tracked using our Pose Entailment method, in which, first, a pair of pose estimates is sampled from different frames in a video and tokenized. Then, a Transformer-based network makes a binary classification as to whether one pose temporally follows another. Furthermore, we improve our top-down pose estimation method with a novel, parameter-free, keypoint refinement technique that improves the keypoint estimates used during the Pose Entailment step. We achieve state-of-the-art results on the PoseTrack'17 and the PoseTrack'18 benchmarks while using only a fraction of the computation required by most other methods for computing the tracking information. ",15 Keypoints Is All You Need,2,"['Our new paper ""15 Keypoints Is All You Need"" <LINK> is on arXiv. It describes ""KeyTrack"" which has been #1 on the PoseTrack \'17 leaderboard since last month. With just 0.43M parameters, it can be trained on 1 GPU in under two hours. #computervision <LINK>', 'Paper also includes comparisons of transformer and convolutions, and at low resolutions, transformers can outperform convolutions for the tracking task.']",19,12,408 |
3922,81,1405601565831053316,2923392199,Hannah Eyre,"MedspaCy finally has a paper! ""Launching into clinical space with medspaCy: a new clinical text processing toolkit in Python"" at #AMIA2021 Congrats to @abchapman93 @burgersmoke and others at @SLC_IDEAS, @vahsrd, and @UofUEpi <LINK> @spacy_io #NLProc @abchapman93 @burgersmoke @SLC_IDEAS @vahsrd @UofUEpi @spacy_io spacy 3.x update to medspaCy coming very soon as well <LINK>",https://arxiv.org/abs/2106.07799,"Despite impressive success of machine learning algorithms in clinical natural language processing (cNLP), rule-based approaches still have a prominent role. In this paper, we introduce medspaCy, an extensible, open-source cNLP library based on spaCy framework that allows flexible integration of rule-based and machine learning-based algorithms adapted to clinical text. MedspaCy includes a variety of components that meet common cNLP needs such as context analysis and mapping to standard terminologies. By utilizing spaCy's clear and easy-to-use conventions, medspaCy enables development of custom pipelines that integrate easily with other spaCy-based modules. Our toolkit includes several core components and facilitates rapid development of pipelines for clinical text. ","Launching into clinical space with medspaCy: a new clinical text processing toolkit in Python",2,"['MedspaCy finally has a paper! \n\n""Launching into clinical space with medspaCy: a new clinical text processing toolkit in Python"" at #AMIA2021\n\nCongrats to @abchapman93 @burgersmoke and others at @SLC_IDEAS, @vahsrd, and @UofUEpi\n\n<LINK>\n\n@spacy_io #NLProc', '@abchapman93 @burgersmoke @SLC_IDEAS @vahsrd @UofUEpi @spacy_io spacy 3.x update to medspaCy coming very soon as well https://t.co/zNni1TtNuf']",21,06,375 |
3923,118,1323851225046110209,27562677,Roland Matsouaka,"We have a new paper on overlap weights (under review) that you can access on ArXiv under the title: A framework for causal inference in the presence of extreme inverse probability weights: the role of overlap weights <LINK> First all, know that we advocate using overlap or matching weights (as opposed to PS trimming or truncation) when positivity is violated. They're more robust and precise under PS model misspecification, with lower variance inflation <LINK> What are the key take-aways: 1. Overlap, Matching, (Shannon's) entropy, and Beta weights all target the same (sub)population of patients for whom there is (clinical) equipoise. 2. Their estimands are often similar; only SEs differ, with OW having smaller ones 3. Depending on the prevalence of trt., these estimands can be close to ATE, ATT, or ATC, i.e., they're flexible (or adaptable) 4. Although, MW have been presented to us as analog to 1-on-1 matching, this is further from the truth. Thus, MW are better. 5. Augmentation of OW, MW, EW, and BW are not doubly robust like augmented IPW. However, they're still better when positivity is violated. 6. We provide sandwich-variance estimation, which can be easily estimated instead of bootstrap 7. Last, but not least, we provided extensively all the (tedious) theoretical proofs for those who want to dig deeper. And if you still have questions, do not hesitate to contact me at roland.matsouaka@duke.edu or @matsouaka, I'll be more than happy to help",https://arxiv.org/abs/2011.01388,"In this paper, we consider recent progress in estimating the average treatment effect when extreme inverse probability weights are present and focus on methods that account for a possible violation of the positivity assumption. These methods aim at estimating the treatment effect on the subpopulation of patients for whom there is a clinical equipoise. We propose a systematic approach to determine their related causal estimands and develop new insights into the properties of the weights targeting such a subpopulation. Then, we examine the roles of overlap weights, matching weights, Shannon's entropy weights, and beta weights. This helps us characterize and compare their underlying estimators, analytically and via simulations, in terms of the accuracy, precision, and root mean squared error. Moreover, we study the asymptotic behaviors of their augmented estimators (that mimic doubly robust estimators), which lead to improved estimations when either the propensity or the regression models are correctly specified. Based on the analytical and simulation results, we conclude that overall overlap weights are preferable to matching weights, especially when there is moderate or extreme violations of the positivity assumption. Finally, we illustrate the methods using a real data example marked by extreme inverse probability weights. ","A framework for causal inference in the presence of extreme inverse probability weights: the role of overlap weights",6,"['We have a new paper on overlap weights (under review) that you can access on ArXiv under the title: \n\nA framework for causal inference in the presence of extreme inverse probability weights: the role of overlap weights\n\n<LINK>', ""First all, know that we advocate using overlap or matching weights (as opposed to PS trimming or truncation) when positivity is violated. They're more robust and precise under PS model misspecification, with lower variance inflation\nhttps://t.co/CUJOR3cxec"", ""What are the key take-aways:\n1. Overlap, Matching, (Shannon's) entropy, and Beta weights all target the same (sub)population of patients for whom there is (clinical) equipoise.\n2. Their estimands are often similar; only SEs differ, with OW having smaller ones"", ""3. Depending on the prevalence of trt., these estimands can be close to ATE, ATT, or ATC, i.e., they're flexible (or adaptable)\n4. Although, MW have been presented to us as analog to 1-on-1 matching, this is further from the truth. Thus, MW are better."", ""5. Augmentation of OW, MW, EW, and BW are not doubly robust like augmented IPW. However, they're still better when positivity is violated.\n6. We provide sandwich-variance estimation, which can be easily estimated instead of bootstrap"", ""7. Last, but not least, we provided extensively all the (tedious) theoretical proofs for those who want to dig deeper.\n\nAnd if you still have questions, do not hesitate to contact me at roland.matsouaka@duke.edu or @matsouaka, I'll be more than happy to help""]",20,11,1469 |
3924,86,971560667596509184,195143286,Song Huang,"Using deep images from HSC, we find that the size and stellar mass distribution of massive central galaxies do depend on their halo: massive halo hosts larger, more extended central galaxy. Stay tuned for more insights from weak lensing analysis! <LINK> <LINK>",https://arxiv.org/abs/1803.02824,"We use ~100 square deg of deep (>28.5 mag arcsec$^{-2}$ in i-band), high-quality (median 0.6 arcsec seeing) imaging data from the Hyper Suprime-Cam (HSC) survey to reveal the halo mass dependence of the surface mass density profiles and outer stellar envelopes of massive galaxies. The i-band images from the HSC survey reach ~4 magnitudes deeper than Sloan Digital Sky Survey and enable us to directly trace stellar mass distributions to 100 kpc without requiring stacking. We conclusively show that, at fixed stellar mass, the stellar profiles of massive galaxies depend on the masses of their dark matter haloes. On average, massive central galaxies with $\log M_{\star, 100\ \mathrm{kpc}}>11.6$ in more massive haloes at 0.3 < z < 0.5 have shallower inner stellar mass density profiles (within ~10-20 kpc) and more prominent outer envelopes. These differences translate into a halo mass dependence of the mass-size relation. Central galaxies in haloes with $\log M_{\rm{Halo}}>14.0$ are ~20% larger in $R_{\mathrm{50}}$ at fixed stellar mass. Such dependence is also reflected in the relationship between the stellar mass within 10 and 100 kpc. Comparing to the mass--size relation, the $\log M_{\star, 100\ \rm{kpc}}$-$\log M_{\star, 10\ \rm{kpc}}$ relation avoids the ambiguity in the definition of size, and can be straightforwardly compared with simulations. Our results demonstrate that, with deep images from HSC, we can quantify the connection between halo mass and the outer stellar halo, which may provide new constraints on the formation and assembly of massive central galaxies. ","A Detection of the Environmental Dependence of the Sizes and Stellar Haloes of Massive Central Galaxies",1,"['Using deep images from HSC, we find that the size and stellar mass distribution of massive central galaxies do depend on their halo: massive halo hosts larger, more extended central galaxy. Stay tuned for more insights from weak lensing analysis! <LINK> <LINK>']",18,03,260 |
3925,2,1147142514165596166,880722356288802817,Sibylle Hess,"My second paper is ""C-SALT: Mining Class-Specific ALTerations in Boolean Matrix Factorization"". We explore a new model of Boolean matrix factorizations where a clustering and a discriminative model of a labelled, binary dataset is simultaneously optimized.<LINK> <LINK> The starting point was the question which genomic characteristics unify a group of patients of a specific tumor type, and which genomic characteristics discriminate samples from a normal, a tumor and a relapse cell. We found that state-of-the-art model assumptions did not apply. We explored a novel, more complex factorization structure where every bi-cluster has an additional feature cluster for every class. Therewith, we discriminate the samples from one cluster into the classes. The PAL-Tiling framework from our first paper is used for optimization. As a result, we proposed the first method which enabled to identify clusters together with discriminating components, and this in the most flexible way. C-Salt does not need any specifications of the number of clusters and/or discriminating components, opposed to former methods.",https://arxiv.org/abs/1906.09907,"Given labeled data represented by a binary matrix, we consider the task to derive a Boolean matrix factorization which identifies commonalities and specifications among the classes. While existing works focus on rank-one factorizations which are either specific or common to the classes, we derive class-specific alterations from common factorizations as well. Therewith, we broaden the applicability of our new method to datasets whose class-dependencies have a more complex structure. On the basis of synthetic and real-world datasets, we show on the one hand that our method is able to filter structure which corresponds to our model assumption, and on the other hand that our model assumption is justified in real-world application. Our method is parameter-free. ","C-SALT: Mining Class-Specific ALTerations in Boolean Matrix Factorization",4,"['My second paper is ""C-SALT: Mining Class-Specific ALTerations in Boolean Matrix Factorization"". We explore a new model of Boolean matrix factorizations where a clustering and a discriminative model of a labelled, binary dataset is simultaneously optimized.<LINK> <LINK>', 'The starting point was the question which genomic characteristics unify a group of patients of a specific tumor type, and which genomic characteristics discriminate samples from a normal, a tumor and a relapse cell. We found that state-of-the-art model assumptions did not apply.', 'We explored a novel, more complex factorization structure where every bi-cluster has an additional feature cluster for every class. Therewith, we discriminate the samples from one cluster into the classes. The PAL-Tiling framework from our first paper is used for optimization.', 'As a result, we proposed the first method which enabled to identify clusters together with discriminating components, and this in the most flexible way. C-Salt does not need any specifications of the number of clusters and/or discriminating components, opposed to former methods.']",19,06,1107 |
3926,195,1506593717616058371,1231755108234547202,Ariel Werle,"This is one of the post-starburst galaxies we have studied in our last paper (<LINK>) and I think it deserves a couple of tweets. <LINK> The yellow-blue color map in the image above traces how much stars were recently formed in the galaxy disk, while the red region indicates a tail of ionized gas. Note that there is a stellar population gradient aligned with the direction of the tail! This galaxy just had its gas removed due to the ram-pressure of the intracluster medium (around 60 million years ago if our math is right), and we were lucky enough to catch it in a stage when the removed gas is still visible. We interpret this object as a ""missing link"" between post-starburst and jellyfish galaxies.",https://arxiv.org/abs/2203.08862,"We present results from MUSE spatially-resolved spectroscopy of 21 post-starburst galaxies in the centers of 8 clusters from $z\sim0.3$ to $z\sim0.4$. We measure spatially resolved star-formation histories (SFHs), the time since quenching ($t_Q$) and the fraction of stellar mass assembled in the past 1.5 Gyr ($\mu_{1.5}$). The SFHs display a clear enhancement of star-formation prior to quenching for 16 out of 21 objects, with at least 10% (and up to $>50$%) of the stellar mass being assembled in the past 1.5 Gyr and $t_Q$ ranging from less than 100 Myrs to $\sim800$ Myrs. By mapping $t_Q$ and $\mu_{1.5}$, we analyze the quenching patterns of the galaxies. Most galaxies in our sample have quenched their star-formation from the outside-in or show a side-to-side/irregular pattern, both consistent with quenching by ram-pressure stripping. Only three objects show an inside-out quenching pattern, all of which are at the high-mass end of our sample. At least two of them currently host an active galactic nucleus. In two post-starbursts, we identify tails of ionized gas indicating that these objects had their gas stripped by ram pressure very recently. Post-starburst features are also found in the stripped regions of galaxies undergoing ram-pressure stripping in the same clusters, confirming the link between these classes of objects. Our results point to ram-pressure stripping as the main driver of fast quenching in these environments, with active galactic nuclei playing a role at high stellar masses. ",Post-starburst galaxies in the centers of intermediate redshift clusters,4,"['This is one of the post-starburst galaxies we have studied in our last paper (<LINK>) and I think it deserves a couple of tweets. <LINK>', 'The yellow-blue color map in the image above traces how much stars were recently formed in the galaxy disk, while the red region indicates a tail of ionized gas. Note that there is a stellar population gradient aligned with the direction of the tail!', 'This galaxy just had its gas removed due to the ram-pressure of the intracluster medium (around 60 million years ago if our math is right), and we were lucky enough to catch it in a stage when the removed gas is still visible.', 'We interpret this object as a ""missing link"" between post-starburst and jellyfish galaxies.']",22,03,706 |
3927,107,1392301806739345408,1191056593476915200,Ray Bai,"New preprint w/ @BalocchiCecilia, @MaryRBoland, @spcanelon, @chenyong1203, Ed George, & Jessica Liu! ""A Bayesian Hierarchical Modeling Framework for Geospatial Analysis of Adverse Pregnancy Outcomes."" Read our paper here: <LINK> <LINK> In this work, we develop geospatial mixed effects logistic regression models for adverse pregnancy outcomes that account for spatial autocorrelation and heterogeneity between neighborhoods. 1/2 We identify several informative patient-level and neighborhood-level covariates for stillbirth and preterm birth. We identify the neighborhoods in the city of Philadelphia at greatest risk of these adverse pregnancy outcomes. 2/2",https://arxiv.org/abs/2105.04981,"Studying the determinants of adverse pregnancy outcomes like stillbirth and preterm birth is of considerable interest in epidemiology. Understanding the role of both individual and community risk factors for these outcomes is crucial for planning appropriate clinical and public health interventions. With this goal, we develop geospatial mixed effects logistic regression models for adverse pregnancy outcomes. Our models account for both spatial autocorrelation and heterogeneity between neighborhoods. To mitigate the low incidence of stillbirth and preterm births in our data, we explore using class rebalancing techniques to improve predictive power. To assess the informative value of the covariates in our models, we use posterior distributions of their coefficients to gauge how well they can be distinguished from zero. As a case study, we model stillbirth and preterm birth in the city of Philadelphia, incorporating both patient-level data from electronic health records (EHR) data and publicly available neighborhood data at the census tract level. We find that patient-level features like self-identified race and ethnicity were highly informative for both outcomes. Neighborhood-level factors were also informative, with poverty important for stillbirth and crime important for preterm birth. Finally, we identify the neighborhoods in Philadelphia at highest risk of stillbirth and preterm birth. ","A Bayesian Hierarchical Modeling Framework for Geospatial Analysis of Adverse Pregnancy Outcomes",3,"['New preprint w/ @BalocchiCecilia, @MaryRBoland, @spcanelon, @chenyong1203, Ed George, & Jessica Liu! ""A Bayesian Hierarchical Modeling Framework for Geospatial Analysis of Adverse Pregnancy Outcomes."" Read our paper here: <LINK> <LINK>', 'In this work, we develop geospatial mixed effects logistic regression models for adverse pregnancy outcomes that account for spatial autocorrelation and heterogeneity between neighborhoods. 1/2', 'We identify several informative patient-level and neighborhood-level covariates for stillbirth and preterm birth. We identify the neighborhoods in the city of Philadelphia at greatest risk of these adverse pregnancy outcomes. 2/2']",21,05,659 |
3928,74,1014691455422455808,726837554000084993,Jeremy Bailin,"New paper: A Model for Clumpy Self-Enrichment in Globular Clusters. <LINK> Ultrasummary: If GCs formed with subclumps like in observed + simulated star formation regions, supernova enrichment between clumps results in observed metallicity spreads within clusters. Code is at <LINK>",https://arxiv.org/abs/1807.01447,"Detailed observations of globular clusters (GCs) have revealed evidence of self-enrichment: some of the heavy elements that we see in stars today were produced by cluster stars themselves. Moreover, GCs have internal subpopulations with different elemental abundances, including, in some cases, in elements such as iron that are produced by supernovae. This paper presents a theoretical model for GC formation motivated by observations of Milky Way star forming regions and simulations of star formation, where giant molecular clouds fragment into multiple clumps which undergo star formation at slightly different times. Core collapse supernovae from earlier-forming clumps can enrich later-forming clumps to the degree that the ejecta can be retained within the gravitational potential well, resulting in subpopulations with different total metallicities once the clumps merge to form the final cluster. The model matches the mass-metallicity relation seen in GC populations around massive elliptical galaxies, and predicts metallicity spreads within clusters in excellent agreement with those seen in Milky Way GCs, even for those whose internal abundance spreads are so large that their entire identity as a GC is in question. The internal metallicity spread serves as an excellent measurement of how much self-enrichment has occurred in a cluster, a result that is very robust to variation in the model parameters. ",A Model for Clumpy Self-Enrichment in Globular Clusters,2,"['New paper: A Model for Clumpy Self-Enrichment in Globular Clusters.\n<LINK>\nUltrasummary: If GCs formed with subclumps like in observed + simulated star formation regions, supernova enrichment between clumps results in observed metallicity spreads within clusters.', 'Code is at https://t.co/gzKugNzMe4']",18,07,281 |
3929,192,1468533684450041857,1159570181401845761,Stefan Prohazka,"<LINK> The new work ""Carrollian and celestial spaces at infinity"" together with José, Emil and Jakob just appeared on the arXiv. To better understand holography we study low dimensional homogeneous spaces of Poincaré. #Physics #arXiv This homog. spaces are connected to (blown-up) asymptotic infinities (Spi, Ti, Ni, scri). To some extend this mirrors this aspect of (A)dS/CFT, but (for good reasons as we explain) it is more subtle. I find it amazing how much ground can be covered by basically looking merely at the Poincaré group and its homogenous spaces, invariants, etc. For a safe pronunciation of Ni see footnote 2 and <LINK> ;).",https://arxiv.org/abs/2112.03319,"We show that the geometry of the asymptotic infinities of Minkowski spacetime (in $d+1$ dimensions) is captured by homogeneous spaces of the Poincar\'e group: the blow-ups of spatial (Spi) and timelike (Ti) infinities in the sense of Ashtekar--Hansen and a novel space Ni fibering over $\mathscr{I}$. We embed these spaces \`a la Penrose--Rindler into a pseudo-euclidean space of signature $(d+1,2)$ as orbits of the same Poincar\'e subgroup of O$(d+1,2)$. We describe the corresponding Klein pairs and determine their Poincar\'e-invariant structures: a carrollian structure on Ti, a pseudo-carrollian structure on Spi and a ""doubly-carrollian"" structure on Ni. We give additional geometric characterisations of these spaces as grassmannians of affine hyperplanes in Minkowski spacetime: Spi is the (double cover of the) grassmannian of affine lorentzian hyperplanes; Ti is the grassmannian of affine spacelike hyperplanes and Ni fibers over the grassmannian of affine null planes, which is $\mathscr{I}$. We exhibit Ni as the fibred product of $\mathscr{I}$ and the lightcone over the celestial sphere. We also show that Ni is the total space of the bundle of scales of the conformal carrollian structure on $\mathscr{I}$ and show that the symmetry algebra of its doubly-carrollian structure is isomorphic to the symmetry algebra of the conformal carrollian structure on $\mathscr{I}$; that is, the BMS algebra. We show how to reconstruct Minkowski spacetime from any of its asymptotic geometries, by establishing that points in Minkowski spacetime parametrise certain lightcone cuts in the asymptotic geometries. We include an appendix comparing with (A)dS and observe that the de Sitter groups have no homogeneous spaces which could play the r\^ole that the celestial sphere plays in flat space holography. ",Carrollian and celestial spaces at infinity,3,"['<LINK> The new work ""Carrollian and celestial spaces at infinity"" together with José, Emil and Jakob just appeared on the arXiv. To better understand holography we study low dimensional homogeneous spaces of Poincaré. #Physics #arXiv', 'This homog. spaces are connected to (blown-up) asymptotic infinities (Spi, Ti, Ni, scri). To some extend this mirrors this aspect of (A)dS/CFT, but (for good reasons as we explain) it is more subtle.', 'I find it amazing how much ground can be covered by basically looking merely at the Poincaré group and its homogenous spaces, invariants, etc. For a safe pronunciation of Ni see footnote 2 and https://t.co/DD3jX6H5Le ;).']",21,12,637 |
3930,63,971332313572311040,59413748,Reuben Binns,"We often criticize tech platforms for using crude behavioral metrics, but what should we replace them with? New paper (probably weirdest I've written), asks 'What do users really want?' (w/ @ulyngs, @emax @Nigel_Shadbolt for #chi2018 #altchi <LINK> @tavmos @ulyngs @emax @Nigel_Shadbolt it was a busy winter!",https://arxiv.org/abs/1803.02065,"Equating users' true needs and desires with behavioural measures of 'engagement' is problematic. However, good metrics of 'true preferences' are difficult to define, as cognitive biases make people's preferences change with context and exhibit inconsistencies over time. Yet, HCI research often glosses over the philosophical and theoretical depth of what it means to infer what users really want. In this paper, we present an alternative yet very real discussion of this issue, via a fictive dialogue between senior executives in a tech company aimed at helping people live the life they `really' want to live. How will the designers settle on a metric for their product to optimise? ","""So, Tell Me What Users Want, What They Really, Really Want!""",2,"[""We often criticize tech platforms for using crude behavioral metrics, but what should we replace them with?\n\nNew paper (probably weirdest I've written), asks 'What do users really want?'\n\n(w/ @ulyngs, @emax @Nigel_Shadbolt for #chi2018 #altchi \n\n<LINK>"", '@tavmos @ulyngs @emax @Nigel_Shadbolt it was a busy winter!']",18,03,309 |
3931,273,1410648136016560129,1285209259701989377,Spandan Madan,"New preprint! We show that CNNs and Transformers are brittle to small changes in 3D perspective and lighting. We propose an evolution strategies (ES) based search method for finding failures within training distribution! 1/5 pdf: <LINK> <LINK> Brittleness of NNs to shifts/rotations/color changes is attributed to biased training data and lack of shift-invariance. We challenge this hypothesis and test classification under camera and lighting variations while ensuring - unbiased training and shift invariant architectures. To investigate brittleness we propose an ES based approach which searches for failures within the training distribution (CMA-Search). Starting with correctly classified image, our method searches for failures in vicinity of camera and light params. 3/5 <LINK> Our method finds errors in vicinity of correctly classified images for 71% cases with < 3.6% change in camera. For transformers, its far worse---only 15% survived CMA-Search. 4/5 <LINK> Shift invariant architectures were robust to 2D shifts, but not 3D changes. Data augmentation, unbiased datasets, and shift-invariant architectures help, but new architectures need to be invariant to a super-set of 2D shifts—3D perspective changes and lighting. 5/5 This is joint work with Tomotake Sasaki, @tzumaoli, Xavier Boix and @hpfister!",https://arxiv.org/abs/2106.16198,"Neural networks are susceptible to small transformations including 2D rotations and shifts, image crops, and even changes in object colors. This is often attributed to biases in the training dataset, and the lack of 2D shift-invariance due to not respecting the sampling theorem. In this paper, we challenge this hypothesis by training and testing on unbiased datasets, and showing that networks are brittle to both small 3D perspective changes and lighting variations which cannot be explained by dataset bias or lack of shift-invariance. To find these in-distribution errors, we introduce an evolution strategies (ES) based approach, which we call CMA-Search. Despite training with a large-scale (0.5 million images), unbiased dataset of camera and light variations, in over 71% cases CMA-Search can find camera parameters in the vicinity of a correctly classified image which lead to in-distribution misclassifications with < 3.6% change in parameters. With lighting changes, CMA-Search finds misclassifications in 33% cases with < 11.6% change in parameters. Finally, we extend this method to find misclassifications in the vicinity of ImageNet images for both ResNet and OpenAI's CLIP model. ","Small in-distribution changes in 3D perspective and lighting fool both CNNs and Transformers",6,"['New preprint! We show that CNNs and Transformers are brittle to small changes in 3D perspective and lighting. We propose an evolution strategies (ES) based search method for finding failures within training distribution! 1/5\n\npdf: <LINK> <LINK>', 'Brittleness of NNs to shifts/rotations/color changes is attributed to biased training data and lack of shift-invariance. We challenge this hypothesis and test classification under camera and lighting variations while ensuring - unbiased training and shift invariant architectures.', 'To investigate brittleness we propose an ES based approach which searches for failures within the training distribution (CMA-Search). Starting with correctly classified image, our method searches for failures in vicinity of camera and light params. 3/5 https://t.co/19AwVZb64v', 'Our method finds errors in vicinity of correctly classified images for 71% cases with < 3.6% change in camera. For transformers, its far worse---only 15% survived CMA-Search. 4/5 https://t.co/t2rKBKv8Rn', 'Shift invariant architectures were robust to 2D shifts, but not 3D changes. Data augmentation, unbiased datasets, and shift-invariant architectures help, but new architectures need to be invariant to a super-set of 2D shifts—3D perspective changes and lighting. 5/5', 'This is joint work with Tomotake Sasaki, @tzumaoli, Xavier Boix and @hpfister!']",21,06,1318 |
3932,165,1390127257478434822,104529881,Diogo Souto,"New paper on @arXiver! In this work we used the @APOGEEsurvey spectra to determine metallicities for a sample of FGKM dwarfs stars from the Coma Berenices open cluster. This is the first APOGEE study determining detailed metallicities of Mdwarfs from an OC <LINK> <LINK> We also observe the signature of atomic diffusion operating in the warmer stars from the cluster, as can be seen in the Figure above. Enjoy the reading!",https://arxiv.org/abs/2105.01667,"We present a study of metallicities in a sample of main sequence stars with spectral types M, K, G and F ($T_{\rm eff}$ $\sim$ 3200 -- 6500K and log $g$ $\sim$ 4.3 -- 5.0 dex) belonging to the solar neighborhood young open cluster Coma Berenices. Metallicities were determined using the high-resolution (R=$\lambda$/$\Delta$ $\lambda$ $\sim$ 22,500) NIR spectra ($\lambda$1.51 -- $\lambda$1.69 $\mu$m) of the SDSS-IV APOGEE survey. Membership to the cluster was confirmed using previous studies in the literature along with APOGEE radial velocities and Gaia DR2. An LTE analysis using plane-parallel MARCS model atmospheres and the APOGEE DR16 line list was adopted to compute synthetic spectra and derive atmospheric parameters ($T_{\rm eff}$ and log $g$) for the M dwarfs and metallicities for the sample. The derived metallicities are near solar and are homogeneous at the level of the expected uncertainties, in particular when considering stars from a given stellar class. The mean metallicity computed for the sample of G, K, and M dwarfs is $\langle$[Fe/H]$\rangle$ = +0.04 $\pm$ 0.02 dex; however, the metallicities of the F-type stars are slightly lower, by about 0.04 dex, when compared to cooler and less massive members. Models of atomic diffusion can explain this modest abundance dip for the F dwarfs, indicating that atomic diffusion operates in Coma Berenices stars. The [Fe/H] dip occurs in nearly the same effective temperature range as that found in previous analyses of the lithium and beryllium abundances in Coma Berenices. ","A metallicity study of F, G, K and M dwarfs in the Coma Berenices open cluster from the APOGEE survey",2,"['New paper on @arXiver!\nIn this work we used the @APOGEEsurvey spectra to determine metallicities for a sample of FGKM dwarfs stars from the Coma Berenices open cluster. This is the first APOGEE study determining detailed metallicities of Mdwarfs from an OC <LINK> <LINK>', 'We also observe the signature of atomic diffusion operating in the warmer stars from the cluster, as can be seen in the Figure above. Enjoy the reading!']",21,05,423 |
3933,60,1329420894083579906,502864087,Alham Fikri Aji,"Introducing a new informal-formal Indonesian parallel corpus. We collect informal texts from Twitter, which are then annotated into the formal form. We also explore seq2seq techs for informal-formal style transfer. data: <LINK> paper: <LINK> <LINK> This research was carried out by the @KataDotAI research team, in collaboration with a researcher and interns from UI and Binus. Great work @haryoaw and team.",https://arxiv.org/abs/2011.03286v1,"In its daily use, the Indonesian language is riddled with informality, that is, deviations from the standard in terms of vocabulary, spelling, and word order. On the other hand, current available Indonesian NLP models are typically developed with the standard Indonesian in mind. In this work, we address a style-transfer from informal to formal Indonesian as a low-resource machine translation problem. We build a new dataset of parallel sentences of informal Indonesian and its formal counterpart. We benchmark several strategies to perform style transfer from informal to formal Indonesian. We also explore augmenting the training set with artificial forward-translated data. Since we are dealing with an extremely low-resource setting, we find that a phrase-based machine translation approach outperforms the Transformer-based approach. Alternatively, a pre-trained GPT-2 fined-tuned to this task performed equally well but costs more computational resource. Our findings show a promising step towards leveraging machine translation models for style transfer. Our code and data are available in this https URL ","Semi-Supervised Low-Resource Style Transfer of Indonesian Informal to Formal Language with Iterative Forward-Translation",2,"['Introducing a new informal-formal Indonesian parallel corpus. We collect informal texts from Twitter, which are then annotated into the formal form. We also explore seq2seq techs for informal-formal style transfer.\n\ndata: <LINK>\npaper: <LINK> <LINK>', 'This research was carried out by the @KataDotAI research team, in collaboration with a researcher and interns from UI and Binus. Great work @haryoaw and team.']",20,11,407 |
3934,243,1278623624874930176,1360175358,Michele Starnini,"Last out! @electionstudies survey data shows that extreme opinions wrt different topics can be correlated. We propose a model where these polarized ideological opinions emerge, without assuming apriori such correlations or preexisting social structures 1/3 <LINK> <LINK> In our model, opinions evolve in a multidimensional space, driven by dynamical, homophilic social interactions. Topics form a non-orthogonal basis of the space, ie they can overlap. Ideological states emerge even between rather unrelated but sufficiently controversial topics. 2/3 <LINK> The phase transition between consensus, opinion polarization, and ideological states can be analytically characterized as a function of the controversialness and overlap of the topics. Thanks to F. Baumann, @philipplenz6 and I. Sokolov. Comments welcome! 3/3 <LINK> @dgarcia_eu @electionstudies Sure, we read and cited your very interesting paper! Here instead we assume that opinion formation is driven by homophilic social interactions (based on activity-driven dynamics). This also leads to a social network segregated according to different opinions.",https://arxiv.org/abs/2007.00601,"Opinion polarization is on the rise, causing concerns for the openness of public debates. Additionally, extreme opinions on different topics often show significant correlations. The dynamics leading to these polarized ideological opinions pose a challenge: How can such correlations emerge, without assuming them a priori in the individual preferences or in a preexisting social structure? Here we propose a simple model that qualitatively reproduces ideological opinion states found in survey data, even between rather unrelated, but sufficiently controversial, topics. Inspired by skew coordinate systems recently proposed in natural language processing models, we solidify these intuitions in a formalism of opinions unfolding in a multidimensional space where topics form a non-orthogonal basis. Opinions evolve according to the social interactions among the agents, which are ruled by homophily: two agents sharing similar opinions are more likely to interact. The model features phase transitions between a global consensus, opinion polarization, and ideological states. Interestingly, the ideological phase emerges by relaxing the assumption of an orthogonal basis of the topic space, i.e. if topics thematically overlap. Furthermore, we analytically and numerically show that these transitions are driven by the controversialness of the topics discussed, the more controversial the topics, the more likely are opinion to be correlated. Our findings shed light upon the mechanisms driving the emergence of ideology in the formation of opinions. ","Emergence of polarized ideological opinions in multidimensional topic spaces",4,"['Last out! @electionstudies survey data shows that extreme opinions wrt different topics can be correlated. We propose a model where these polarized ideological opinions emerge, without assuming apriori such correlations or preexisting social structures 1/3\n<LINK> <LINK>', 'In our model, opinions evolve in a multidimensional space, driven by dynamical, homophilic social interactions. Topics form a non-orthogonal basis of the space, ie they can overlap. Ideological states emerge even between rather unrelated but sufficiently controversial topics. 2/3 https://t.co/jmG6fz7jaD', 'The phase transition between consensus, opinion polarization, and ideological states can be analytically characterized as a function of the controversialness and overlap of the topics. Thanks to F. Baumann, @philipplenz6 and I. Sokolov. Comments welcome! 3/3 https://t.co/NMR4k50jpU', '@dgarcia_eu @electionstudies Sure, we read and cited your very interesting paper! Here instead we assume that opinion formation is driven by homophilic social interactions (based on activity-driven dynamics). This also leads to a social network segregated according to different opinions.']",20,07,1113 |
3935,213,1513771265471172617,1352638067761311744,Joseph Imperial,"Say you're writing a story for a Grade 2 learner, how do you ensure that what you'll write next will still be readable by the student? Can you maintain the text's complexity throughout? We propose the task of Uniform Complexity for Text Generation. <LINK> We investigate if humans and neural language models (GPT-2) trained to produce coherent stories can *maintain* the linguistic complexities of generated continuations with respect to the prompts. We explored over 160 features. Turns out, both generally struggle in this task. This project is supported by my @GoogleAI and @TensorFlow grant. Also big thanks to @Alexir563 for helping me with the GPT-2 models 💯 @mrinmayasachan @GoogleAI @TensorFlow @Alexir563 I was about to email you the link 😅",https://arxiv.org/abs/2204.05185,"Powerful language models such as GPT-2 have shown promising results in tasks such as narrative generation which can be useful in an educational setup. These models, however, should be consistent with the linguistic properties of triggers used. For example, if the reading level of an input text prompt is appropriate for low-leveled learners (ex. A2 in the CEFR), then the generated continuation should also assume this particular level. Thus, we propose the task of uniform complexity for text generation which serves as a call to make existing language generators uniformly complex with respect to prompts used. Our study surveyed over 160 linguistic properties for evaluating text complexity and found out that both humans and GPT-2 models struggle in preserving the complexity of prompts in a narrative generation setting. ",Uniform Complexity for Text Generation,4,"[""Say you're writing a story for a Grade 2 learner, how do you ensure that what you'll write next will still be readable by the student? Can you maintain the text's complexity throughout?\n\nWe propose the task of Uniform Complexity for Text Generation.\n\n<LINK>"", 'We investigate if humans and neural language models (GPT-2) trained to produce coherent stories can *maintain* the linguistic complexities of generated continuations with respect to the prompts. \n\nWe explored over 160 features. Turns out, both generally struggle in this task.', 'This project is supported by my @GoogleAI and @TensorFlow grant. Also big thanks to @Alexir563 for helping me with the GPT-2 models 💯', '@mrinmayasachan @GoogleAI @TensorFlow @Alexir563 I was about to email you the link 😅']",22,04,750 |
3936,80,1171449086743990273,1033928440645382149,Ben Zhou,"Check out our new #emnlp2019 paper where we studied temporal commonsense: <LINK>. We collected a QA dataset MC-TACO🌮(leaderboard coming soon) and showed that it's a new challenge to existing systems. Co-author with @DanielKhashabi, Qiang Ning and Dan Roth.",https://arxiv.org/abs/1909.03065,"Understanding time is crucial for understanding events expressed in natural language. Because people rarely say the obvious, it is often necessary to have commonsense knowledge about various temporal aspects of events, such as duration, frequency, and temporal order. However, this important problem has so far received limited attention. This paper systematically studies this temporal commonsense problem. Specifically, we define five classes of temporal commonsense, and use crowdsourcing to develop a new dataset, MCTACO, that serves as a test set for this task. We find that the best current methods used on MCTACO are still far behind human performance, by about 20%, and discuss several directions for improvement. We hope that the new dataset and our study here can foster more future research on this topic. ","""Going on a vacation"" takes longer than ""Going for a walk"": A Study of Temporal Commonsense Understanding",1,"[""Check out our new #emnlp2019 paper where we studied temporal commonsense: <LINK>. We collected a QA dataset MC-TACO🌮(leaderboard coming soon) and showed that it's a new challenge to existing systems. Co-author with @DanielKhashabi, Qiang Ning and Dan Roth.""]",19,09,256 |
3937,147,1367920278605598727,429863384,Sarath Chandar,"Are you tired of manually creating new tasks for Lifelong RL? We introduce Lifelong Hanabi in which every task is coordinating with a partner that's an expert player of Hanabi. Work led by @HadiNekoei and @akileshbadri. paper: <LINK> @Mila_Quebec 1/n <LINK> The large strategy space of Hanabi facilitates generating a diverse set of partners ideal for designing LLL tasks. The Cross-play matrix shows how these partners (tasks) are related to each other, a feature not commonly found in existing LLL benchmarks. 2/n <LINK> We also show that a simple IQL agent trained continually in our setup can coordinate well with unseen agents in Hanabi, without having access to its partner's policies or symmetries in the game. 3/n Working on a new LLL algorithm and want to benchmark its effectiveness? Make sure to try it on Lifelong Hanabi! Code to reproduce all experiments, pre-trained partners available <LINK>. Reach out to us for more questions! n/n",https://arxiv.org/abs/2103.03216,"Current deep reinforcement learning (RL) algorithms are still highly task-specific and lack the ability to generalize to new environments. Lifelong learning (LLL), however, aims at solving multiple tasks sequentially by efficiently transferring and using knowledge between tasks. Despite a surge of interest in lifelong RL in recent years, the lack of a realistic testbed makes robust evaluation of LLL algorithms difficult. Multi-agent RL (MARL), on the other hand, can be seen as a natural scenario for lifelong RL due to its inherent non-stationarity, since the agents' policies change over time. In this work, we introduce a multi-agent lifelong learning testbed that supports both zero-shot and few-shot settings. Our setup is based on Hanabi -- a partially-observable, fully cooperative multi-agent game that has been shown to be challenging for zero-shot coordination. Its large strategy space makes it a desirable environment for lifelong RL tasks. We evaluate several recent MARL methods, and benchmark state-of-the-art LLL algorithms in limited memory and computation regimes to shed light on their strengths and weaknesses. This continual learning paradigm also provides us with a pragmatic way of going beyond centralized training which is the most commonly used training protocol in MARL. We empirically show that the agents trained in our setup are able to coordinate well with unseen agents, without any additional assumptions made by previous works. The code and all pre-trained models are available at this https URL ","Continuous Coordination As a Realistic Scenario for Lifelong Learning",4,"['Are you tired of manually creating new tasks for Lifelong RL? We introduce Lifelong Hanabi in which every task is coordinating with a partner that's an expert player of Hanabi. Work led by @HadiNekoei and @akileshbadri.\npaper: <LINK> @Mila_Quebec 1/n <LINK>', 'The large strategy space of Hanabi facilitates generating a diverse set of partners ideal for designing LLL tasks. The Cross-play matrix shows how these partners (tasks) are related to each other, a feature not commonly found in existing LLL benchmarks. 2/n https://t.co/xyKqY50TaU', ""We also show that a simple IQL agent trained continually in our setup can coordinate well with unseen agents in Hanabi, without having access to its partner's policies or symmetries in the game. 3/n"", 'Working on a new LLL algorithm and want to benchmark its effectiveness? Make sure to try it on Lifelong Hanabi! Code to reproduce all experiments, pre-trained partners available https://t.co/9bbuO3F4c7. Reach out to us for more questions! n/n']",21,03,947 |
3938,76,1382953793134923781,507704346,Isabelle Augenstein,New #NLProc paper: quantifying gender bias towards politicians in X-ling language models tl;dr: 🗣️gender bias is highly lang-dependent 🤔larger models are not significantly more gender-biased <LINK> #NLProc @karstanczak @sagnikrayc @tpimentelms @ryandcotterell <LINK>,https://arxiv.org/abs/2104.07505,"While the prevalence of large pre-trained language models has led to significant improvements in the performance of NLP systems, recent research has demonstrated that these models inherit societal biases extant in natural language. In this paper, we explore a simple method to probe pre-trained language models for gender bias, which we use to effect a multi-lingual study of gender bias towards politicians. We construct a dataset of 250k politicians from most countries in the world and quantify adjective and verb usage around those politicians' names as a function of their gender. We conduct our study in 7 languages across 6 different language modeling architectures. Our results demonstrate that stance towards politicians in pre-trained language models is highly dependent on the language used. Finally, contrary to previous findings, our study suggests that larger language models do not tend to be significantly more gender-biased than smaller ones. ","Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models",1,['New #NLProc paper: quantifying gender bias towards politicians in X-ling language models\n\ntl;dr:\n🗣️gender bias is highly lang-dependent\n🤔larger models are not significantly more gender-biased\n\n<LINK>\n#NLProc @karstanczak @sagnikrayc @tpimentelms @ryandcotterell <LINK>'],21,04,266 |
3939,109,1318821901217705989,4878011,Matthias Rosenkranz ✨,"New paper out 🎉 We introduce variational quantum-classical Wasserstein GANs with gradient penalty and embed them in a classical framework for anomaly detection. With Daniel Herr and Benjamin Obert. <LINK> <LINK> Tests on a credit card dataset show F1 scores competitive with the fully classical architecture. The classical upscaling in the generator allows us to use all features of the dataset. <LINK> Amongst other things we check the training convergence for finite measurement shots. It's a little slower but remains OK for 100 samples only <LINK> Big thanks to collaborators Daniel Herr and Benjamin Obert. 🙏 Oh, here's Scirate :) <LINK> @jj_xyz Thanks, Johannes",https://arxiv.org/abs/2010.10492,"Generative adversarial networks (GANs) are a machine learning framework comprising a generative model for sampling from a target distribution and a discriminative model for evaluating the proximity of a sample to the target distribution. GANs exhibit strong performance in imaging or anomaly detection. However, they suffer from training instabilities, and sampling efficiency may be limited by the classical sampling procedure. We introduce variational quantum-classical Wasserstein GANs to address these issues and embed this model in a classical machine learning framework for anomaly detection. Classical Wasserstein GANs improve training stability by using a cost function better suited for gradient descent. Our model replaces the generator of Wasserstein GANs with a hybrid quantum-classical neural net and leaves the classical discriminative model unchanged. This way, high-dimensional classical data only enters the classical model and need not be prepared in a quantum circuit. We demonstrate the effectiveness of this method on a credit card fraud dataset. For this dataset our method shows performance on par with classical methods in terms of the $F_1$ score. We analyze the influence of the circuit ansatz, layer width and depth, neural net architecture parameter initialization strategy, and sampling noise on convergence and performance. ","Anomaly detection with variational quantum generative adversarial networks",6,"['New paper out 🎉\n\nWe introduce variational quantum-classical Wasserstein GANs with gradient penalty and embed them in a classical framework for anomaly detection.\n\nWith Daniel Herr and Benjamin Obert.\n<LINK> <LINK>', 'Tests on a credit card dataset show F1 scores competitive with the fully classical architecture. The classical upscaling in the generator allows us to use all features of the dataset. https://t.co/Zq6fq8hR1K', ""Amongst other things we check the training convergence for finite measurement shots. It's a little slower but remains OK for 100 samples only https://t.co/oajNwXL3FS"", 'Big thanks to collaborators Daniel Herr and Benjamin Obert. 🙏', ""Oh, here's Scirate :) \nhttps://t.co/FaamaytCm3"", '@jj_xyz Thanks, Johannes']",20,10,667 |
3940,252,1270374761055637520,938463903754862593,George Papamakarios,"New work: The Lipschitz constant of self-attention We show that dot-product self-attention is not Lipschitz and propose an alternative that is. Useful when Lipschitz constraints are needed, e.g. for invertible ResNets. <LINK> Work led by @hyunjik11, details 👇 <LINK>",http://arxiv.org/abs/2006.04710,"Lipschitz constants of neural networks have been explored in various contexts in deep learning, such as provable adversarial robustness, estimating Wasserstein distance, stabilising training of GANs, and formulating invertible neural networks. Such works have focused on bounding the Lipschitz constant of fully connected or convolutional networks, composed of linear maps and pointwise non-linearities. In this paper, we investigate the Lipschitz constant of self-attention, a non-linear neural network module widely used in sequence modelling. We prove that the standard dot-product self-attention is not Lipschitz for unbounded input domain, and propose an alternative L2 self-attention that is Lipschitz. We derive an upper bound on the Lipschitz constant of L2 self-attention and provide empirical evidence for its asymptotic tightness. To demonstrate the practical relevance of our theoretical work, we formulate invertible self-attention and use it in a Transformer-based architecture for a character-level language modelling task. ",The Lipschitz Constant of Self-Attention,1,"['New work: The Lipschitz constant of self-attention\n\nWe show that dot-product self-attention is not Lipschitz and propose an alternative that is. Useful when Lipschitz constraints are needed, e.g. for invertible ResNets.\n\n<LINK>\n\nWork led by @hyunjik11, details 👇 <LINK>']",20,06,266 |
3941,182,1383049651792842752,73090248,Prof. Danushka Bollegala,"We propose a novel social bias evaluation measure, All Unmasked Likelihood (AUL), for evaluating masked language model biases. <LINK> w/ @MasahiroKaneko_ We show that AUL is better suited for evaluating MLM biases using CrowsPairs and StereoSet datasets (1/n) AUL differ from prior proposals which mask modified or unmodified (<LINK>, <LINK>) tokens in example sentences in that it predicts all tokens in a sentence. This avoids freqency-related biases in psuedo log-likelihood computation. (2/n) We evaluate social biases in BERT, RoBERTa and ALBERT in the paper. (3/n) <LINK> We also propose a variant of AUL, which uses attention over tokens to emphasize salient tokens in a sentence (AULA). Both AUL and AULA report higher agreement with human bias ratings in the SeteroSet dataset. (4/n) <LINK> Would like to thank @sleepinyourhat and @sivareddyg research groups for creating the CrowsPairs and StereoSet datasets, without which this work would not be possible! (5/n=5)",https://arxiv.org/abs/2104.07496,"Masked Language Models (MLMs) have shown superior performances in numerous downstream NLP tasks when used as text encoders. Unfortunately, MLMs also demonstrate significantly worrying levels of social biases. We show that the previously proposed evaluation metrics for quantifying the social biases in MLMs are problematic due to following reasons: (1) prediction accuracy of the masked tokens itself tend to be low in some MLMs, which raises questions regarding the reliability of the evaluation metrics that use the (pseudo) likelihood of the predicted tokens, and (2) the correlation between the prediction accuracy of the mask and the performance in downstream NLP tasks is not taken into consideration, and (3) high frequency words in the training data are masked more often, introducing noise due to this selection bias in the test cases. To overcome the above-mentioned disfluencies, we propose All Unmasked Likelihood (AUL), a bias evaluation measure that predicts all tokens in a test case given the MLM embedding of the unmasked input. We find that AUL accurately detects different types of biases in MLMs. We also propose AUL with attention weights (AULA) to evaluate tokens based on their importance in a sentence. However, unlike AUL and AULA, previously proposed bias evaluation measures for MLMs systematically overestimate the measured biases, and are heavily influenced by the unmasked tokens in the context. ","Unmasking the Mask -- Evaluating Social Biases in Masked Language Models",5,"['We propose a novel social bias evaluation measure, All Unmasked Likelihood (AUL), for evaluating masked language model biases. <LINK> w/ @MasahiroKaneko_ We show that AUL is better suited for evaluating MLM biases using CrowsPairs and StereoSet datasets (1/n)', 'AUL differ from prior proposals which mask modified or unmodified (https://t.co/L5vswg3o17, https://t.co/23Dv8Jh4gZ) tokens in example sentences in that it predicts all tokens in a sentence. This avoids freqency-related biases in psuedo log-likelihood computation. (2/n)', 'We evaluate social biases in BERT, RoBERTa and ALBERT in the paper. (3/n) https://t.co/QNeS3I0gub', 'We also propose a variant of AUL, which uses attention over tokens to emphasize salient tokens in a sentence (AULA). Both AUL and AULA report higher agreement with human bias ratings in the SeteroSet dataset. (4/n) https://t.co/99JeswRIa9', 'Would like to thank @sleepinyourhat and @sivareddyg research groups for creating the CrowsPairs and StereoSet datasets, without which this work would not be possible! (5/n=5)']",21,04,975 |
3942,230,1253321924060602368,778246363175907328,Xinyi Wang (Cindy),"Multilingual training requires careful data sampling to handle imbalanced datasets. But how can we find the optimal data sampling strategy? Our #Acl2020 paper proposes MultiDDS, an algorithm that automatically learns to maximize accuracy on all languages: <LINK> <LINK> MultiDDS optimizes a distribution over training datasets by unweighting the language that has similar gradient with the gradient of all dev languages. Experiments show that it outperforms the popular heuristic data sampling schedule under a variety of multilingual NMT setting. <LINK> MultiDDS also allows flexible control over the performance of which languages are optimized. We can define different dev set aggregation methods to reflect the desired optimization priority, such as prioritizing low-performing languages. <LINK> Joint work with Yulia Tsvetkov and Graham Neubig(@gneubig)!",https://arxiv.org/abs/2004.06748,"When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others. Standard practice is to up-sample less resourced languages to increase representation, and the degree of up-sampling has a large effect on the overall performance. In this paper, we propose a method that instead automatically learns how to weight training data through a data scorer that is optimized to maximize performance on all test languages. Experiments on two sets of languages under both one-to-many and many-to-one MT settings show our method not only consistently outperforms heuristic baselines in terms of average performance, but also offers flexible control over the performance of which languages are optimized. ",Balancing Training for Multilingual Neural Machine Translation,4,"['Multilingual training requires careful data sampling to handle imbalanced datasets. But how can we find the optimal data sampling strategy? Our #Acl2020 paper proposes MultiDDS, an algorithm that automatically learns to maximize accuracy on all languages: <LINK> <LINK>', 'MultiDDS optimizes a distribution over training datasets by unweighting the language that has similar gradient with the gradient of all dev languages. Experiments show that it outperforms the popular heuristic data sampling schedule under a variety of multilingual NMT setting. https://t.co/hjSvVNtmfl', 'MultiDDS also allows flexible control over the performance of which languages are optimized. We can define different dev set aggregation methods to reflect the desired optimization priority, such as prioritizing low-performing languages. https://t.co/Ev6kTD8XmA', 'Joint work with Yulia Tsvetkov and Graham Neubig(@gneubig)!']",20,04,859 |
3943,52,1140793766124515328,326843207,Yuta Notsu,"Our new paper (ApJ in press), which I joined as a collaborator, are now in the arXiv ! ""Impact of Stellar Superflares on Planetary Habitability"" Yamashiki et al. <LINK> We used the Particle and Heavy Ion Transport code System [PHITS] code and evaluate the effects of stellar particle event caused by large superflares in the atmosphere of habitable planets.",https://arxiv.org/abs/1906.06797,"High-energy radiation caused by exoplanetary space weather events from planet-hosting stars can play a crucial role in conditions promoting or destroying habitability in addition to the conventional factors. In this paper, we present the first quantitative impact evaluation system of stellar flares on the habitability factors with an emphasis on the impact of Stellar Proton Events. We derive the maximum flare energy from stellar starspot sizes and examine the impacts of flare associated ionizing radiation on CO$_2$, H$_2$, N$_2$+O$_2$ --rich atmospheres of a number of well-characterized terrestrial type exoplanets. Our simulations based on the Particle and Heavy Ion Transport code System [PHITS] suggest that the estimated ground level dose for each planet in the case of terrestrial-level atmospheric pressure (1 bar) for each exoplanet does not exceed the critical dose for complex (multi-cellular) life to persist, even for the planetary surface of Proxima Centauri b, Ross-128 b and TRAPPIST-1 e. However, when we take into account the effects of the possible maximum flares from those host stars, the estimated dose reaches fatal levels at the terrestrial lowest atmospheric depth on TRAPPIST-1 e and Ross-128 b. Large fluxes of coronal XUV radiation from active stars induces high atmospheric escape rates from close-in exoplanets suggesting that the atmospheric depth can be substantially smaller than that on the Earth. In a scenario with the atmospheric thickness of 1/10 of Earth's, the radiation dose from close-in planets including Proxima Centauri b and TRAPPIST-1 e reach near fatal dose levels with annual frequency of flare occurrence from their hoststars. ",Impact of Stellar Superflares on Planetary Habitability,2,"['Our new paper (ApJ in press), which I joined as a collaborator, are now in the arXiv !\n\n""Impact of Stellar Superflares on Planetary Habitability""\nYamashiki et al. <LINK>', 'We used the Particle and Heavy Ion Transport code System [PHITS] code and evaluate the effects of stellar particle event caused by large superflares in the atmosphere of habitable planets.']",19,06,357 |
3944,47,999257699484389376,932805374,KordingLab 🦖,"The roles of machine learning in neuroscience: <LINK> new review paper with @joshuaiglaser. @arisbenjamin, @RoozbehFarhoodi Did we miss anything? @TheNeuralCoder @joshuaiglaser @arisbenjamin @RoozbehFarhoodi ok. We actually only cover supervised learning. Should have been more precise.",https://arxiv.org/abs/1805.08239,"Over the last several years, the use of machine learning (ML) in neuroscience has been rapidly increasing. Here, we review ML's contributions, both realized and potential, across several areas of systems neuroscience. We describe four primary roles of ML within neuroscience: 1) creating solutions to engineering problems, 2) identifying predictive variables, 3) setting benchmarks for simple models of the brain, and 4) serving itself as a model for the brain. The breadth and ease of its applicability suggests that machine learning should be in the toolbox of most systems neuroscientists. ",The Roles of Supervised Machine Learning in Systems Neuroscience,2,"['The roles of machine learning in neuroscience: <LINK> new review paper with @joshuaiglaser. @arisbenjamin, @RoozbehFarhoodi Did we miss anything?', '@TheNeuralCoder @joshuaiglaser @arisbenjamin @RoozbehFarhoodi ok. We actually only cover supervised learning. Should have been more precise.']",18,05,286 |
3945,225,1445913649042886656,1418313824109801475,Darryl Zachary Seligman,Have you ever wondered what the birth of a comet looks like up close? We could send an orbit matching spacecraft from Jupiter to a Centaur as it gets launched into the inner Solar System to find out! Paper is accepted @AAS_PSJ and talk @DPSMeeting <LINK> <LINK>,https://arxiv.org/abs/2110.02822,"The compositional and morphological evolution of minor bodies in the Solar System is primarily driven by the evolution of their heliocentric distances, as the level of incident solar radiation regulates cometary activity. We investigate the dynamical transfer of Centaurs into the inner Solar System, facilitated by mean motion resonances with Jupiter and Saturn. The recently discovered object, P/2019 LD2, will transition from the Centaur region to the inner Solar System in 2063. In order to contextualize LD2, we perform N-body simulations of a population of Centaurs and JFCs. Objects between Jupiter and Saturn with Tisserand parameter $T_J\sim$3 are transferred onto orbits with perihelia $q<4$au within the next 1000 years with notably high efficiency. Our simulations show that there may be additional LD2-like objects transitioning into the inner Solar System in the near-term future, all of which have low $\Delta$V with respect to Jupiter. We calculate the distribution of orbital elements resulting from a single Jovian encounter and show that objects with initial perihelia close to Jupiter are efficiently scattered to $q<4$au. Moreover, approximately $55\%$ of the transitioning objects in our simulated population experience at least 1 Jovian encounter prior to reaching $q<4$au. We demonstrate that a spacecraft stationed near Jupiter would be well-positioned to rendezvous, orbit match, and accompany LD2 into the inner Solar System, providing an opportunity to observe the onset of intense activity in a pristine comet $\textit{in situ}$. Finally, we discuss the prospect of identifying additional targets for similar measurements with forthcoming observational facilities. ","A Sublime Opportunity: The Dynamics of Transitioning Cometary Bodies and the Feasibility of $\textit{In Situ}$ Observations of The Evolution of Their Activity",1,['Have you ever wondered what the birth of a comet looks like up close? We could send an orbit matching spacecraft from Jupiter to a Centaur as it gets launched into the inner Solar System to find out! Paper is accepted @AAS_PSJ and talk @DPSMeeting <LINK> <LINK>'],21,10,261 |
3946,203,1468679940895920136,186701821,Aldo Pacchiano,"In this Neurips 2021 paper (<LINK>) we study a class of classification problems exemplified by the bank loan problem, where a lender decides whether or not to issue a loan. The lender only observes whether a customer will repay a loan if the loan is issued. (1/n) Thus modeled decisions affect the data available to the lender for future decisions. As a result, it is possible for the lender’s algorithm to “get stuck” with a self-fulfilling model. (2/n) This model never corrects its false negatives, since it never sees the true label for rejected data, thus accumulating infinite regret. In the case of linear models, this issue can be addressed by adding optimism directly into the model predictions. (3/n) However, there are few methods that extend to the function approximation case using Deep Neural Networks. We present Pseudo- Label Optimism (PLOT), a conceptually and computationally simple method for this setting applicable to DNNs. (4/n) PLOT adds optimistic pseudo-labels to the subset of decision points the current model is deciding on, trains the model on all data so far (including these new points along with their optimistic labels), and finally uses the resulting optimistic model for decision making. (5/n) The PLOT decision boundary with and without pseudo-labels <LINK> PLOT achieves competitive performance on a set of three challenging benchmark problems, requiring minimal hyperparameter tuning. (6/n) Thanks to all that came to our poster. We have also created a colab demo to experiment with multiple algorithms for the bank loan setting on a variety of public datasets (see <LINK>). (7/n) Joint work w/ @j_foerst , @alexandercberg, @edwardjchou and Shaun Singh",https://arxiv.org/abs/2112.02185,"We study a class of classification problems best exemplified by the \emph{bank loan} problem, where a lender decides whether or not to issue a loan. The lender only observes whether a customer will repay a loan if the loan is issued to begin with, and thus modeled decisions affect what data is available to the lender for future decisions. As a result, it is possible for the lender's algorithm to ``get stuck'' with a self-fulfilling model. This model never corrects its false negatives, since it never sees the true label for rejected data, thus accumulating infinite regret. In the case of linear models, this issue can be addressed by adding optimism directly into the model predictions. However, there are few methods that extend to the function approximation case using Deep Neural Networks. We present Pseudo-Label Optimism (PLOT), a conceptually and computationally simple method for this setting applicable to DNNs. \PLOT{} adds an optimistic label to the subset of decision points the current model is deciding on, trains the model on all data so far (including these points along with their optimistic labels), and finally uses the resulting \emph{optimistic} model for decision making. \PLOT{} achieves competitive performance on a set of three challenging benchmark problems, requiring minimal hyperparameter tuning. We also show that \PLOT{} satisfies a logarithmic regret guarantee, under a Lipschitz and logistic mean label model, and under a separability condition on the data. ","Neural Pseudo-Label Optimism for the Bank Loan Problem",7,"['In this Neurips 2021 paper (<LINK>) we study a class of classification problems exemplified by the bank loan problem, where a lender decides whether or not to issue a loan. The lender only observes whether a customer will repay a loan if the loan is issued. (1/n)', 'Thus modeled decisions affect the data available to the lender for future decisions. As a result, it is possible for the lender’s algorithm to “get stuck” with a self-fulfilling model. (2/n)', 'This model never corrects its false negatives, since it never sees the true label for rejected data, thus accumulating infinite regret. In the case of linear models, this issue can be addressed by adding optimism directly into the model predictions. (3/n)', 'However, there are few methods that extend to the function approximation case using Deep Neural Networks. We present Pseudo- Label Optimism (PLOT), a conceptually and computationally simple method for this setting applicable to DNNs. (4/n)', 'PLOT adds optimistic pseudo-labels to the subset of decision points the current model is deciding on, trains the model on all data so far (including these new points along with their optimistic labels), and finally uses the resulting optimistic model for decision making. (5/n)', 'The PLOT decision boundary with and without pseudo-labels\n\nhttps://t.co/kYOJj3Lf6z\n \nPLOT achieves competitive performance on a set of three challenging benchmark problems, requiring minimal hyperparameter tuning. (6/n)', 'Thanks to all that came to our poster. We have also created a colab demo to experiment with multiple algorithms for the bank loan setting on a variety of public datasets (see https://t.co/Iu5gBKF2o1). (7/n)\n\nJoint work w/ @j_foerst , @alexandercberg, @edwardjchou and Shaun Singh']",21,12,1691 |
3947,225,1374740943136509952,1062812206243422213,Jalal Kazempour,"Despite recent mathematical & computational advances, electricity markets are still using a linear model, where simplifying assumptions are necessary. Shall we go beyond LP, and use a conic model? You may find this paper interesting to read: <LINK> @anubhavratha <LINK> @themarklstone @anubhavratha @pierrepinson Indeed Mark! This is a natural extension to this work. Let's see first whether electricity markets in practice will be convinced to use a conic model, then it would be more straightforward to move towards (scalable) semi-definite markets ;o)",https://arxiv.org/abs/2103.12122,"We propose a new forward electricity market framework that admits heterogeneous market participants with second-order cone strategy sets, who accurately express the nonlinearities in their costs and constraints through conic bids, and a network operator facing conic operational constraints. In contrast to the prevalent linear-programming-based electricity markets, we highlight how the inclusion of second-order cone constraints enables uncertainty-, asset- and network-awareness of the market, which is key to the successful transition towards an electricity system based on weather-dependent renewable energy sources. We analyze our general market-clearing proposal using conic duality theory to derive efficient spatially-differentiated prices for the multiple commodities, comprising of energy and flexibility services. Under the assumption of perfect competition, we prove the equivalence of the centrally-solved market-clearing optimization problem to a competitive spatial price equilibrium involving a set of rational and self-interested participants and a price setter. Finally, under common assumptions, we prove that moving towards conic markets does not incur the loss of desirable economic properties of markets, namely market efficiency, cost recovery and revenue adequacy. Our numerical studies focus on the specific use case of uncertainty-aware market design and demonstrate that the proposed conic market brings advantages over existing alternatives within the linear programming market framework. ",Moving from Linear to Conic Markets for Electricity,2,"['Despite recent mathematical & computational advances, electricity markets are still using a linear model, where simplifying assumptions are necessary. Shall we go beyond LP, and use a conic model? You may find this paper interesting to read: <LINK>\n@anubhavratha <LINK>', ""@themarklstone @anubhavratha @pierrepinson Indeed Mark! This is a natural extension to this work. Let's see first whether electricity markets in practice will be convinced to use a conic model, then it would be more straightforward to move towards (scalable) semi-definite markets ;o)""]",21,03,554 |
3948,121,1293305776606330880,349172730,Ranjay Krishna,"If you're releasing a new user-facing AI project/product, you might want to read our new #CSCW2020 paper. We find that words or metaphors used to describe AI agents have a causal effect on users' intention to adopt your agent. <LINK> Thread👇 Conceptual metaphors are one of the most common and powerful means that a designer has to influence user expectations. They have been traditionally used by designers to convey functionality. Ex, ""recycling bin"" is for unwanted files, and ""notepad"" is for taking notes. <LINK> Metaphors have also been used for sense-making: understanding how existing AI systems work using informal, intuitive folk theories. For example, people explain Google Search as a ""robotic nose"" and YouTube's recommendations as a ""drug dealer"". <LINK> In our study, participants do a task with an agent described using various metaphors. Following Psych literature, we categorize metaphors along dimensions of competence and warmth: ""toddler"" projects low competence, high warmth, and ""inexperienced teen”-low competence, low warmth <LINK> Contrary to how today's AI products are advertised, people are more likely to adopt an agent that they originally expected to have low competence but outperforms that expectation. They are less forgiving of mistakes made by agents they expect to have high competence. <LINK> Meanwhile, the opposite is true for warmth. People are more likely to cooperate with agents that project high warmth. Also, people spend significantly more time interacting with high warmth agents. <LINK> Our work provides another lens explaining why some functionally-similar agents get adopted (Xiaoice ""sympathetic ear"") while others with high competence (Mitsuku ""record-breaking Turing Test winner"") or low warmth (Tay ""AI fam that's got no chill"") elicit anti-social behavior. <LINK> This is work done with my amazing collaborators: @pranavkhadpe @drfeifei Jeff Hancock and @msbernst",https://arxiv.org/abs/2008.02311,"With the emergence of conversational artificial intelligence (AI) agents, it is important to understand the mechanisms that influence users' experiences of these agents. We study a common tool in the designer's toolkit: conceptual metaphors. Metaphors can present an agent as akin to a wry teenager, a toddler, or an experienced butler. How might a choice of metaphor influence our experience of the AI agent? Sampling metaphors along the dimensions of warmth and competence---defined by psychological theories as the primary axes of variation for human social perception---we perform a study (N=260) where we manipulate the metaphor, but not the behavior, of a Wizard-of-Oz conversational agent. Following the experience, participants are surveyed about their intention to use the agent, their desire to cooperate with the agent, and the agent's usability. Contrary to the current tendency of designers to use high competence metaphors to describe AI products, we find that metaphors that signal low competence lead to better evaluations of the agent than metaphors that signal high competence. This effect persists despite both high and low competence agents featuring human-level performance and the wizards being blind to condition. A second study confirms that intention to adopt decreases rapidly as competence projected by the metaphor increases. In a third study, we assess effects of metaphor choices on potential users' desire to try out the system and find that users are drawn to systems that project higher competence and warmth. These results suggest that projecting competence may help attract new users, but those users may discard the agent unless it can quickly correct with a lower competence metaphor. We close with a retrospective analysis that finds similar patterns between metaphors and user attitudes towards past conversational agents such as Xiaoice, Replika, Woebot, Mitsuku, and Tay. ","Conceptual Metaphors Impact Perceptions of Human-AI Collaboration",8,"[""If you're releasing a new user-facing AI project/product, you might want to read our new #CSCW2020 paper. We find that words or metaphors used to describe AI agents have a causal effect on users' intention to adopt your agent. <LINK> Thread👇"", 'Conceptual metaphors are one of the most common and powerful means that a designer has to influence user expectations. They have been traditionally used by designers to convey functionality. Ex, ""recycling bin"" is for unwanted files, and ""notepad"" is for taking notes. https://t.co/9V3BuWQUuI', 'Metaphors have also been used for sense-making: understanding how existing AI systems work using informal, intuitive folk theories. For example, people explain Google Search as a ""robotic nose"" and YouTube\'s recommendations as a ""drug dealer"". https://t.co/PSi1o2Bvvl', 'In our study, participants do a task with an agent described using various metaphors. Following Psych literature, we categorize metaphors along dimensions of competence and warmth: ""toddler"" projects low competence, high warmth, and ""inexperienced teen”-low competence, low warmth https://t.co/ut5frOCaUh', ""Contrary to how today's AI products are advertised, people are more likely to adopt an agent that they originally expected to have low competence but outperforms that expectation. They are less forgiving of mistakes made by agents they expect to have high competence. https://t.co/RHTSBk8IF4"", 'Meanwhile, the opposite is true for warmth. People are more likely to cooperate with agents that project high warmth. Also, people spend significantly more time interacting with high warmth agents. https://t.co/lLoFFcRAr3', 'Our work provides another lens explaining why some functionally-similar agents get adopted (Xiaoice ""sympathetic ear"") while others with high competence (Mitsuku ""record-breaking Turing Test winner"") or low warmth (Tay ""AI fam that\'s got no chill"") elicit anti-social behavior. https://t.co/SdWbNW2drA', 'This is work done with my amazing collaborators: @pranavkhadpe @drfeifei Jeff Hancock and @msbernst']",20,08,1921 |
3949,90,1090629180855853056,835623319097454598,Krishnaswamy Lab,"Another preprint from our lab by @david_van_dijk and @DBBurkhardt on using neural networks to find a latent space that has natural archetypes (AAnet). We apply AAnet to TILs and microbiota, inspired by the work of @UriAlonWeizmann . Arxiv link here <LINK> <LINK> @SMukherjee89 @david_van_dijk @DBBurkhardt @UriAlonWeizmann Thanks. We'll take a look.",https://arxiv.org/abs/1901.09078,"Archetypal analysis is a data decomposition method that describes each observation in a dataset as a convex combination of ""pure types"" or archetypes. These archetypes represent extrema of a data space in which there is a trade-off between features, such as in biology where different combinations of traits provide optimal fitness for different environments. Existing methods for archetypal analysis work well when a linear relationship exists between the feature space and the archetypal space. However, such methods are not applicable to systems where the feature space is generated non-linearly from the combination of archetypes, such as in biological systems or image transformations. Here, we propose a reformulation of the problem such that the goal is to learn a non-linear transformation of the data into a latent archetypal space. To solve this problem, we introduce Archetypal Analysis network (AAnet), which is a deep neural network framework for learning and generating from a latent archetypal representation of data. We demonstrate state-of-the-art recovery of ground-truth archetypes in non-linear data domains, show AAnet can generate from data geometry rather than from data density, and use AAnet to identify biologically meaningful archetypes in single-cell gene expression data. ",Finding Archetypal Spaces Using Neural Networks,2,"['Another preprint from our lab by @david_van_dijk and @DBBurkhardt on using neural networks to find a latent space that has natural archetypes (AAnet). We apply AAnet to TILs and microbiota, inspired by the work of @UriAlonWeizmann . Arxiv link here <LINK> <LINK>', ""@SMukherjee89 @david_van_dijk @DBBurkhardt @UriAlonWeizmann Thanks. We'll take a look.""]",19,01,349 |
3950,78,1417032787488616450,301426952,Arttu Rajantie 🇪🇺 🇫🇮 #FBPE,"New paper ""Stochastic isocurvature constraints for axion dark matter with high-scale inflation"" with Liina Jukko, based on her MSc dissertation <LINK> Axions and axion-like particles are promising candidates for dark matter. They are constrained by primordial isocurvature perturbations. We show that for high inflationary Hubble rates (>10^12 GeV), standard perturbative methods fail. We use the stochastic method instead.",https://arxiv.org/abs/2107.07948,"Axions are among the best motivated dark matter candidates. Their production in the early Universe by the vacuum misalignment mechanism gives rise to isocurvature perturbations, which are constrained by cosmic microwave background measurements. In this paper, we compute the axion isocurvature power spectrum using spectral expansion in the stochastic Starobinsky-Yokoyama formalism, which captures non-linear effects in the axion dynamics. In contrast to most of the existing literature, we focus on high inflationary Hubble rates of order $10^{13}~{\rm GeV}$, and demonstrate that there is a significant window in which axions can account for all or part of the dark matter abundance without violating the isocurvature bounds or tensor mode bounds. Crucially, we find that the isocurvature spectrum is dominated by non-perturbative contributions in a large part of this window. Therefore the commonly used linear approximation is not reliable in this region, making the stochastic approach essential. ",Stochastic isocurvature constraints for axion dark matter with high-scale inflation,2,"['New paper ""Stochastic isocurvature constraints for axion dark matter with high-scale inflation"" with Liina Jukko, based on her MSc dissertation\n<LINK>', 'Axions and axion-like particles are promising candidates for dark matter. They are constrained by primordial isocurvature perturbations. We show that for high inflationary Hubble rates (>10^12 GeV), standard perturbative methods fail. We use the stochastic method instead.']",21,07,426 |
3951,79,1039885487715037186,2191799629,Sebastian Gehrmann,"A little late to the party, but check out our #emnlp2018 paper on bottom-up abstractive summarization! <LINK> We find that constraining copy-attention to predetermined words and phrases greatly improves results, with potential application to low-resource domains!",https://arxiv.org/abs/1808.10792,"Neural network-based methods for abstractive summarization produce outputs that are more fluent than other techniques, but which can be poor at content selection. This work proposes a simple technique for addressing this issue: use a data-efficient content selector to over-determine phrases in a source document that should be part of the summary. We use this selector as a bottom-up attention step to constrain the model to likely phrases. We show that this approach improves the ability to compress text, while still generating fluent summaries. This two-step process is both simpler and higher performing than other end-to-end content selection models, leading to significant improvements on ROUGE for both the CNN-DM and NYT corpus. Furthermore, the content selector can be trained with as little as 1,000 sentences, making it easy to transfer a trained summarizer to a new domain. ",Bottom-Up Abstractive Summarization,1,"['A little late to the party, but check out our #emnlp2018 paper on bottom-up abstractive summarization! <LINK>\nWe find that constraining copy-attention to predetermined words and phrases greatly improves results, with potential application to low-resource domains!']",18,08,263 |
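The bottom-up summarization entry above describes constraining copy attention to phrases that a separately trained content selector marks as likely. A rough sketch of that masking step, under my own assumptions (the function name, the 0.1 threshold, and the renormalization details are illustrative, not the paper's implementation):

```python
import numpy as np

def bottom_up_copy_attention(copy_scores, selector_probs, threshold=0.1):
    """Mask copy-attention scores so only source tokens the content
    selector marks as likely can be copied, then renormalize."""
    keep = selector_probs >= threshold              # tokens the selector over-determines
    masked = np.where(keep, copy_scores, -np.inf)   # forbid copying everything else
    masked = masked - masked[keep].max()            # stabilize the softmax
    weights = np.exp(masked)
    return weights / weights.sum()

# Toy usage: only the 2nd and 4th source tokens are selected, so all
# copy probability mass is redistributed onto them.
attn = bottom_up_copy_attention(
    copy_scores=np.array([2.0, 1.5, 0.3, 2.5]),
    selector_probs=np.array([0.02, 0.6, 0.05, 0.9]),
)
print(attn)  # roughly [0.0, 0.27, 0.0, 0.73]
```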
3952,76,986580608737570816,338526004,Sam Bowman,"Niche-but-exciting new workshop paper with Nikita Nangia (@meloncholist): Current latent tree learning methods have trouble even on artificial data for which discovering the correct trees yields huge gains. <LINK> @jrking0 @meloncholist Aagh. Three or four people spent a total of 8+ hours figuring out how to draw them (same code as in a previous paper), and they're still not quite what we wanted. It's modified 'forest'. Download the source from arXiv as needed. @jrking0 @meloncholist For more background, this paper is the group's first project on the topic, and had a much longer page limit: <LINK> @yoavgo @meloncholist Good catch—thanks! @jekbradbury @meloncholist Yep—quite related (should have cited :/), though no equivalent to our tuning the dataset for the RNN/TreeRNN performance gap.",https://arxiv.org/abs/1804.06028,"Latent tree learning models learn to parse a sentence without syntactic supervision, and use that parse to build the sentence representation. Existing work on such models has shown that, while they perform well on tasks like sentence classification, they do not learn grammars that conform to any plausible semantic or syntactic formalism (Williams et al., 2018a). Studying the parsing ability of such models in natural language can be challenging due to the inherent complexities of natural language, like having several valid parses for a single sentence. In this paper we introduce ListOps, a toy dataset created to study the parsing ability of latent tree models. ListOps sequences are in the style of prefix arithmetic. The dataset is designed to have a single correct parsing strategy that a system needs to learn to succeed at the task. We show that the current leading latent tree models are unable to learn to parse and succeed at ListOps. These models achieve accuracies worse than purely sequential RNNs. ",ListOps: A Diagnostic Dataset for Latent Tree Learning,5,"['Niche-but-exciting new workshop paper with Nikita Nangia (@meloncholist): Current latent tree learning methods have trouble even on artificial data for which discovering the correct trees yields huge gains. <LINK>', ""@jrking0 @meloncholist Aagh. Three or four people spent a total of 8+ hours figuring out how to draw them (same code as in a previous paper), and they're still not quite what we wanted. It's modified 'forest'. Download the source from arXiv as needed."", ""@jrking0 @meloncholist For more background, this paper is the group's first project on the topic, and had a much longer page limit: https://t.co/kzf7zQHBdl"", '@yoavgo @meloncholist Good catch—thanks!', '@jekbradbury @meloncholist Yep—quite related (should have cited :/), though no equivalent to our tuning the dataset for the RNN/TreeRNN performance gap.']",18,04,798 |
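The ListOps entry above describes sequences in the style of prefix arithmetic with a single correct parsing strategy. A toy evaluator sketch under assumed conventions (the token set and operator semantics follow my reading of the description, not the official dataset generator):

```python
def eval_listops(tokens):
    """Evaluate a small ListOps-style prefix expression given as a token list."""
    ops = {
        "[MAX": max,
        "[MIN": min,
        "[MED": lambda xs: sorted(xs)[len(xs) // 2],  # median (assumed tie rule)
        "[SM": lambda xs: sum(xs) % 10,               # sum modulo 10 (assumed)
    }
    stack = []
    for tok in tokens:
        if tok == "]":                    # close the innermost operator
            args = []
            while not callable(stack[-1]):
                args.append(stack.pop())
            op = stack.pop()
            stack.append(op(args[::-1]))
        elif tok in ops:
            stack.append(ops[tok])
        else:
            stack.append(int(tok))        # single-digit operands
    return stack[0]

# Nested example: the MIN sub-list evaluates to 4, the outer MAX to 9.
print(eval_listops("[MAX 2 9 [MIN 4 7 ] 0 ]".split()))  # -> 9
```

The point of the example is simply that each bracketed operator admits exactly one correct grouping, which is what makes the dataset a clean probe of whether a latent tree model recovers the intended parse.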