text (string, lengths 64 to 6.93k) |
---|
3757,84,1034267735616827392,203639204,Dan Elton,"We have a new #matsci #preprint on the #arXiv: ""A Phonon Boltzmann Study of Microscale Thermal Transport in α-RDX Cook-Off"" <LINK> This is the 1st full Brillouin zone phonon calculation for RDX.. to better understand thermal transport & mechanism of initiation",https://arxiv.org/abs/1808.08295,"The microscale thermal transport properties of $\alpha$RDX are believed to be major factors in the initiation process. In this study we present a thorough examination of phonon properties which dominate energy storage and transport in $\alpha$RDX. The phonon lifetimes are determined for all phonon branches, revealing the characteristic time scale of energy transfer amongst phonon modes. The phonon parameters also serve as inputs to a full Brillouin zone three dimensional phonon transport simulation in the presence of a hotspot. In addition to identifying the phonon mode contributions to thermal transport, and as N-N bond breaking is integral to disassociation, we identify phonon modes corresponding to large N-N bond stretch analyzing the manner in which these modes store and transfer energy. ","A Phonon Boltzmann Study of Microscale Thermal Transport in $\alpha$-RDX |
Cook-Off",1,"['We have a new #matsci #preprint on the #arXiv:\n""A Phonon Boltzmann Study of Microscale Thermal Transport in α-RDX Cook-Off"" <LINK> \nThis is the 1st full Brillouin zone phonon calculation for RDX.. to better understand thermal transport & mechanism of initiation']",18,08,261 |
3758,181,1337617823942672384,1119400976685965313,Daniel Brown,Come visit our poster on Value Alignment Verification at the NeurIPS HAMLETS workshop on Sat. We study how a human can efficiently test whether the goals and behavior of other agents are aligned with their values: <LINK>. Joint work with @bayes_irl @scottniekum,http://arxiv.org/abs/2012.01557,"As humans interact with autonomous agents to perform increasingly complicated, potentially risky tasks, it is important to be able to efficiently evaluate an agent's performance and correctness. In this paper we formalize and theoretically analyze the problem of efficient value alignment verification: how to efficiently test whether the behavior of another agent is aligned with a human's values. The goal is to construct a kind of ""driver's test"" that a human can give to any agent which will verify value alignment via a minimal number of queries. We study alignment verification problems with both idealized humans that have an explicit reward function as well as problems where they have implicit values. We analyze verification of exact value alignment for rational agents and propose and analyze heuristic and approximate value alignment verification tests in a wide range of gridworlds and a continuous autonomous driving domain. Finally, we prove that there exist sufficient conditions such that we can verify exact and approximate alignment across an infinite set of test environments via a constant-query-complexity alignment test. ",Value Alignment Verification,1,['Come visit our poster on Value Alignment Verification at the NeurIPS HAMLETS workshop on Sat. We study how a human can efficiently test whether the goals and behavior of other agents are aligned with their values: <LINK>. Joint work with @bayes_irl @scottniekum'],20,12,261 |
3759,94,1259009592287277056,90131577,Noam Slonim 🟢,"Can we automatically find an opinion article that specifically counters an article we just read? Check out a new ACL-2020 paper, coming out from our #ProjectDebater team. And we also release 3.6k (!) recorded debate speeches! Congrats to the authors! <LINK>",https://arxiv.org/abs/2005.01157,"An educated and informed consumption of media content has become a challenge in modern times. With the shift from traditional news outlets to social media and similar venues, a major concern is that readers are becoming encapsulated in ""echo chambers"" and may fall prey to fake news and disinformation, lacking easy access to dissenting views. We suggest a novel task aiming to alleviate some of these concerns -- that of detecting articles that most effectively counter the arguments -- and not just the stance -- made in a given text. We study this problem in the context of debate speeches. Given such a speech, we aim to identify, from among a set of speeches on the same topic and with an opposing stance, the ones that directly counter it. We provide a large dataset of 3,685 such speeches (in English), annotated for this relation, which hopefully would be of general interest to the NLP community. We explore several algorithms addressing this task, and while some are successful, all fall short of expert human performance, suggesting room for further research. All data collected during this work is freely available for research. ",Out of the Echo Chamber: Detecting Countering Debate Speeches,1,"['Can we automatically find an opinion article that specifically counters an article we just read? Check out a new ACL-2020 paper, coming out from our #ProjectDebater team. And we also release 3.6k (!) recorded debate speeches! Congrats to the authors!\n\n<LINK>']",20,05,257 |
3760,8,1511065551061217284,1367548995799760898,Muhammad Haroon,"Do YouTube recommendations lead users to ideologically extreme rabbit holes? Can ideological bias in recommendation algorithms be minimized? Read our new paper <LINK>. Short answers: Yes, but there’s nuance (especially among right-wing users) 👇 [1/7] <LINK> Past audits either gathered data 1️⃣ from real users or 2️⃣ through sock puppets/APIs. 1️⃣ does not account for recommendation algorithms. 2️⃣ does not account for the role of user watch history. How do we reconcile these? 🤔 [2/7] We do a large-scale audit of YouTube’s recommendations by training over 💯,000 sock puppets to watch videos of different political ideologies (from v. left to v. right). We then analyze the recommendations for ideological bias, its amount, and over-time radicalization. [3/7] <LINK> And? Both the homepage and up-next video recommendations for a trained sock puppet *are* ideologically biased: users are more likely to encounter content that is ideologically similar to their watch history. This is especially the case for the right sock puppets. 😳 [4/7] <LINK> Not only that: Exposure to ideologically biased videos increases: As the user follows the trail, they encounter a larger number of ideologically biased videos AND the videos deeper in the recommendation trail are more extreme. [5/7] <LINK> To mitigate these biases, we advance a principled learning-based intervention that monitors a user’s homepage for ideological bias and mitigates it by injecting intervention videos in the user’s watch history. Hint: it works but is much harder for right-leaning users. 🥵 [6/7] <LINK> This work is part of a multi-disciplinary collaboration at UC Davis (@nshuman_chhabra, Xin Liu, @prasantm, @zubair_shafiq, and @mwojcieszak). Check out our paper and code/data at <LINK> [7/7]",https://arxiv.org/abs/2203.10666,"Recommendations algorithms of social media platforms are often criticized for placing users in ""rabbit holes"" of (increasingly) ideologically biased content. Despite these concerns, prior evidence on this algorithmic radicalization is inconsistent. Furthermore, prior work lacks systematic interventions that reduce the potential ideological bias in recommendation algorithms. We conduct a systematic audit of YouTube's recommendation system using a hundred thousand sock puppets to determine the presence of ideological bias (i.e., are recommendations aligned with users' ideology), its magnitude (i.e., are users recommended an increasing number of videos aligned with their ideology), and radicalization (i.e., are the recommendations progressively more extreme). Furthermore, we design and evaluate a bottom-up intervention to minimize ideological bias in recommendations without relying on cooperation from YouTube. We find that YouTube's recommendations do direct users -- especially right-leaning users -- to ideologically biased and increasingly radical content on both homepages and in up-next recommendations. Our intervention effectively mitigates the observed bias, leading to more recommendations to ideologically neutral, diverse, and dissimilar content, yet debiasing is especially challenging for right-leaning users. Our systematic assessment shows that while YouTube recommendations lead to ideological bias, such bias can be mitigated through our intervention. ","YouTube, The Great Radicalizer? Auditing and Mitigating Ideological |
Biases in YouTube Recommendations",7,"['Do YouTube recommendations lead users to ideologically extreme rabbit holes? Can ideological bias in recommendation algorithms be minimized? Read our new paper <LINK>. Short answers: Yes, but there’s nuance (especially among right-wing users) 👇 [1/7] <LINK>', 'Past audits either gathered data 1️⃣ from real users or 2️⃣ through sock puppets/APIs. 1️⃣ does not account for recommendation algorithms. 2️⃣ does not account for the role of user watch history. How do we reconcile these? 🤔 [2/7]', 'We do a large-scale audit of YouTube’s recommendations by training over 💯,000 sock puppets to watch videos of different political ideologies (from v. left to v. right). We then analyze the recommendations for ideological bias, its amount, and over-time radicalization. [3/7] https://t.co/Xph4DfnPOn', 'And? Both the homepage and up-next video recommendations for a trained sock puppet *are* ideologically biased: users are more likely to encounter content that is ideologically similar to their watch history. This is especially the case for the right sock puppets. 😳 [4/7] https://t.co/JXKlEUUvRy', 'Not only that: Exposure to ideologically biased videos increases: As the user follows the trail, they encounter a larger number of ideologically biased videos AND the videos deeper in the recommendation trail are more extreme. [5/7] https://t.co/dIA6qGEOKc', 'To mitigate these biases, we advance a principled learning-based intervention that monitors a user’s homepage for ideological bias and mitigates it by injecting intervention videos in the user’s watch history. Hint: it works but is much harder for right-leaning users. 🥵 [6/7] https://t.co/ZI6r2dXehV', 'This work is part of a multi-disciplinary collaboration at UC Davis (@nshuman_chhabra, Xin Liu, @prasantm, @zubair_shafiq, and @mwojcieszak). Check out our paper and code/data at https://t.co/FD8pFVy7eE [7/7]']",22,03,1765 |
3761,87,1481019555510054912,1185977761032110080,Kazumasa Ohno (大野 和正),"We posted a new paper about transmission spectrum of ringed exoplanets🪐 <LINK> Rings may give alternative explanation for flat spectra of extremely low-density exoplanets. Also check the HST observation of a puffy cold giant, HIP41378 f! <LINK> We derived analytical model of how to include ring effect in transmission spectra. We considered all possible viewing geometry, in contrast to face-on ring in my previous paper(<LINK>). <LINK> Transmission spectrum is computed by summing up the transmittance of annulus at each radial distance. For ringed planets, we need to split the annulus into ring-free and ring-overlapped parts, as in Range E in the above figure. We provide how to split the annulus. We also derived a simple model that can allow us to ""post-process"" the ring effects in a pre-computed ring-free spectrum. So, you can readily implement the ring effects in your spectrum computed by any spectrum code using our analytical prescription. In general, ring flattens the spectrum significantly except for nearly edge-on rings. There are two factors flattening the spectrum. One is the overlap of atmospheric annulus and projected ring. Second is the overestimation of atmospheric scale height for ringed planets. <LINK> Ring creates flat spectrum even at longer wavelength, in contrast to hazy spectrum showing spectral slope. So, we expect JWST observations can distinguish haze and ring. Alam et al. (<LINK>) and Ohno & Tanaka (<LINK>) discussed about this point. <LINK> We also briefly discussed if ring's spectral feature is observable. The feature could emerge if the ring's optical depth is around unity and if tiny particles present. Although these factors are not predictable, the feature, if detected, can further tell us about ring's property <LINK> These are the main parts of the paper, so I stop thread here, as the paper is still under review. I am grateful to @jjfplanet for supporting me and Munazza Alam and @PlanetaryGao at @CarnegiePlanets and HIP41378 f observation team for involving me to this exciting project! @johannateske @jjfplanet @PlanetaryGao @CarnegiePlanets Thanks for kind words! @MartianColonist @jjfplanet @PlanetaryGao @CarnegiePlanets Thanks for kind words! Your question at ESO conference also motivated this project! @nespinozap @jjfplanet @PlanetaryGao @CarnegiePlanets Thanks for kind words, Nestor!",https://arxiv.org/abs/2201.02794,"Recent observations revealed that several extremely low-density exoplanets show featureless transmission spectra. While atmospheric aerosols are a promising explanation for both the low density and featureless spectra, there is another attractive possibility: the presence of circumplanetary rings. Previous studies suggested that rings cause anomalously large transit radii. However, it remains poorly understood how rings affect the transmission spectrum. Here, we provide a framework to characterize the transmission spectra of ringed exoplanets. We develop an analytical prescription to include rings in the transmission spectra for arbitrarily viewing geometries. We also establish a simple post-processing model that can include the ring's effects on precomputed ring-free spectra. The ring flattens the transmission spectrum for a wide range of viewing geometries, consistent with the featureless spectra of extremely low-density exoplanets. Near-future observations by JWST at longer wavelengths would be able to distinguish the aerosol and ring scenarios. 
We also find that rocky rings might cause a silicate feature at $\sim$10 $\mu$m if the ring's optical depth is around unity. Thus, the ring's spectral features, if detected, would provide tight constrains on the physical properties of exoplanetary rings. We also discuss the ring's stability and suggest that thick rings are sustainable only at the equilibrium temperature of $\lesssim$300 K for the ring's age comparable to Kepler planets. This might indicate the intrinsic deficit of thick rings in the Kepler samples, unless rings are much younger than the planets as suggested for Saturn. ","A Framework for Characterizing Transmission Spectra of Exoplanets with |
Circumplanetary Rings",11,"['We posted a new paper about transmission spectrum of ringed exoplanets🪐\n<LINK>\nRings may give alternative explanation for flat spectra of extremely low-density exoplanets. Also check the HST observation of a puffy cold giant, HIP41378 f!\n<LINK>', 'We derived analytical model of how to include ring effect in transmission spectra. We considered all possible viewing geometry, in contrast to face-on ring in my previous paper(https://t.co/dtJ8InvHtU). https://t.co/v8bnT0KJsC', 'Transmission spectrum is computed by summing up the transmittance of annulus at each radial distance. For ringed planets, we need to split the annulus into ring-free and ring-overlapped parts, as in Range E in the above figure. We provide how to split the annulus.', 'We also derived a simple model that can allow us to ""post-process"" the ring effects in a pre-computed ring-free spectrum. So, you can readily implement the ring effects in your spectrum computed by any spectrum code using our analytical prescription.', 'In general, ring flattens the spectrum significantly except for nearly edge-on rings. There are two factors flattening the spectrum. One is the overlap of atmospheric annulus and projected ring. Second is the overestimation of atmospheric scale height for ringed planets. https://t.co/W5iVOAftXw', 'Ring creates flat spectrum even at longer wavelength, in contrast to hazy spectrum showing spectral slope. So, we expect JWST observations can distinguish haze and ring. Alam et al. (https://t.co/ftpAqoVWwk) and Ohno & Tanaka (https://t.co/dtJ8InvHtU) discussed about this point. https://t.co/ywWN3DRyKt', ""We also briefly discussed if ring's spectral feature is observable. The feature could emerge if the ring's optical depth is around unity and if tiny particles present. Although these factors are not predictable, the feature, if detected, can further tell us about ring's property https://t.co/9Yznwpuz0s"", 'These are the main parts of the paper, so I stop thread here, as the paper is still under review. I am grateful to @jjfplanet for supporting me and Munazza Alam and @PlanetaryGao at @CarnegiePlanets and HIP41378 f observation team for involving me to this exciting project!', '@johannateske @jjfplanet @PlanetaryGao @CarnegiePlanets Thanks for kind words!', '@MartianColonist @jjfplanet @PlanetaryGao @CarnegiePlanets Thanks for kind words! Your question at ESO conference also motivated this project!', '@nespinozap @jjfplanet @PlanetaryGao @CarnegiePlanets Thanks for kind words, Nestor!']",22,01,2353 |
3762,151,1266306830340370432,1525220317,Lena Frischlich,"(1/10) Follow up to #PandemicPopulism (summary thread: <LINK>) is online- again as preprint so not formally reviewed so far. In #PandemicNews (<LINK>) we use computational content analysis to study GER news papers #CoronaVirusDE coverage <LINK> (2/10) @thorstenquandt @ZwiZwaSvens and @Kudusch and I analysed >100k Facebook posts of German newspapers (national + regional) between January and end of March 2020, looking at topics (including emergence over time), central actors, overall negativity, & treatement of falsehoods (3/10) Results show how reporting about the crisis evolved over time: From focussing on core information in the beginning, over context and background info up to offering multiple perspectives and insights into how #CoronaVirusDE impacted peoples lifes later on <LINK> (4/10) In comparison to our <LINK> analysis in #PandemicPopulism we found a larger variety of voices in established journalistic media, although prominent political figures are mentioned particularly often. (5/10) Notably: Relatively spoken, chancellor Merkel is *more* present in alternative media compared to the established press- in contrast to alternative news medias self-positioning as a place for unheard voices... <LINK> (6/10) The overall tone of reporting is more critical in alternative news media but we do not see that this difference increased over time (contradicting accusations that professional news media became uncritical) <LINK> (7/10) Share of #FakeNews in the overall reporting of established media is marginal, overall more info about medical #misinformation than #ConspiracyTheory - all discussed only when #FactChecked. But see <LINK> for risks of journalistic coverage of #fakeNews <LINK> (8/10) In sum: Our data doesn't point at a radical failure of the journalistic system. Much of the stark criticism of ""the media"" seems to be based on a narrow understanding of journalism as permanent opposition. <LINK> (9/10) But thats only one part of the journalistic role as the #WorldOfJournalism Study shows (<LINK>). Instead, our data shows that German online journalism responded to #coronavirus with a multi-perspective coverage acting within the given societal system. <LINK> (10/10) Caution! We focussed only on Facebook (i.e. can differ on other plattforms) and Germany (can differ in other countries), only data till the end of March 2020 (might be different now). Study has not been peer reviewed so far. We hope you like it nonetheless! <LINK>",https://arxiv.org/abs/2005.13290,"The unfolding of the COVID-19 pandemic has been an unprecedented challenge for news media around the globe. While journalism is meant to process yet unknown events by design, the dynamically evolving situation affected all aspects of life in such profound ways that even the routines of crisis reporting seemed to be insufficient. Critics noted tendencies to horse-race reporting and uncritical coverage, with journalism being too close to official statements and too affirmative of political decisions. However, empirical data on the performance of journalistic news media during the crisis has been lacking thus far. The current study analyzes the Facebook messages of journalistic news media during the early Coronavirus crisis, based on a large German data set from January to March 2020. 
Using computational content analysis methods, reach and interactions, topical structure, relevant actors, negativity of messages, as well as the coverage of fabricated news and conspiracy theories were examined. The topical structure of the near-time Facebook coverage changed during various stages of the crisis, with just partial support for the claims of critics. The initial stages were somewhat lacking in topical breadth, but later stages offered a broad range of coverage on Corona-related issues and societal concerns. Further, journalistic media covered fake news and conspiracy theories during the crisis, but they consistently contextualized them as what they were and debunked the false claims circulating in public. While some criticism regarding the performance of journalism during the crisis received mild empirical support, the analysis did not find overwhelming signs of systemic dysfunctionalities. Overall, journalistic media did not default to a uniform reaction nor to sprawling, information-poor pandemic news, but they responded with a multi-perspective coverage of the crisis. ","Pandemic News: Facebook Pages of Mainstream News Media and the |
Coronavirus Crisis -- A Computational Content Analysis",10,"['(1/10) Follow up to #PandemicPopulism (summary thread: <LINK>) is online- again as preprint so not formally reviewed so far. In #PandemicNews (<LINK>) we use computational content analysis to study GER news papers #CoronaVirusDE coverage <LINK>', '(2/10) @thorstenquandt @ZwiZwaSvens and @Kudusch and I analysed >100k Facebook posts of German newspapers (national + regional) between January and end of March 2020, looking at topics (including emergence over time), central actors, overall negativity, & treatement of falsehoods', '(3/10) Results show how reporting about the crisis evolved over time: From focussing on core information in the beginning, over context and background info up to offering multiple perspectives and insights into how #CoronaVirusDE impacted peoples lifes later on https://t.co/0LyCa0vvgq', '(4/10) In comparison to our https://t.co/NOHyFBuMyw analysis in #PandemicPopulism we found a larger variety of voices in established journalistic media, although prominent political figures are mentioned particularly often.', '(5/10) Notably: Relatively spoken, chancellor Merkel is *more* present in alternative media compared to the established press- in contrast to alternative news medias self-positioning as a place for unheard voices... https://t.co/91t8MfeZMO', '(6/10) The overall tone of reporting is more critical in alternative news media but we do not see that this difference increased over time (contradicting accusations that professional news media became uncritical) https://t.co/1CkVprxyMF', '(7/10) Share of #FakeNews in the overall reporting of established media is marginal, overall more info about medical #misinformation than #ConspiracyTheory - all discussed only when #FactChecked. But see https://t.co/8bKTejtxw0 for risks of journalistic coverage of #fakeNews https://t.co/9BbYrIeKIg', '(8/10) In sum: Our data doesn\'t point at a radical failure of the journalistic system. Much of the stark criticism of ""the media"" seems to be based on a narrow understanding of journalism as permanent opposition. https://t.co/domfxhlfwf', '(9/10) But thats only one part of the journalistic role as the #WorldOfJournalism Study shows (https://t.co/50UzYNNY4x). Instead, our data shows that German online journalism responded to #coronavirus with a multi-perspective coverage acting within the given societal system. https://t.co/eY8mWK8276', '(10/10) Caution! We focussed only on Facebook (i.e. can differ on other plattforms) and Germany (can differ in other countries), only data till the end of March 2020 (might be different now). Study has not been peer reviewed so far. We hope you like it nonetheless! https://t.co/MZej5B1hdQ']",20,05,2473 |
3763,22,1387031166247743491,3306943245,Konstantin Klemmer,"New (short) paper out today, accepted at #AIMOCC workshop @ #ICLR2021! Preliminary results from our work-in-progress on conditional GANs for simulating extreme weather patterns. Joint work with @sudipansaha, Matthias Kahl, @linylinx and @xiaoxiang_zhu. <LINK> <LINK>",https://arxiv.org/abs/2104.12469,"Deep generative models are increasingly used to gain insights in the geospatial data domain, e.g., for climate data. However, most existing approaches work with temporal snapshots or assume 1D time-series; few are able to capture spatio-temporal processes simultaneously. Beyond this, Earth-systems data often exhibit highly irregular and complex patterns, for example caused by extreme weather events. Because of climate change, these phenomena are only increasing in frequency. Here, we proposed a novel GAN-based approach for generating spatio-temporal weather patterns conditioned on detected extreme events. Our approach augments GAN generator and discriminator with an encoded extreme weather event segmentation mask. These segmentation masks can be created from raw input using existing event detection frameworks. As such, our approach is highly modular and can be combined with custom GAN architectures. We highlight the applicability of our proposed approach in experiments with real-world surface radiation and zonal wind data. ","Generative modeling of spatio-temporal weather patterns with extreme |
event conditioning",1,"['New (short) paper out today, accepted at #AIMOCC workshop @ #ICLR2021! Preliminary results from our work-in-progress on conditional GANs for simulating extreme weather patterns.\n\nJoint work with @sudipansaha, Matthias Kahl, @linylinx and @xiaoxiang_zhu. \n\n<LINK> <LINK>']",21,04,267 |
3764,208,1308805309368954881,48007712,Nathan Ng,"New work with @kchonyc and @MarzyehGhassemi! arxiv: <LINK> code: <LINK> We propose a novel data augmentation scheme based on using a pair of corruption and reconstruction functions to generate new examples along an underlying data manifold. 1/ <LINK> Typical data augmentation methods improve classifiers by making them invariant to local perturbations in the data space, thereby improving generalization. However, in domains like NLP, finding augmentations that preserve meaning and semantics is difficult. 2/ <LINK> We propose a novel data augmentation method that generates examples without the need for any domain knowledge or dataset fine-tuning! A corruption function moves examples off the data manifold, and a reconstruction function projects them back on. 3/ <LINK> On NLP tasks we apply MLM training noise to move examples off the data manifold and use BERT to project them back on. New examples are pseudo-labelled using the original label or with a teacher model trained on the original training set. 4/ <LINK> We test our method’s ability to improve in-domain performance and out-of-domain generalizability across 3 tasks: sentiment analysis, NLI, and MT tasks. Across all 9 datasets and 4 model types, we see consistent performance boosts on ID and OOD data. 5/ <LINK> SSMBA is ready to use out of the box for any supervised NLP task with no additional fine-tuning. Check out the code here <LINK> and give it a try in your next project! 6/6 @bricksdont @kchonyc @MarzyehGhassemi ah good catch! I'll get that fixed for the camera ready...",https://arxiv.org/abs/2009.10195,"Models that perform well on a training domain often fail to generalize to out-of-domain (OOD) examples. Data augmentation is a common method used to prevent overfitting and improve OOD generalization. However, in natural language, it is difficult to generate new examples that stay on the underlying data manifold. We introduce SSMBA, a data augmentation method for generating synthetic training examples by using a pair of corruption and reconstruction functions to move randomly on a data manifold. We investigate the use of SSMBA in the natural language domain, leveraging the manifold assumption to reconstruct corrupted text with masked language models. In experiments on robustness benchmarks across 3 tasks and 9 datasets, SSMBA consistently outperforms existing data augmentation methods and baseline models on both in-domain and OOD data, achieving gains of 0.8% accuracy on OOD Amazon reviews, 1.8% accuracy on OOD MNLI, and 1.4 BLEU on in-domain IWSLT14 German-English. ","SSMBA: Self-Supervised Manifold Based Data Augmentation for Improving |
Out-of-Domain Robustness",7,"['New work with @kchonyc and @MarzyehGhassemi!\n\narxiv: <LINK>\ncode: <LINK> \n\nWe propose a novel data augmentation scheme based on using a pair of corruption and reconstruction functions to generate new examples along an underlying data manifold. 1/ <LINK>', 'Typical data augmentation methods improve classifiers by making them invariant to local perturbations in the data space, thereby improving generalization. However, in domains like NLP, finding augmentations that preserve meaning and semantics is difficult. 2/ https://t.co/Rf9WAnnPfh', 'We propose a novel data augmentation method that generates examples without the need for any domain knowledge or dataset fine-tuning! A corruption function moves examples off the data manifold, and a reconstruction function projects them back on. 3/ https://t.co/98HzaIh0mw', 'On NLP tasks we apply MLM training noise to move examples off the data manifold and use BERT to project them back on. New examples are pseudo-labelled using the original label or with a teacher model trained on the original training set. 4/ https://t.co/KDioT502p1', 'We test our method’s ability to improve in-domain performance and out-of-domain generalizability across 3 tasks: sentiment analysis, NLI, and MT tasks. Across all 9 datasets and 4 model types, we see consistent performance boosts on ID and OOD data. 5/ https://t.co/78IfpbGqom', 'SSMBA is ready to use out of the box for any supervised NLP task with no additional fine-tuning. Check out the code here https://t.co/th47UxaxLT and give it a try in your next project! 6/6', ""@bricksdont @kchonyc @MarzyehGhassemi ah good catch! I'll get that fixed for the camera ready...""]",20,09,1552 |
3765,17,1322386130558353410,59962128,Satoshi Matsuoka,"We have a new paper ""Matrix Engines for High Performance Computing: A Paragon of Performance or Grasping at Straws?"" is now on ArXiv & also submitted to a refereed venue. We realize the controversy of the topic and welcome input, positive & negative! <LINK>",https://arxiv.org/abs/2010.14373,"Matrix engines or units, in different forms and affinities, are becoming a reality in modern processors; CPUs and otherwise. The current and dominant algorithmic approach to Deep Learning merits the commercial investments in these units, and deduced from the No.1 benchmark in supercomputing, namely High Performance Linpack, one would expect an awakened enthusiasm by the HPC community, too. Hence, our goal is to identify the practical added benefits for HPC and machine learning applications by having access to matrix engines. For this purpose, we perform an in-depth survey of software stacks, proxy applications and benchmarks, and historical batch job records. We provide a cost-benefit analysis of matrix engines, both asymptotically and in conjunction with state-of-the-art processors. While our empirical data will temper the enthusiasm, we also outline opportunities to misuse these dense matrix-multiplication engines if they come for free. ","Matrix Engines for High Performance Computing:A Paragon of Performance |
or Grasping at Straws?",1,"['We have a new paper ""Matrix Engines for High Performance Computing: A Paragon of Performance or Grasping at Straws?"" is now on ArXiv & also submitted to a refereed venue. We realize the controversy of the topic and welcome input, positive & negative! <LINK>']",20,10,257 |
3766,115,1300680036866043909,345129453,Nan Rosemary Ke,"Our new paper out on amortized learning of neural representations: learning a fully continuous representation of causal models using neural networks! Thanks to my awesome collaborators @DaniloJRezende @janexwang @jovana_mitr, Martin Zummer <LINK> <LINK>",https://arxiv.org/abs/2008.09301,"Causal models can compactly and efficiently encode the data-generating process under all interventions and hence may generalize better under changes in distribution. These models are often represented as Bayesian networks and learning them scales poorly with the number of variables. Moreover, these approaches cannot leverage previously learned knowledge to help with learning new causal models. In order to tackle these challenges, we represent a novel algorithm called \textit{causal relational networks} (CRN) for learning causal models using neural networks. The CRN represent causal models using continuous representations and hence could scale much better with the number of variables. These models also take in previously learned information to facilitate learning of new causal models. Finally, we propose a decoding-based metric to evaluate causal models with continuous representations. We test our method on synthetic data achieving high accuracy and quick adaptation to previously unseen causal models. ",Amortized learning of neural causal representations,1,"['Our new paper out on amortized learning of neural representations: learning a fully continuous representation of causal models using neural networks! Thanks to my awesome collaborators @DaniloJRezende @janexwang @jovana_mitr, Martin Zummer <LINK> <LINK>']",20,08,253 |
3767,23,1299165697285611522,114562472,Prof. Emily Levesque 🤓✨🔭📚,"Check out this snazzy new figure from the latest @UW Massive Stars group paper, led by @trevorzaylen! 🤩 You'll want to read the whole paper: it also shares the SUPER-cool discovery of a new class of pulsating yellow supergiants 🌟 #FYPS Check it out at <LINK>! <LINK>",https://arxiv.org/abs/2008.11723,"Massive stars briefly pass through the yellow supergiant (YSG) phase as they evolve redward across the HR diagram and expand into red supergiants (RSGs). Higher-mass stars pass through the YSG phase again as they evolve blueward after experiencing significant RSG mass loss. These post-RSG objects offer us a tantalizing glimpse into which stars end their lives as RSGs, and why. One telltale sign of a post-RSG object may be an instability to pulsations, depending on the star's interior structure. Here we report the discovery of five YSGs with pulsation periods faster than 1 day, found in a sample of 76 cool supergiants observed by \tess at two-minute cadence. These pulsating YSGs are concentrated in a HR diagram region not previously associated with pulsations; we conclude that this is a genuine new class of pulsating star, Fast Yellow Pulsating Supergiants (FYPS). For each FYPS, we extract frequencies via iterative prewhitening and conduct a time-frequency analysis. One FYPS has an extracted frequency that is split into a triplet, and the amplitude of that peak is modulated on the same timescale as the frequency spacing of the triplet; neither rotation nor binary effects are likely culprits. We discuss the evolutionary status of FYPS and conclude that they are candidate post-RSGs. All stars in our sample also show the same stochastic low-frequency variability (SLFV) found in hot OB stars and attributed to internal gravity waves. Finally, we find four $\alpha$ Cygni variables in our sample, of which three are newly discovered. ","Short Term Variability of Evolved Massive Stars with TESS II: A New |
Class of Cool, Pulsating Supergiants",1,"[""Check out this snazzy new figure from the latest @UW Massive Stars group paper, led by @trevorzaylen! 🤩\n\nYou'll want to read the whole paper: it also shares the SUPER-cool discovery of a new class of pulsating yellow supergiants 🌟 #FYPS Check it out at <LINK>! <LINK>""]",20,08,266 |
3768,104,1319011775984041984,998966120706174976,Ishaq Aden-Ali,"New paper with @ashtiani_hassan and @thegautamkamath on privately learning Gaussians. <LINK> Under aprox DP, we get near optimal bounds for learning Gaussians with known covariance and (likely) near optimal bounds for general Gaussians 1/8 <LINK> We build on work by @markmbun @thegautamkamath @shortstein @zstevenwu that presented two DP algos for private hypothesis selection (hypotheses here is a set of distributions we want to pick from). The first is a pure DP algo that can learn given a finite set of distributions. 2/8 The second is an approx DP algo that can learn given an infinite set of distributions. The second algorithm requires a cover for the set of distributions that is ""locally small"" in some technical sense 3/8 Unfortunately, explicitly deriving such covers can be very tedious even for simple classes of distributions. We show that we can get around this. If a TV ball around *every* dist. in the set can be covered with a ""small"" # dists., then the class has a ''locally small'' cover 4/8 Showing that the TV ball around any distribution in the class can be covered is a simpler problem, making the analysis less hairy. 5/8 We also refine existing (and new) sample complexity bounds by doing a two step DP procedure: In the first step we learn a *really* rough estimate of the dist. using the Approx DP algo. We then build a finite cover for a TV ball around the rough estimate. 6/8 We then use the pure DP algo to learn given this finite cover. Turns out doing this gets better dependence on the parameters of the problem. Lastly, the algorithms we mentioned need to know ""how far"" the best dist. in the set is compared to the unknown dist. we sample from. 7/8 Non-privately, we dont need to know this. We give a pure DP and sample efficient agnostic algo that basically privatizes the minimum distance estimate (MDE) à la Yatracos. MDE maximizes a low sensitivity function, so plugging it into the exponential mechanism ""just works"" 8/8 We learned a lot about DP from @thegautamkamath and look forward to learning more :) 9/8 @zafateniac Thanks Zain Bhai :)",https://arxiv.org/abs/2010.09929,"We provide sample complexity upper bounds for agnostically learning multivariate Gaussians under the constraint of approximate differential privacy. These are the first finite sample upper bounds for general Gaussians which do not impose restrictions on the parameters of the distribution. Our bounds are near-optimal in the case when the covariance is known to be the identity, and conjectured to be near-optimal in the general case. From a technical standpoint, we provide analytic tools for arguing the existence of global ""locally small"" covers from local covers of the space. These are exploited using modifications of recent techniques for differentially private hypothesis selection. Our techniques may prove useful for privately learning other distribution classes which do not possess a finite cover. ","On the Sample Complexity of Privately Learning Unbounded |
High-Dimensional Gaussians",10,"['New paper with @ashtiani_hassan and @thegautamkamath on privately learning Gaussians. \n\n<LINK>\n\nUnder aprox DP, we get near optimal bounds for learning Gaussians with known covariance and (likely) near optimal bounds for general Gaussians 1/8 <LINK>', 'We build on work by @markmbun @thegautamkamath @shortstein @zstevenwu that presented two DP algos for private hypothesis selection (hypotheses here is a set of distributions we want to pick from). The first is a pure DP algo that can learn given a finite set of distributions. 2/8', 'The second is an approx DP algo that can learn given an infinite set of distributions. The second algorithm requires a cover for the set of distributions that is ""locally small"" in some technical sense 3/8', 'Unfortunately, explicitly deriving such covers can be very tedious even for simple classes of distributions. We show that we can get around this. If a TV ball around *every* dist. in the set can be covered with a ""small"" # dists., then the class has a \'\'locally small\'\' cover 4/8', 'Showing that the TV ball around any distribution in the class can be covered is a simpler problem, making the analysis less hairy. 5/8', 'We also refine existing (and new) sample complexity bounds by doing a two step DP procedure: In the first step we learn a *really* rough estimate of the dist. using the Approx DP algo. We then build a finite cover for a TV ball around the rough estimate. 6/8', 'We then use the pure DP algo to learn given this finite cover. Turns out doing this gets better dependence on the parameters of the problem. \n\nLastly, the algorithms we mentioned need to know ""how far"" the best dist. in the set is compared to the unknown dist. we sample from. 7/8', 'Non-privately, we dont need to know this. We give a pure DP and sample efficient agnostic algo that basically privatizes the minimum distance estimate (MDE) à la Yatracos. MDE maximizes a low sensitivity function, so plugging it into the exponential mechanism ""just works"" 8/8', 'We learned a lot about DP from @thegautamkamath and look forward to learning more :) 9/8', '@zafateniac Thanks Zain Bhai :)']",20,10,2086 |
3769,61,1417712570647998473,145500767,Enrique Lopez Rodriguez,"New paper w/ @asborlaff @suzIQUV from our Legacy Program on extragalactic magnetism: The gas streams of NGC1097 follow the B-field, which feeds the black hole with matter from the host galaxy. We used @ehtelescope techniques to estimate B-field modes. <LINK> <LINK>",https://arxiv.org/abs/2107.09063,"Galactic bars are frequent in disk galaxies and they may support the transfer of matter towards the central engine of active nuclei. The barred galaxy NGC 1097 has magnetic forces controlling the gas flow at several kpc scales, which suggest that magnetic fields (B-fields) are dynamically important along the bar and nuclear ring. However, the effect of the B-field on the gas flows in the central kpc scale has not been characterized. Using thermal polarized emission at $89$ $\mu$m with HAWC+/SOFIA, here, we measure that the polarized flux is spatially located at the contact regions of the outer-bar with the starburst ring. The linear polarization decomposition analysis shows that the $89$ $\mu$m and radio ($3.5$ and $6.2$ cm) polarization traces two different modes, $m$, of the B-field: a constant B-field orientation and dominated by $m=0$ at $89$ $\mu$m, and a spiral B-field dominated by $m=2$ at radio. We show that the B-field at 89 $\mu$m is concentrated in the warmest region of a shock driven by the galactic-bar dynamics in the contact regions between the outer-bar with the starburst ring. Radio polarization traces a superposition of the spiral B-field outside and within the starburst ring. According to Faraday rotation measures between $3.5$ and $6.2$ cm, the radial component of the B-field along the contact regions points toward the galaxy's center on both sides. We conclude that gas streams outside and within the starburst ring follow the B-field, which feeds the black hole with matter from the host galaxy. ","Extragalactic magnetism with SOFIA (Legacy Program) -- II: A |
magnetically-driven flow in the starburst ring of NGC 1097",1,"['New paper w/ @asborlaff @suzIQUV from our Legacy Program on extragalactic magnetism: The gas streams of NGC1097 follow the B-field, which feeds the black hole with matter from the host galaxy. We used @ehtelescope techniques to estimate B-field modes. \n<LINK> <LINK>']",21,07,265 |
3770,137,1511764384439316485,95734122,Clément Livache,"Our paper on optically excited ASE in a fully-working, high current quantum dot LED is available on arXiv! We show that we can make a solution-processed LED that supports light amplification thanks to new high-gain dots and reduced optical losses. <LINK>",https://arxiv.org/abs/2204.01929,"Laser diodes based on solution-processable materials could benefit numerous technologies including integrated electronics and photonics, telecommunication, and medical diagnostics. An attractive system for implementing these devices is colloidal semiconductor quantum dots (QDs). The primary challenge that hampered progress towards a QD laser diode (QLD) has been fast nonradiative Auger decay of optical-gain-active multicarrier states. Recently, this problem has been resolved by employing continuously graded QDs (cg-QDs) wherein Auger recombination is strongly suppressed. The use of these structures allowed for demonstrations of optical gain with electrical pumping and optically-excited lasing in multilayered LED-like devices. Here we report on achieving the next critical milestone towards a QLD, which is the demonstration of optically excited amplified spontaneous emission from a fully functional high-current density electroluminescent device. This advance has become possible due to excellent optical gain properties of novel 'compact' cg-QDs and a new LED architecture, which allows for concerted optimization of its optical and electrical properties. The results of this work strongly suggest the feasibility of the final step towards a functional QLD, which is the demonstration of lasing with electrical pumping. ","Optically Excited Two-Band Amplified Spontaneous Emission from a |
High-Current-Density Quantum-Dot LED",1,"['Our paper on optically excited ASE in a fully-working, high current quantum dot LED is available on arXiv! We show that we can make a solution-processed LED that supports light amplification thanks to new high-gain dots and reduced optical losses. <LINK>']",22,04,254 |
3771,85,973203630412173312,308587014,Robert Feldt,"Our mega-study (many, many authors :)) on artificial creativity now also on arxiv (in submission since last spring but we want to be able to also ref it during this (long) process...) <LINK> In particular, I'm very happy to be co-author with Karl Sims whose classic work on evolving virtual creatures helped me dare to apply artificial evolution in SW Engineering in my PhD work in late 90's: <LINK>",https://arxiv.org/abs/1803.03453,"Biological evolution provides a creative fount of complex and subtle adaptations, often surprising the scientists who discover them. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, evolution's creativity is not limited to nature. Indeed, many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, exposing unrecognized bugs in their code, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. Such stories routinely reveal creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems. ","The Surprising Creativity of Digital Evolution: A Collection of |
Anecdotes from the Evolutionary Computation and Artificial Life Research |
Communities",2,"['Our mega-study (many, many authors :)) on artificial creativity now also on arxiv (in submission since last spring but we want to be able to also ref it during this (long) process...)\n<LINK>', ""In particular, I'm very happy to be co-author with Karl Sims whose classic work on evolving virtual creatures helped me dare to apply artificial evolution in SW Engineering in my PhD work in late 90's:\nhttps://t.co/C2NMFhhuwq""]",18,03,399 |
3772,69,1130628581581762561,813503217158017024,Brian L Trippe,"I'm excited to share a new paper with @jhhhuggins, Raj Agrawal and @ta_broderick coming out @icmlconf! We speed up Bayesian inference in high dimensional generalized linear models using low rank approximations of data, with a method we call ""LR-GLM"". <LINK> 1/5 With LR-GLM, we make inference with Laplace approximations and MCMC faster by up to full factor of the dimensionality. The rank of the approximation defines a trade-off between the computational demands and accuracy of the approximation. 2/5 Unlike variational Bayes, this approximation is conservative; LR-GLM doesn't underestimate uncertainty. We also show how increasing computational budget increases the information extracted from data about the model. 3/5 Additionally, we provide theoretical guarantees on approximation quality, with non-asymptotic bounds on approximation error of posterior means and uncertainties. 4/5 Also check out our other ICML paper (led by Raj Agrawal) on ""The Kernel Interaction Trick"", which allows us to use Bayesian inference to efficiently identify pairwise interactions between covariates in regression models! <LINK> 5/5",https://arxiv.org/abs/1905.07499,"Due to the ease of modern data collection, applied statisticians often have access to a large set of covariates that they wish to relate to some observed outcome. Generalized linear models (GLMs) offer a particularly interpretable framework for such an analysis. In these high-dimensional problems, the number of covariates is often large relative to the number of observations, so we face non-trivial inferential uncertainty; a Bayesian approach allows coherent quantification of this uncertainty. Unfortunately, existing methods for Bayesian inference in GLMs require running times roughly cubic in parameter dimension, and so are limited to settings with at most tens of thousand parameters. We propose to reduce time and memory costs with a low-rank approximation of the data in an approach we call LR-GLM. When used with the Laplace approximation or Markov chain Monte Carlo, LR-GLM provides a full Bayesian posterior approximation and admits running times reduced by a full factor of the parameter dimension. We rigorously establish the quality of our approximation and show how the choice of rank allows a tunable computational-statistical trade-off. Experiments support our theory and demonstrate the efficacy of LR-GLM on real large-scale datasets. ","LR-GLM: High-Dimensional Bayesian Inference Using Low-Rank Data |
Approximations",5,"['I\'m excited to share a new paper with @jhhhuggins, Raj Agrawal and @ta_broderick coming out @icmlconf! We speed up Bayesian inference in high dimensional generalized linear models using low rank approximations of data, with a method we call ""LR-GLM"".\n<LINK> 1/5', 'With LR-GLM, we make inference with Laplace approximations and MCMC faster by up to full factor of the dimensionality. The rank of the approximation defines a trade-off between the computational demands and accuracy of the approximation. 2/5', ""Unlike variational Bayes, this approximation is conservative; LR-GLM doesn't underestimate uncertainty. We also show how increasing computational budget increases the information extracted from data about the model. 3/5"", 'Additionally, we provide theoretical guarantees on approximation quality, with non-asymptotic bounds on approximation error of posterior means and uncertainties. 4/5', 'Also check out our other ICML paper (led by Raj Agrawal) on ""The Kernel Interaction Trick"", which allows us to use Bayesian inference to efficiently identify pairwise interactions between covariates in regression models!\nhttps://t.co/dxM470TzW4 5/5']",19,05,1121 |
3773,85,1348629306055004161,1192152664412475393,Fulvio Gesmundo,"We study the dimension of Tensor Network Varieties, with a focus on cases relevant for quantum many-body physics such as MPS and PEPS. Take a look at our new paper on ""Dimension of Tensor Network Varieties"", with A. Bernardi and C. De Lazzari, at <LINK> In this work, we give a general upper bound for the dimension of these varieties and we show this bound is sharp in a particular range. We highlight few examples where the varieties are ""smaller than expected"". Feel free to comment and give feedback!",https://arxiv.org/abs/2101.03148,"The tensor network variety is a variety of tensors associated to a graph and a set of positive integer weights on its edges, called bond dimensions. We determine an upper bound on the dimension of the tensor network variety. A refined upper bound is given in cases relevant for applications such as varieties of matrix product states and projected entangled pairs states. We provide a range (the ""supercritical range"") of the parameters where the upper bound is sharp. ",Dimension of Tensor Network varieties,2,"['We study the dimension of Tensor Network Varieties, with a focus on cases relevant for quantum many-body physics such as MPS and PEPS.\n\nTake a look at our new paper on ""Dimension of Tensor Network Varieties"", with A. Bernardi and C. De Lazzari, at <LINK>', 'In this work, we give a general upper bound for the dimension of these varieties and we show this bound is sharp in a particular range. We highlight few examples where the varieties are ""smaller than expected"".\n\nFeel free to comment and give feedback!']",21,01,504 |
3774,61,963347790335234050,135150782,Michele Ginolfi,"My new paper ""Extended and broad Lyα emission around a BAL quasar at z~5"", accepted by MNRAS, just appeared on the @arxiv! Our Lya Nebula (in the pic below), observed with the @ESO VLT-MUSE telescope, shows some interesting properties..find out more at <LINK> <LINK>",https://arxiv.org/abs/1802.03400,"In this work we report deep MUSE observations of a Broad Absorption Line (BAL) quasar at z ~ 5, revealing a Lyman alpha nebula with a maximum projected linear size of ~ 60 kpc around the quasar (down to our 2-sigma SB limit per layer of ~ 9e-19 erg/s/cm^2/arcsec^2 for a 1 arcsec^2 aperture). After correcting for the cosmological surface brightness dimming, we find that our nebula, at z ~ 5, has an intrinsically less extended Lyman alpha emission than nebulae at lower redshift. However, such a discrepancy is greatly reduced when referring to comoving distances, which take into account the cosmological growth of dark matter (DM) haloes, suggesting a positive correlation between the size of Lyman alpha nebulae and the sizes of DM haloes/structures around quasars. Differently from the typical nebulae around radio-quiet non-BAL quasars, in the inner regions (~ 10 kpc) of the circumgalactic medium (CGM) of our source, the velocity dispersion of the Lyman alpha emission is very high (FWHM > 1000 km/s), suggesting that in our case we may be probing outflowing material associated with the quasar. ",Extended and broad Lyman alpha emission around a BAL quasar at z~5,1,"['My new paper ""Extended and broad Lyα emission around a BAL quasar at z~5"", accepted by MNRAS, just appeared on the @arxiv! \nOur Lya Nebula (in the pic below), observed with the @ESO VLT-MUSE telescope, shows some interesting properties..find out more at <LINK> <LINK>']",18,02,266 |
3775,38,1042054735539449857,112053784,Kaley Brauer 💫,"My new paper with @alexanderpji is on arXiv! It’s about how we can likely use old stars we see today (in this case, r-II stars) to learn about the smallest galaxies that merged together to form the Milky Way. This is an example of *stellar archaeology*!✨ <LINK> The old stars in the extended outskirts of the Milky Way (called our “stellar halo”) preserve a ton of information about the early universe and the history of our galaxy. We don’t know how to decode that info, though, so I’ve been developing a model to help. That’s why it’s called “stellar archaeology”: we’re looking at old stars to learn about the history of our galaxy/universe! 🌟 This paper shows the first result of the model I’ve been working on. We’ve found a likely connection between old r-process-enhanced stars and small, faint galaxies that were destroyed during the formation of the Milky Way, the smallest building blocks of our galaxy. This is extra exciting because the r-process forms the heaviest elements in our galaxy (including gold 🥇), so this could also help us understand the origin of these elements.",https://arxiv.org/abs/1809.05539,"The highly r-process enhanced (r-II) metal-poor halo stars we observe today could play a key role in understanding early ultra-faint dwarf galaxies, the smallest building blocks of the Milky Way. If a significant fraction of metal-poor r-II halo stars originated in the ultra-faint dwarf galaxies that merged to help form the Milky Way, observations of r-II stars could help us study these now-destroyed systems and probe the formation history of our Galaxy. To conduct our initial investigation into this possible connection, we use high-resolution cosmological simulations of Milky-Way-mass galaxies from the Caterpillar suite in combination with a simple, empirically motivated treatment of r-process enrichment. We determine the fraction of metal-poor halo stars that could have formed from highly r-process enhanced gas in now-destroyed low-mass ultra-faint dwarf galaxies, the simulated r-II fraction, and compare it to the ""as observed"" r-II fraction. We find that the simulated fraction, f_{r-II,sim} ~ 1-2%, can account for around half of the ""as observed"" fraction, f_{r-II,obs} ~ 2-4%. The ""as observed"" fraction likely overrepresents the fraction of r-II stars due to incomplete sampling, though, meaning f_{r-II,sim} likely accounts for more than half of the true f_{r-II,obs}. Further considering some parameter variations and scatter between individual simulations, the simulated fraction can account for around 20-80% of the ""as observed"" fraction. ","The Origin of r-process Enhanced Metal-Poor Halo Stars In Now-Destroyed |
Ultra-Faint Dwarf Galaxies",5,"['My new paper with @alexanderpji is on arXiv! It’s about how we can likely use old stars we see today (in this case, r-II stars) to learn about the smallest galaxies that merged together to form the Milky Way. This is an example of *stellar archaeology*!✨ <LINK>', 'The old stars in the extended outskirts of the Milky Way (called our “stellar halo”) preserve a ton of information about the early universe and the history of our galaxy. We don’t know how to decode that info, though, so I’ve been developing a model to help.', 'That’s why it’s called “stellar archaeology”: we’re looking at old stars to learn about the history of our galaxy/universe! 🌟', 'This paper shows the first result of the model I’ve been working on. We’ve found a likely connection between old r-process-enhanced stars and small, faint galaxies that were destroyed during the formation of the Milky Way, the smallest building blocks of our galaxy.', 'This is extra exciting because the r-process forms the heaviest elements in our galaxy (including gold 🥇), so this could also help us understand the origin of these elements.']",18,09,1088 |
3776,139,1476602076939661324,806996203,Sumon Biswas,Preprint of our ICSE'22 paper is available <LINK> Researching on the architecture & design of data science pipelines OR building new pipelines? Take a look. @hridesh Glad to end the year by sharing @ICSEconf paper. Hopefully will meet all this time @Pittsburgh <LINK>,https://arxiv.org/abs/2112.01590,"Increasingly larger number of software systems today are including data science components for descriptive, predictive, and prescriptive analytics. The collection of data science stages from acquisition, to cleaning/curation, to modeling, and so on are referred to as data science pipelines. To facilitate research and practice on data science pipelines, it is essential to understand their nature. What are the typical stages of a data science pipeline? How are they connected? Do the pipelines differ in the theoretical representations and that in the practice? Today we do not fully understand these architectural characteristics of data science pipelines. In this work, we present a three-pronged comprehensive study to answer this for the state-of-the-art, data science in-the-small, and data science in-the-large. Our study analyzes three datasets: a collection of 71 proposals for data science pipelines and related concepts in theory, a collection of over 105 implementations of curated data science pipelines from Kaggle competitions to understand data science in-the-small, and a collection of 21 mature data science projects from GitHub to understand data science in-the-large. Our study has led to three representations of data science pipelines that capture the essence of our subjects in theory, in-the-small, and in-the-large. ","The Art and Practice of Data Science Pipelines: A Comprehensive Study of |
Data Science Pipelines In Theory, In-The-Small, and In-The-Large",1,"[""Preprint of our ICSE'22 paper is available <LINK>\nResearching on the architecture & design of data science pipelines OR building new pipelines? Take a look. @hridesh \n\nGlad to end the year by sharing @ICSEconf paper. Hopefully will meet all this time @Pittsburgh <LINK>""]",21,12,268 |
3777,161,1264705883662987265,2228815292,Lewis Mitchell,"Curtis Murray tracks the ‘arc’ of patient experience with #Covid19 using Reddit posts to /r/COVIDPositive — we find that you can extract symptoms, and the emotions associated with them. Check it out on the arxiv: <LINK> <LINK> Using sentiment analysis, we find that there are two main symptom-emotion clusters (+ and -), and map the landscape of these in the narratives. Pretty pictures abound! <LINK> We hope that this might be able to help with rapid identification of symptoms, and also of emotional/mental health issues associated with #Covid19. Joint work with @jonotuke and Mark Mackay. @MathsUOA @ACEMathStats <LINK>",https://arxiv.org/abs/2005.10454,"Social media discussion of COVID-19 provides a rich source of information into how the virus affects people's lives that is qualitatively different from traditional public health datasets. In particular, when individuals self-report their experiences over the course of the virus on social media, it can allow for identification of the emotions each stage of symptoms engenders in the patient. Posts to the Reddit forum r/COVID19Positive contain first-hand accounts from COVID-19 positive patients, giving insight into personal struggles with the virus. These posts often feature a temporal structure indicating the number of days after developing symptoms the text refers to. Using topic modelling and sentiment analysis, we quantify the change in discussion of COVID-19 throughout individuals' experiences for the first 14 days since symptom onset. Discourse on early symptoms such as fever, cough, and sore throat was concentrated towards the beginning of the posts, while language indicating breathing issues peaked around ten days. Some conversation around critical cases was also identified and appeared at a roughly constant rate. We identified two clear clusters of positive and negative emotions associated with the evolution of these symptoms and mapped their relationships. Our results provide a perspective on the patient experience of COVID-19 that complements other medical data streams and can potentially reveal when mental health issues might appear. ","Symptom extraction from the narratives of personal experiences with |
COVID-19 on Reddit",3,"['Curtis Murray tracks the ‘arc’ of patient experience with #Covid19 using Reddit posts to /r/COVIDPositive — we find that you can extract symptoms, and the emotions associated with them. Check it out on the arxiv: <LINK> <LINK>', 'Using sentiment analysis, we find that there are two main symptom-emotion clusters (+ and -), and map the landscape of these in the narratives. Pretty pictures abound! https://t.co/gCQAURAgLP', 'We hope that this might be able to help with rapid identification of symptoms, and also of emotional/mental health issues associated with #Covid19. Joint work with @jonotuke and Mark Mackay. @MathsUOA @ACEMathStats https://t.co/ct7N1akalN']",20,05,623 |
3778,130,1436237195074015233,16389141,Massimo Nicosia,📄 Our new EMNLP paper is on arXiv! 📄 1⃣ Train an mT5 filler model to reconstruct full parses from English utterances + parse signatures 2⃣ Run it on translations and parse signatures to obtain high quality i18n synthetic data! More here:👉 <LINK> 👈 @Google,https://arxiv.org/abs/2109.04319,"While multilingual pretrained language models (LMs) fine-tuned on a single language have shown substantial cross-lingual task transfer capabilities, there is still a wide performance gap in semantic parsing tasks when target language supervision is available. In this paper, we propose a novel Translate-and-Fill (TaF) method to produce silver training data for a multilingual semantic parser. This method simplifies the popular Translate-Align-Project (TAP) pipeline and consists of a sequence-to-sequence filler model that constructs a full parse conditioned on an utterance and a view of the same parse. Our filler is trained on English data only but can accurately complete instances in other languages (i.e., translations of the English training utterances), in a zero-shot fashion. Experimental results on three multilingual semantic parsing datasets show that data augmentation with TaF reaches accuracies competitive with similar systems which rely on traditional alignment techniques. ","Translate & Fill: Improving Zero-Shot Multilingual Semantic Parsing with |
Synthetic Data",1,['📄 Our new EMNLP paper is on arXiv! 📄\n\n1⃣ Train an mT5 filler model to reconstruct full parses from English utterances + parse signatures\n2⃣ Run it on translations and parse signatures to obtain high quality i18n synthetic data!\n\nMore here:👉 <LINK> 👈\n\n @Google'],21,09,256 |
3779,4,1325453435601260545,765403985205354496,Tian Xu,A theoretical study of two famous IL methods: BC and GAIL for imitating both policies and environments. Our analysis of imitating environments also suggests a new direction in MBRL. Check our NeurIPS paper for details! <LINK> Joint work with @ZiniuLi and Yang Yu <LINK>,https://arxiv.org/abs/2010.11876,"Imitation learning trains a policy by mimicking expert demonstrations. Various imitation methods were proposed and empirically evaluated, meanwhile, their theoretical understanding needs further studies. In this paper, we firstly analyze the value gap between the expert policy and imitated policies by two imitation methods, behavioral cloning and generative adversarial imitation. The results support that generative adversarial imitation can reduce the compounding errors compared to behavioral cloning, and thus has a better sample complexity. Noticed that by considering the environment transition model as a dual agent, imitation learning can also be used to learn the environment model. Therefore, based on the bounds of imitating policies, we further analyze the performance of imitating environments. The results show that environment models can be more effectively imitated by generative adversarial imitation than behavioral cloning, suggesting a novel application of adversarial imitation for model-based reinforcement learning. We hope these results could inspire future advances in imitation learning and model-based reinforcement learning. ",Error Bounds of Imitating Policies and Environments,1,['A theoretical study of two famous IL methods: BC and GAIL for imitating both policies and environments. Our analysis of imitating environments also suggests a new direction in MBRL. Check our NeurIPS paper for details! <LINK> Joint work with @ZiniuLi and Yang Yu <LINK>'],20,10,269 |
3780,195,1403263384657793028,1311352436,Andrew Sellek,"New paper out today! <LINK> We investigate how self-similar models from Clarke and Alexander (2016) generalise to non-isothermal winds and winds launched from an elevated base. Reasonable temperature gradients make quite small differences... (1/5) <LINK> …the most key parameter in setting the maximum launch speed (Mach number) turns out to be the elevation of the wind base (phi_b). We tested the applicability of the models with hydrodynamic simulations. Not only do the scale free simulations agree excellently as before… (2/5) <LINK> …but so do those with gravity included, and even broken power law models for the density, in which the self-similar models predict launch velocities excellently away from the transition radius, though winds are not launched beyond wherever the density steepens beyond r^-2. (3/5) <LINK> We find that much of the behaviour can be understood by the idea that the wind must fill the available space. For example, if we limit the elevation to which the wind can rise, then the wind adopts more curved solutions with lower launch velocities than the maximum allowed. (4/5) <LINK> Finally we consider the impact of the elevated base on [Ne II] line profiles, for which the dependence of the FWHM on inclination is reduced compared to winds launched from the midplane – such effects should be considered important if they are used to infer wind properties. (5/5) <LINK> Forgot to add that for anyone interested in using the solutions, I've made my python script to calculate them available here: <LINK>! Also includes a few examples used in the paper and some tables from our parameter searches (see contour plot in tweet 2) (6/5). @RTal421 Thanks, Rosie! @r_d_alexander Thanks, Richard, I'm glad to hear your feedback given how well you know these solutions! I agree it's probably just about at our limit right now if we want to firmly measure the effect of elevation, but hopefully they can help us understand observations more robustly soon!",https://arxiv.org/abs/2106.05362,"Thermal disc winds occur in many contexts and may be particularly important to the secular evolution and dispersal of protoplanetary discs heated by high energy radiation from their central star. In this paper we generalise previous models of self-similar thermal winds - which have self-consistent morphology and variation of flow variables - to the case of launch from an elevated base and to non-isothermal conditions. These solutions are well-reproduced by hydrodynamic simulations, in which, as in the case of isothermal winds launched from the mid-plane, we find winds launch at the maximum Mach number for which the streamline solutions extend to infinity without encountering a singularity. We explain this behaviour based on the fact that lower Mach number solutions do not fill the spatial domain. We also show that hydrodynamic simulations reflect the corresponding self-similar models across a range of conditions appropriate to photoevaporating protoplanetary discs, even when gravity, centrifugal forces, or changes in the density gradient mean the problem is not inherently scale free. Of all the parameters varied, the elevation of the wind base affected the launch velocity and flow morphology most strongly, with temperature gradients causing only minor differences. 
We explore how launching from an elevated base affects Ne II line profiles from winds, finding it increases (reduces) the full width at half maximum (FWHM) of the line at low (high) inclination to the line of sight compared with models launched from the disc mid-plane and thus weakens the dependence of the FWHM on inclination. ","The general applicability of self-similar solutions for thermal disc |
winds",8,"['New paper out today! <LINK>\nWe investigate how self-similar models from Clarke and Alexander (2016) generalise to non-isothermal winds and winds launched from an elevated base. Reasonable temperature gradients make quite small differences... (1/5) <LINK>', '…the most key parameter in setting the maximum launch speed (Mach number) turns out to be the elevation of the wind base (phi_b). We tested the applicability of the models with hydrodynamic simulations. Not only do the scale free simulations agree excellently as before… (2/5) https://t.co/qrDPM7Tfwv', '…but so do those with gravity included, and even broken power law models for the density, in which the self-similar models predict launch velocities excellently away from the transition radius, though winds are not launched beyond wherever the density steepens beyond r^-2. (3/5) https://t.co/X2aZb5rMMk', 'We find that much of the behaviour can be understood by the idea that the wind must fill the available space. For example, if we limit the elevation to which the wind can rise, then the wind adopts more curved solutions with lower launch velocities than the maximum allowed. (4/5) https://t.co/d84Bmz7760', 'Finally we consider the impact of the elevated base on [Ne II] line profiles, for which the dependence of the FWHM on inclination is reduced compared to winds launched from the midplane – such effects should be considered important if they are used to infer wind properties. (5/5) https://t.co/adziAOmMun', ""Forgot to add that for anyone interested in using the solutions, I've made my python script to calculate them available here: https://t.co/UBg3kt0bIz! Also includes a few examples used in the paper and some tables from our parameter searches (see contour plot in tweet 2) (6/5)."", '@RTal421 Thanks, Rosie!', ""@r_d_alexander Thanks, Richard, I'm glad to hear your feedback given how well you know these solutions! I agree it's probably just about at our limit right now if we want to firmly measure the effect of elevation, but hopefully they can help us understand observations more robustly soon!""]",21,06,1976 |
3781,158,1423670629383933959,977906884886827008,Marcos Mariño,"In my last paper with friends at @Sissaschool and @UniTrieste (<LINK>) we study the interplay between resurgence and the 1/N expansion in integrable quantum field theories in 2d. We find that, in certain cases, resurgence is undermined by the large N limit! This might be an indication that the strong version of resurgence is not valid in theories with renormalons, or it might be an effect of truncating the perturbative expansion to a fixed order in 1/N On the positive side, we find a new, explicit example of resurgence in the large N limit of the principal chiral field, which involves an infinite sequence of IR renormalons whose trans-series can be computed analytically. Quite beautiful. In retrospect, this tension between resurgence and the 1/N expansion can be seen to be present in older calculations in solvable models, but it was not pointed out explicitly",https://arxiv.org/abs/2108.02647,"In theories with renormalons the perturbative series is factorially divergent even after restricting to a given order in $1/N$, making the $1/N$ expansion a natural testing ground for the theory of resurgence. We study in detail the interplay between resurgent properties and the $1/N$ expansion in various integrable field theories with renormalons. We focus on the free energy in the presence of a chemical potential coupled to a conserved charge, which can be computed exactly with the thermodynamic Bethe ansatz (TBA). In some examples, like the first $1/N$ correction to the free energy in the non-linear sigma model, the terms in the $1/N$ expansion can be fully decoded in terms of a resurgent trans-series in the coupling constant. In the principal chiral field we find a new, explicit solution for the large $N$ free energy which can be written as the median resummation of a trans-series with infinitely many, analytically computable IR renormalon corrections. However, in other examples, like the Gross-Neveu model, each term in the $1/N$ expansion includes non-perturbative corrections which can not be predicted by a resurgent analysis of the corresponding perturbative series. We also study the properties of the series in $1/N$. In the Gross-Neveu model, where this is convergent, we analytically continue the series beyond its radius of convergence and show how the continuation matches with known dualities with sine-Gordon theories. ",Resurgence and $1/N$ Expansion in Integrable Field Theories,4,"['In my last paper with friends at @Sissaschool and @UniTrieste (<LINK>) we study the interplay between resurgence and the 1/N expansion in integrable quantum field theories in 2d. We find that, in certain cases, resurgence is undermined by the large N limit!', 'This might be an indication that the strong version of resurgence is not valid in theories with renormalons, or it might be an effect of truncating the perturbative expansion to a fixed order in 1/N', 'On the positive side, we find a new, explicit example of resurgence in the large N limit of the principal chiral field, which involves an infinite sequence of IR renormalons whose trans-series can be computed analytically. Quite beautiful.', 'In retrospect, this tension between resurgence and the 1/N expansion can be seen to be present in older calculations in solvable models, but it was not pointed out explicitly']",21,08,871 |
3782,180,1508791099887923200,871719324867854336,Christos Charalambous,"How can, a small group of people with prejudices, convince the rest of a community about their preference faster? We find that this requires the biased agents to form a more endogamous community, while the unbiased agents a more exogamous one. <LINK> In the classical voter model, agents copy their neighbor’s opinion, if this is different. In the variation considered here, all agents have confidence, i.e. a non-zero probability to remain with their current opinion. A subgroup of agents has an opinion-dependent confidence, modelling a prejudice towards one of the opinions.We study the problem analytically and numerically,at both the thermodynamic limit and when finite size effects are in place,on a complete graph and on an ER random network. Finally, we consider the case where the topology of interactions depends on the type of agents residing at each node. We ask the question “how can the heterogeneous topology be exploited by the biased agents to convince the rest of the society about their prejudice faster?” Given a constant global average number of interactions in the community, we find that increasing the number of confrontations among members of the two groups(inter-group biased and unbiased) at the expense of the interactions among their peers, does not benefit the biased agents. On the contrary, if the inter-group (biased interacting with unbiased) interactions are kept constant, and the interactions of biased agents among themselves are increased at the expense of the interactions of unbiased agents among themselves, this reduces the time to consensus. A closed biased group (biased agents interact more among themselves rather than with unbiased agents) and an open unbiased group (unbiased agents interact more with biased agents rather than among themselves) is a scenario that favors the biased agents to spread their preference.",https://arxiv.org/abs/2203.05376,"We study the voter model dynamics in the presence of confidence and bias. We assume two types of voters. Unbiased voters whose confidence is indifferent to the state of the voter and biased voters whose confidence is biased towards a common fixed preferred state. We study the problem analytically on the complete graph using mean field theory and on an Erd\H{o}s-R\'enyi random network topology using the pair approximation, where we assume that the network of interactions topology is independent of the type of voters. We find that for the case of a random initial setup, and for sufficiently large number of voters $N$, the time to consensus increases proportionally to $\log(N)/\gamma v$, with $\gamma$ the fraction of biased voters and $v$ the parameter quantifying the bias of the voters ($v=0$ no bias). We verify our analytical results through numerical simulations. We study this model on a biased-dependent topology of the network of interactions and examine two distinct, global average-degree preserving strategies (model I and model II) to obtain such biased-dependent random topologies starting from the biased-independent random topology case as the initial setup. Keeping all other parameters constant, in model I, $\mu_{BU}$, the average number of links among biased (B) and unbiased (U) voters is varied at the expense of $\mu_{UU}$ and $\mu_{BB}$, i.e. the average number of links among only unbiased and biased voters respectively. In model II, $\mu_{BU}$ is kept constant, while $\mu_{BB}$ is varied at the expense of $\mu_{UU}$. 
We find that if the agents follow the strategy described by model II, they can achieve a significant reduction in the time to reach consensus as well as an increment in the probability to reach consensus to the preferred state. ",Biased-voter model: how persuasive a small group can be?,7,"['How can, a small group of people with prejudices, convince the rest of a community about their preference faster? We find that this requires the biased agents to form a more endogamous community, while the unbiased agents a more exogamous one. <LINK>', 'In the classical voter model, agents copy their neighbor’s opinion, if this is different. In the variation considered here, all agents have confidence, i.e. a non-zero probability to remain with their current opinion.', 'A subgroup of agents has an opinion-dependent confidence, modelling a prejudice towards one of the opinions.We study the problem analytically and numerically,at both the thermodynamic limit and when finite size effects are in place,on a complete graph and on an ER random network.', 'Finally, we consider the case where the topology of interactions depends on the type of agents residing at each node. We ask the question “how can the heterogeneous topology be exploited by the biased agents to convince the rest of the society about their prejudice faster?”', 'Given a constant global average number of interactions in the community, we find that increasing the number of confrontations among members of the two groups(inter-group biased and unbiased) at the expense of the interactions among their peers, does not benefit the biased agents.', 'On the contrary, if the inter-group (biased interacting with unbiased) interactions are kept constant, and the interactions of biased agents among themselves are increased at the expense of the interactions of unbiased agents among themselves, this reduces the time to consensus.', 'A closed biased group (biased agents interact more among themselves rather than with unbiased agents) and an open unbiased group (unbiased agents interact more with biased agents rather than among themselves) is a scenario that favors the biased agents to spread their preference.']",22,03,1866 |
3783,62,1149477660013162499,890966966726479874,Jesse Thomason,"Check out our new preprint, ""Vision-and-Dialog Navigation"", where we extend the VLN paradigm with a corpus of two-sided, cooperative human-human navigation dialogs! Work with @mmurz, @mayacakmak, and @LukeZettlemoyer. paper: <LINK> code: <LINK> <LINK> The Cooperative Vision-and-Language Navigation (CVDN) corpus adds a dynamic, dialog-based language component to the already-dynamic visual context of VLN, further contrasting tasks like VisDial that have dynamic, dialog-based language context but static visual context. <LINK> CVDN is a resource for training agents that navigate, ask questions, and answer navigation questions. We first focus on navigation, and demonstrate an initial seq2seq model that benefits from longer dialog histories as language context for navigating towards a goal location. <LINK> Feel free to get in touch with us about using CVDN! The full CVDN dialog data as well as the Navigation from Dialog History task data are available along with the code linked above. You crowdsourcing interface too: <LINK> (use two tabs to connect with yourself)",https://arxiv.org/abs/1907.04957,"Robots navigating in human environments should use language to ask for assistance and be able to understand human responses. To study this challenge, we introduce Cooperative Vision-and-Dialog Navigation, a dataset of over 2k embodied, human-human dialogs situated in simulated, photorealistic home environments. The Navigator asks questions to their partner, the Oracle, who has privileged access to the best next steps the Navigator should take according to a shortest path planner. To train agents that search an environment for a goal location, we define the Navigation from Dialog History task. An agent, given a target object and a dialog history between humans cooperating to find that object, must infer navigation actions towards the goal in unexplored environments. We establish an initial, multi-modal sequence-to-sequence model and demonstrate that looking farther back in the dialog history improves performance. Sourcecode and a live interface demo can be found at this https URL ",Vision-and-Dialog Navigation,4,"['Check out our new preprint, ""Vision-and-Dialog Navigation"", where we extend the VLN paradigm with a corpus of two-sided, cooperative human-human navigation dialogs! Work with @mmurz, @mayacakmak, and @LukeZettlemoyer.\npaper: <LINK>\ncode: <LINK> <LINK>', 'The Cooperative Vision-and-Language Navigation (CVDN) corpus adds a dynamic, dialog-based language component to the already-dynamic visual context of VLN, further contrasting tasks like VisDial that have dynamic, dialog-based language context but static visual context. https://t.co/hDKdYJ9qcK', 'CVDN is a resource for training agents that navigate, ask questions, and answer navigation questions. We first focus on navigation, and demonstrate an initial seq2seq model that benefits from longer dialog histories as language context for navigating towards a goal location. https://t.co/nSbzQ9jUGA', 'Feel free to get in touch with us about using CVDN! The full CVDN dialog data as well as the Navigation from Dialog History task data are available along with the code linked above.\nYou crowdsourcing interface too: https://t.co/AOey4IvBRa (use two tabs to connect with yourself)']",19,07,1073 |
3784,58,1027547137075294221,1202760024,Stacy McGaugh,"New paper on the arxiv: <LINK> The EDGES 21cm absorption signal that is impossible in LCDM occurs naturally if there is no dark matter. Also extend the prediction to the dark ages (z~100) where the absorption should again be stronger than possible in LCDM. @MatBethermin I specifically chose the model that fits the CMB up for L<600. It does not fit L>600. LCDM does not fit EDGES, and cannot do so without serious modification. So once again, we are up the sh*t creek without a paddle: DM modifications may not even be the right sh*t creek. @MatBethermin Y’know, that’s exactly what folks said when my prediction for the second peak came true. “It’ll falsify MOND If it gets bigger”, the assumption being it would. It didn’t. Still exactly what I predicted with no CDM. The third peak is bigger than that model predicts, so something @MatBethermin more has to be going on. That falsified the simple ansatz I made, but that had to happen at some level. Lots of people wanted to constrain MOND to the no CDM line (when it itself makes no prediction at all) whereas pretty much anything else was taken as a victory for LCDM. @MatBethermin Now it’s the other way around. Planck constrains LCDM so well it has to be exactly the red line in the figure in my paper. Anything else kills it. It may be indicative that no CDM fits, or that might be a remarkable coincidence again. More interesting to me is how LCDM fares. @MatBethermin That’d be neat: I’d like to think there’s still something new to learn about the universe. If only we’ll listen... @MatBethermin Indeed. I’m not pretending MOND is a complete answer, for all the reasons you give. But there is also something more to it than most scientists seem to appreciate. Trying to put it all together has repeatedly left me up olde proverbial creek. @maximetrebitsch @MatBethermin Of course we want confirmation. But EDGES have done an excellent job, so I’m also reluctant to play blame the observer. I was just mulling over ideas like the one you suggest and... it is really hard to come up with something that helps without messing up the CMB.",https://arxiv.org/abs/1808.02532,"We consider the 21cm absorption signal expected at high redshift in cosmologies with and without non-baryonic cold dark matter. The expansion of the early universe decelerates strongly with dark matter, but approximately coasts without it. This results in a different path length across the epochs when absorption is expected, with the consequence that the absorption is predicted to be a factor of $\sim 2$ greater without dark matter than with it. Observation of such a signal would motivate consideration of extended theories of gravity in lieu of dark matter. ","Predictions for the sky-averaged depth of the 21cm absorption signal at |
high redshift in cosmologies with and without non-baryonic cold dark matter",8,"['New paper on the arxiv: <LINK> The EDGES 21cm absorption signal that is impossible in LCDM occurs naturally if there is no dark matter. Also extend the prediction to the dark ages (z~100) where the absorption should again be stronger than possible in LCDM.', '@MatBethermin I specifically chose the model that fits the CMB up for L<600. It does not fit L>600. LCDM does not fit EDGES, and cannot do so without serious modification. So once again, we are up the sh*t creek without a paddle: DM modifications may not even be the right sh*t creek.', '@MatBethermin Y’know, that’s exactly what folks said when my prediction for the second peak came true. “It’ll falsify MOND If it gets bigger”, the assumption being it would. It didn’t. Still exactly what I predicted with no CDM. The third peak is bigger than that model predicts, so something', '@MatBethermin more has to be going on. That falsified the simple ansatz I made, but that had to happen at some level. Lots of people wanted to constrain MOND to the no CDM line (when it itself makes no prediction at all) whereas pretty much anything else was taken as a victory for LCDM.', '@MatBethermin Now it’s the other way around. Planck constrains LCDM so well it has to be exactly the red line in the figure in my paper. Anything else kills it. It may be indicative that no CDM fits, or that might be a remarkable coincidence again. More interesting to me is how LCDM fares.', '@MatBethermin That’d be neat: I’d like to think there’s still something new to learn about the universe. If only we’ll listen...', '@MatBethermin Indeed. I’m not pretending MOND is a complete answer, for all the reasons you give. But there is also something more to it than most scientists seem to appreciate. Trying to put it all together has repeatedly left me up olde proverbial creek.', '@maximetrebitsch @MatBethermin Of course we want confirmation. But EDGES have done an excellent job, so I’m also reluctant to play blame the observer. I was just mulling over ideas like the one you suggest and... it is really hard to come up with something that helps without messing up the CMB.']",18,08,2101 |
3785,174,1495806095205584896,795089712864051200,Barret Zoph,"Interested in using sparse expert models, but find they are unstable, hard to design or don’t fine-tune well? We address these key issues and train 269B param MoE model (w/ FLOPs of 32B dense model) that improves SOTA on NLP benchmarks liked SuperGLUE. <LINK> <LINK> We do a large-scale study of the quality-stability trade-offs of stability techniques We observe that our router z-loss fixes stability, while slightly improving quality The router z-loss is an auxiliary loss that makes the logits of the router smaller for numerical stability <LINK> We study the fine-tuning of sparse vs dense models The optimal batch sizes and learning rates for sparse vs dense models are very different In certain scenarios wrong values masked any of the pre-training performance improvements of sparse models over the dense models <LINK> We also find sparse models to be incredibly robust to “dropping” tokens during fine-tuning. Sparse models have a fixed batch size ahead of time, so if there is overflow to a specific expert, the token is “dropped” and passed to the next layer unchanged. <LINK> For two models with the same FLOPs per token, we find sparse models to outperform their dense counterpart on a large suite of fine-tuning tasks. <LINK> We also propose a few heuristics and architectural modifications to design Pareto efficient architectures. <LINK> We study how the experts specialize on different tokens and find that they end up semantically specializing to different categories such as punctuation, verbs and proper names. <LINK> We finally combine our improvements and train a sparse model with 269B parameters (FLOP matched to a 32B dense model). This model achieve SOTA on a wide range of NLP tasks: SuperGLUE, XSum, CNN-DM, ANLI R3, ARC-Easy/Challenge, CB WebQA, CB NatQA. <LINK>",http://arxiv.org/abs/2202.08906,"Scale has opened new frontiers in natural language processing -- but at a high cost. In response, Mixture-of-Experts (MoE) and Switch Transformers have been proposed as an energy efficient path to even larger and more capable language models. But advancing the state-of-the-art across a broad set of natural language tasks has been hindered by training instabilities and uncertain quality during fine-tuning. Our work focuses on these issues and acts as a design guide. We conclude by scaling a sparse model to 269B parameters, with a computational cost comparable to a 32B dense encoder-decoder Transformer (Stable and Transferable Mixture-of-Experts or ST-MoE-32B). For the first time, a sparse model achieves state-of-the-art performance in transfer learning, across a diverse set of tasks including reasoning (SuperGLUE, ARC Easy, ARC Challenge), summarization (XSum, CNN-DM), closed book question answering (WebQA, Natural Questions), and adversarially constructed tasks (Winogrande, ANLI R3). 
",ST-MoE: Designing Stable and Transferable Sparse Expert Models,8,"['Interested in using sparse expert models, but find they are unstable, hard to design or don’t fine-tune well?\n\nWe address these key issues and train 269B param MoE model (w/ FLOPs of 32B dense model) that improves SOTA on NLP benchmarks liked SuperGLUE.\n\n<LINK> <LINK>', 'We do a large-scale study of the quality-stability trade-offs of stability techniques\n\nWe observe that our router z-loss fixes stability, while slightly improving quality\n\nThe router z-loss is an auxiliary loss that makes the logits of the router smaller for numerical stability https://t.co/yP8Idybzjo', 'We study the fine-tuning of sparse vs dense models\n\nThe optimal batch sizes and learning rates for sparse vs dense models are very different\n\nIn certain scenarios wrong values masked any of the pre-training performance improvements of sparse models over the dense models https://t.co/zbMWHfZ5G1', 'We also find sparse models to be incredibly robust to “dropping” tokens during fine-tuning. \n\nSparse models have a fixed batch size ahead of time, so if there is overflow to a specific expert, the token is “dropped” and passed to the next layer unchanged. https://t.co/HQwSPufNo2', 'For two models with the same FLOPs per token, we find sparse models to outperform their dense counterpart on a large suite of fine-tuning tasks. https://t.co/DrBLzX5Ij7', 'We also propose a few heuristics and architectural modifications to design Pareto efficient architectures. https://t.co/VIfsTpSEN6', 'We study how the experts specialize on different tokens and find that they end up semantically specializing to different categories such as punctuation, verbs and proper names. https://t.co/aYXT3M5ZY6', 'We finally combine our improvements and train a sparse model with 269B parameters (FLOP matched to a 32B dense model).\n\nThis model achieve SOTA on a wide range of NLP tasks: SuperGLUE, XSum, CNN-DM, ANLI R3, ARC-Easy/Challenge, CB WebQA, CB NatQA. https://t.co/XBxeXzcNWU']",22,02,1792 |
3786,24,1165792253224214528,913238472357437445,Fuminobu TAKAHASHI,"Our new paper appeared on arXiv today. An extremely large number of e-folds is possible in a simple single-field inflation where the potential has a shallow local min around the hilltop. Then, light scalars can reach the Bunch-Davies distribution. <LINK>",https://arxiv.org/abs/1908.08694,"We propose a class of single-field, slow-roll inflation models in which a typical number of $e$-folds can be extremely large. The key point is to introduce a very shallow local minimum near the top of the potential in a hilltop inflation model. In particular, a typical number of $e$-folds is enhanced if classical behavior dominates around the local minimum such that the inflaton probability distribution is drifted to the local minimum as a whole. After the inflaton escapes from the local minimum due to the stochastic dynamics, the ordinary slow-roll inflation follows and it can generate the primordial density perturbation consistent with observation. Interestingly, our scenario inherits the advantages of the old and new inflation: the typical $e$-folds can be extremely large as in the old inflation, and slow-roll inflation naturally follows after the stochastic regime as in the new inflation. In our numerical example, the typical number of $e$-folds can be as large as $10^{10^{10}}$, which is large enough for various light scalars such the QCD axion to reach the Bunch-Davies distribution. ",Stochastic inflation with an extremely large number of $e$-folds,1,"['Our new paper appeared on arXiv today. An extremely large number of e-folds is possible in a simple single-field inflation where the potential has a shallow local min around the hilltop. Then, light scalars can reach the Bunch-Davies distribution. <LINK>']",19,08,254 |
3787,85,1337520529612402689,201350518,Po-Wei Wang,"1/ A new paper on unleashing the power of SDPs to ML problems! This time, we detect communities (clusters) by approximate modularity maximization with an efficient low-cardinality SDP. Joint work with @zicokolter. paper: <LINK> code: <LINK> <LINK> 2/ Core idea: Convert the Kronecker delta to the dot-products between standard basis, relax the domain, and control the cardinality/sparsity in the decomposed space! This leads to an efficient solver that scales linearly to the number of edges, applicable to millions of nodes. <LINK> 3/ The SDP relaxation provides a significant boost over the greedy method, leading to a 30% improvement over the state-of-the-art discrete algorithms, at the same time being faster because requiring less random restarts! Try it at the Colab: <LINK>",https://arxiv.org/abs/2012.02676,"Modularity maximization has been a fundamental tool for understanding the community structure of a network, but the underlying optimization problem is nonconvex and NP-hard to solve. State-of-the-art algorithms like the Louvain or Leiden methods focus on different heuristics to help escape local optima, but they still depend on a greedy step that moves node assignment locally and is prone to getting trapped. In this paper, we propose a new class of low-cardinality algorithm that generalizes the local update to maximize a semidefinite relaxation derived from max-k-cut. This proposed algorithm is scalable, empirically achieves the global semidefinite optimality for small cases, and outperforms the state-of-the-art algorithms in real-world datasets with little additional time cost. From the algorithmic perspective, it also opens a new avenue for scaling-up semidefinite programming when the solutions are sparse instead of low-rank. ",Community detection using fast low-cardinality semidefinite programming,3,"['1/ A new paper on unleashing the power of SDPs to ML problems! This time, we detect communities (clusters) by approximate modularity maximization with an efficient low-cardinality SDP. Joint work with @zicokolter.\n\npaper: <LINK>\ncode: <LINK> <LINK>', '2/ Core idea: Convert the Kronecker delta to the dot-products between standard basis, relax the domain, and control the cardinality/sparsity in the decomposed space! This leads to an efficient solver that scales linearly to the number of edges, applicable to millions of nodes. https://t.co/3FjEESHmCl', '3/ The SDP relaxation provides a significant boost over the greedy method, leading to a 30% improvement over the state-of-the-art discrete algorithms, at the same time being faster because requiring less random restarts!\n\nTry it at the Colab: https://t.co/nBivCCO8Wh']",20,12,781 |
3788,59,1384184301613289478,4747768463,Cam Buzard,"My new paper has been accepted for publication!! It's about how our choice of observation nights might have a lot to do with how well we can see planets - Key point: try for when the star is in the telluric reference frame <LINK> This was originally a quick figure in my first paper, and I'm so glad my advisor wanted to pull it out and dig into it a bit deeper I had some fun with this work and owe thanks to many people, specifically thanks to @kellecruz and #bdnyc for teaching me about stats and the KS test which I relied on heavily here! @abehmard thank you!!! 🥰",https://arxiv.org/abs/2104.07790,"Cross correlation analyses of high resolution spectroscopic data have recently shown great success in directly detecting planetary signals and enabling the characterization of their atmospheres. One such technique aims to observe a system at multiple epochs and combine the measured planetary radial velocities from each epoch into a measurement of the planetary Keplerian orbital velocity $K_p$, constituting a direct detection of the planetary signal. Recent work has shown that in few-epoch ($\sim$5) data sets, unintended structure can arise at a high level, obscuring the planetary detection. In this work, we look to simulations to examine whether there are ways to reduce this structured noise in few-epoch data sets by careful planning of observations. The choice of observation date allows observers to select the primary (stellar) velocity - through a set systemic velocity and chosen barycentric velocity - and the planetary orbital phase, and so we focus on the effects of these two parameters. We find that epochs taken when the primary velocity is near zero, and the stellar lines remain relatively fixed to the telluric rest-frame, greatly reduce the level of structured noise and allow for much stronger planetary detections, on average more than twice the significance of detections made with epochs using randomly selected primary velocities. Following these results, we recommend that observers looking to build up high-resolution multi-epoch data sets target nights when their system has a near-zero primary velocity. ","Primary Velocity and Orbital Phase Effects on Planetary Detectability |
from Small Epoch Number Data Sets",4,"[""My new paper has been accepted for publication!! It's about how our choice of observation nights might have a lot to do with how well we can see planets - Key point: try for when the star is in the telluric reference frame <LINK>"", ""This was originally a quick figure in my first paper, and I'm so glad my advisor wanted to pull it out and dig into it a bit deeper"", 'I had some fun with this work and owe thanks to many people, specifically thanks to @kellecruz and #bdnyc for teaching me about stats and the KS test which I relied on heavily here!', '@abehmard thank you!!! 🥰']",21,04,568 |
3789,219,1512131361682907140,842208793463201792,Wei-Hong Li,"We've released a preprint of our work where we propose a unified look at jointly learning multiple vision tasks and visual domains (many-shot and few-shot learning) through universal representations, a single deep neural network. See <LINK> In the paper, we propose a Universal Representation Learning framework to learn a single universal network for multiple tasks/domains by aligning representations of a single universal network and single-task/domain networks through small capacity task/domain-specific adapters. We show that our method generalizes over multi-task dense prediction tasks, multi-domain many-shot learning, cross-domain few-shot learning. The code will be available at <LINK>",https://arxiv.org/abs/2204.02744,"We propose a unified look at jointly learning multiple vision tasks and visual domains through universal representations, a single deep neural network. Learning multiple problems simultaneously involves minimizing a weighted sum of multiple loss functions with different magnitudes and characteristics and thus results in unbalanced state of one loss dominating the optimization and poor results compared to learning a separate model for each problem. To this end, we propose distilling knowledge of multiple task/domain-specific networks into a single deep neural network after aligning its representations with the task/domain-specific ones through small capacity adapters. We rigorously show that universal representations achieve state-of-the-art performances in learning of multiple dense prediction problems in NYU-v2 and Cityscapes, multiple image classification problems from diverse domains in Visual Decathlon Dataset and cross-domain few-shot learning in MetaDataset. Finally we also conduct multiple analysis through ablation and qualitative studies. ","Universal Representations: A Unified Look at Multiple Task and Domain |
Learning",3,"[""We've released a preprint of our work where we propose a unified look at jointly learning multiple vision tasks and visual domains (many-shot and few-shot learning) through universal representations, a single deep neural network. See <LINK>"", 'In the paper, we propose a Universal Representation Learning framework to learn a single universal network for multiple tasks/domains by aligning representations of a single universal network and single-task/domain networks through small capacity task/domain-specific adapters.', 'We show that our method generalizes over multi-task dense prediction tasks, multi-domain many-shot learning, cross-domain few-shot learning. The code will be available at https://t.co/xMp7YAQR2T']",22,04,696 |
3790,66,1427794393373626368,3874714693,augustus odena,"New paper! <LINK> We use big language models to synthesize computer programs, execute programs, solve math problems, and dialog with humans to iteratively refine code. The models can solve 60% and 81% of the programming and math problems, respectively. A thread: <LINK> First, we evaluate models from 244M to 137B params on a new data-set we created <LINK>. Number of problems solved scales pretty cleanly with model-size. <LINK> Larger models not only solve problems that smaller models can't solve, they also more reliably solve easier problems that smaller models solve less frequently. <LINK> We have a pretty thorough error analysis in the paper, but one thing I thought was especially fun is that the model sometimes “cheats” and hard-codes an answer that passes the tests but does not solve the problem. <LINK> Second, we evaluate whether these models can interact with a human to iteratively refine their outputs. We find that 4 turns of dialog with a human can double the number of problems solved by the model. <LINK> Third, we try (and largely fail) to get language models to ‘execute’ programs. This casts some doubt on the extent to which the models ‘understand’ the code they’re emitting. <LINK> Fourth, we convert a dataset (MathQA) of math word problems into synthesis problems and show that this allows language models to solve a large majority of the problems. <LINK> There’s a bunch of other fun stuff in there, but these are the main things. Thanks to the other lead author @jacobaustin132, and all other authors: @Maxwell_Nye, Maarten Bosma, Henryk Michalewski, @dmdohan, Ellen Jiang, Carrie Cai, Michael Terry, @quocleix, and @RandomlyWalking",https://arxiv.org/abs/2108.07732,"This paper explores the limits of the current generation of large language models for program synthesis in general purpose programming languages. We evaluate a collection of such models (with between 244M and 137B parameters) on two new benchmarks, MBPP and MathQA-Python, in both the few-shot and fine-tuning regimes. Our benchmarks are designed to measure the ability of these models to synthesize short Python programs from natural language descriptions. The Mostly Basic Programming Problems (MBPP) dataset contains 974 programming tasks, designed to be solvable by entry-level programmers. The MathQA-Python dataset, a Python version of the MathQA benchmark, contains 23914 problems that evaluate the ability of the models to synthesize code from more complex text. On both datasets, we find that synthesis performance scales log-linearly with model size. Our largest models, even without finetuning on a code dataset, can synthesize solutions to 59.6 percent of the problems from MBPP using few-shot learning with a well-designed prompt. Fine-tuning on a held-out portion of the dataset improves performance by about 10 percentage points across most model sizes. On the MathQA-Python dataset, the largest fine-tuned model achieves 83.8 percent accuracy. Going further, we study the model's ability to engage in dialog about code, incorporating human feedback to improve its solutions. We find that natural language feedback from a human halves the error rate compared to the model's initial prediction. Additionally, we conduct an error analysis to shed light on where these models fall short and what types of programs are most difficult to generate. Finally, we explore the semantic grounding of these models by fine-tuning them to predict the results of program execution. 
We find that even our best models are generally unable to predict the output of a program given a specific input. ",Program Synthesis with Large Language Models,8,"['New paper! <LINK>\nWe use big language models to synthesize computer programs, execute programs, solve math problems, and dialog with humans to iteratively refine code.\nThe models can solve 60% and 81% of the programming and math problems, respectively. A thread: <LINK>', 'First, we evaluate models from 244M to 137B params on a new data-set we created https://t.co/UvnLquoENP.\nNumber of problems solved scales pretty cleanly with model-size. https://t.co/AnZQd47e4L', ""Larger models not only solve problems that smaller models can't solve, they also more reliably solve easier problems that smaller models solve less frequently. https://t.co/Tflgg6WTC3"", 'We have a pretty thorough error analysis in the paper, but one thing I thought was especially fun is that the model sometimes “cheats” and hard-codes an answer that passes the tests but does not solve the problem. https://t.co/eMOApB0L1W', 'Second, we evaluate whether these models can interact with a human to iteratively refine their outputs. We find that 4 turns of dialog with a human can double the number of problems solved by the model. https://t.co/GNFTEGJ0zB', 'Third, we try (and largely fail) to get language models to ‘execute’ programs. This casts some doubt on the extent to which the models ‘understand’ the code they’re emitting. https://t.co/Nw81suyC9y', 'Fourth, we convert a dataset (MathQA) of math word problems into synthesis problems and show that this allows language models to solve a large majority of the problems. https://t.co/gVtfUEWPw7', 'There’s a bunch of other fun stuff in there, but these are the main things. Thanks to the other lead author @jacobaustin132, and all other authors: @Maxwell_Nye, Maarten Bosma, Henryk Michalewski, @dmdohan, Ellen Jiang, Carrie Cai, Michael Terry, @quocleix, and @RandomlyWalking']",21,08,1664 |
3791,304,1318948567910895620,383903725,Xavier Puig,"Excited to share “Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration”! We propose a challenge where agents need to infer human goals in a household and help perform them efficiently. arxiv: <LINK> code: <LINK> <LINK> We build a realistic multi-agent platform on top of VirtualHome (<LINK>) and design a benchmark with multiple helping agent baselines. We evaluate their performance when collaborating with simulated and real humans in different tasks. <LINK> Joint work with @tianminshu, @ShuangL13799063, Zilin Wang, Josh Tenenbaum, @FidlerSanja and Antonio Torralba.",http://arxiv.org/abs/2010.09890,"In this paper, we introduce Watch-And-Help (WAH), a challenge for testing social intelligence in agents. In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently. To succeed, the AI agent needs to i) understand the underlying goal of the task by watching a single demonstration of the human-like agent performing the same task (social perception), and ii) coordinate with the human-like agent to solve the task in an unseen environment as fast as possible (human-AI collaboration). For this challenge, we build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines. We evaluate the performance of AI agents with the human-like agent as well as with real humans using objective metrics and subjective user ratings. Experimental results demonstrate that the proposed challenge and virtual environment enable a systematic evaluation on the important aspects of machine social intelligence at scale. ","Watch-And-Help: A Challenge for Social Perception and Human-AI |
Collaboration",3,"['Excited to share “Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration”! \nWe propose a challenge where agents need to infer human goals in a household and help perform them efficiently.\narxiv: <LINK>\ncode: <LINK> <LINK>', 'We build a realistic multi-agent platform on top of VirtualHome (https://t.co/wWEpn8jKoZ) and design a benchmark with multiple helping agent baselines. We evaluate their performance when collaborating with simulated and real humans in different tasks. \nhttps://t.co/F9qj9oF981', 'Joint work with @tianminshu, @ShuangL13799063, Zilin Wang, Josh Tenenbaum, @FidlerSanja and Antonio Torralba.']",20,10,595 |
3792,72,1148944337700610048,2180768821,Erik Hoel,"Do networks have an intrinsic scale? A new paper by myself & @jkbren on causal emergence shows they do, and how ""scale-free"" networks are the critical point for developing scale. I'm particularly proud of this paper, which is packed with interesting stuff. <LINK> <LINK> Some of this interesting stuff: how random networks contain a fixed amount of effective information, how to construct accurate macroscales using higher-order dependencies, biological networks being more causally emergent than technological networks (continued) more interesting stuff: random networks contain effectively no causal emergence, causal emergence is basically a clustering algorithm for noise in a network, effective information grows only when structure does, how to categorize all networks in terms of determinism/degeneracy... ... and I could go on. Possibly this could have been several papers but I'm happy to have something so dense and enjoyable and hope that others find these techniques to accurately model higher scales in networks useful. This is a preprint - it's under submission at a good journal",https://arxiv.org/abs/1907.03902,"The connectivity of a network contains information about the relationships between nodes, which can denote interactions, associations, or dependencies. We show that this information can be analyzed by measuring the uncertainty (and certainty) contained in paths along nodes and links in a network. Specifically, we derive from first principles a measure known as effective information and describe its behavior in common network models. Networks with higher effective information contain more information in the relationships between nodes. We show how subgraphs of nodes can be grouped into macro-nodes, reducing the size of a network while increasing its effective information (a phenomenon known as causal emergence). We find that informative higher scales are common in simulated and real networks across biological, social, informational, and technological domains. These results show that the emergence of higher scales in networks can be directly assessed and that these higher scales offer a way to create certainty out of uncertainty. ",The emergence of informative higher scales in complex networks,4,"['Do networks have an intrinsic scale? A new paper by myself & @jkbren on causal emergence shows they do, and how ""scale-free"" networks are the critical point for developing scale. I\'m particularly proud of this paper, which is packed with interesting stuff. <LINK> <LINK>', 'Some of this interesting stuff: how random networks contain a fixed amount of effective information, how to construct accurate macroscales using higher-order dependencies, biological networks being more causally emergent than technological networks (continued)', 'more interesting stuff: random networks contain effectively no causal emergence, causal emergence is basically a clustering algorithm for noise in a network, effective information grows only when structure does, how to categorize all networks in terms of determinism/degeneracy...', ""... and I could go on. Possibly this could have been several papers but I'm happy to have something so dense and enjoyable and hope that others find these techniques to accurately model higher scales in networks useful. This is a preprint - it's under submission at a good journal""]",19,07,1093 |
3793,56,1384377593755996162,513512479,Guillermo Navas-Palencia,"Happy to share my new paper, ""Optimal Counterfactual Explanations for Scorecard modelling"". This work presents MIPs to generate multiple counterfactual explanations with diversity constraints simultaneously for scorecard models. <LINK> #orms #MachineLearning #XAI The presented algorithm will be part of OptBinning 0.11.0. Providing tools to enhance the explainability of scorecard models with a binary or continuous target.",https://arxiv.org/abs/2104.08619,"Counterfactual explanations is one of the post-hoc methods used to provide explainability to machine learning models that have been attracting attention in recent years. Most examples in the literature, address the problem of generating post-hoc explanations for black-box machine learning models after the rejection of a loan application. In contrast, in this work, we investigate mathematical programming formulations for scorecard models, a type of interpretable model predominant within the banking industry for lending. The proposed mixed-integer programming formulations combine objective functions to ensure close, realistic and sparse counterfactuals using multi-objective optimization techniques for a binary, probability or continuous outcome. Moreover, we extend these formulations to generate multiple optimal counterfactuals simultaneously while guaranteeing diversity. Experiments on two real-world datasets confirm that the presented approach can generate optimal diverse counterfactuals addressing desired properties with assumable CPU times for practice use. ",Optimal Counterfactual Explanations for Scorecard modelling,2,"['Happy to share my new paper, ""Optimal Counterfactual Explanations for Scorecard modelling"". This work presents MIPs to generate multiple counterfactual explanations with diversity constraints simultaneously for scorecard models. <LINK>\n#orms #MachineLearning #XAI', 'The presented algorithm will be part of OptBinning 0.11.0. Providing tools to enhance the explainability of scorecard models with a binary or continuous target.']",21,04,424 |
3794,50,1363847490190135297,114485232,Jimmy Lin,"We've released a new version (v0.11.0.0) of our Pyserini Python toolkit to support replicable IR research, now providing first-stage retrieval for sparse, dense, and hybrid representations. <LINK> Our new arXiv paper provides an overview: <LINK> Also of interest in the paper is the culture of replicability we've been trying to cultivate in the group: both social processes (dog-fooding, replicability as a shared norm, properly-aligned incentive structures) as well as technical infrastructure (regression testing).",https://arxiv.org/abs/2102.10073,"Pyserini is an easy-to-use Python toolkit that supports replicable IR research by providing effective first-stage retrieval in a multi-stage ranking architecture. Our toolkit is self-contained as a standard Python package and comes with queries, relevance judgments, pre-built indexes, and evaluation scripts for many commonly used IR test collections. We aim to support, out of the box, the entire research lifecycle of efforts aimed at improving ranking with modern neural approaches. In particular, Pyserini supports sparse retrieval (e.g., BM25 scoring using bag-of-words representations), dense retrieval (e.g., nearest-neighbor search on transformer-encoded representations), as well as hybrid retrieval that integrates both approaches. This paper provides an overview of toolkit features and presents empirical results that illustrate its effectiveness on two popular ranking tasks. We also describe how our group has built a culture of replicability through shared norms and tools that enable rigorous automated testing. ","Pyserini: An Easy-to-Use Python Toolkit to Support Replicable IR Research with Sparse and Dense Representations",2,"[""We've released a new version (v0.11.0.0) of our Pyserini Python toolkit to support replicable IR research, now providing first-stage retrieval for sparse, dense, and hybrid representations. <LINK> Our new arXiv paper provides an overview: <LINK>"", ""Also of interest in the paper is the culture of replicability we've been trying to cultivate in the group: both social processes (dog-fooding, replicability as a shared norm, properly-aligned incentive structures) as well as technical infrastructure (regression testing).""]",21,02,517 |
3795,10,1300713906575376384,944291984675614721,Tobias de Jong,"New paper on arXiv: <LINK> 😁 We analyze the heterogeneity of twisted bilayer #graphene near the #magicangle, extracting the relative displacement of the layers from the #moire pattern as observed in #STM. <LINK> We worked with the Milan Allan group (also @LeidenPhysics ) to extract these properties using geometric phase analysis from moire patterns as measured on devices made in Barcelona @ICFOnians by the Efetov group. The relevant properties here are the relative strain between the two lattices, and the effective twist angle, which both influence the band structure of #superconducting #magic angle #twisted bilayer #graphene significantly (at least according to theory). The cool part to me here was that the moire pattern magnifies any shift of the two lattices by a large ( ~50 at magic angle) factor, but does so with a literal twist of about 90 degrees: horizontal shifts of moire pattern indicate vertical shifts of the lattices. I should make an animation to visualize this.😅",https://arxiv.org/abs/2008.13766,"We introduce a new method to continuously map inhomogeneities of a moir\'e lattice and apply it to large-area topographic images we measure on open-device twisted bilayer graphene (TBG). We show that the variation in the twist angle of a TBG device, which is frequently conjectured to be the reason for differences between devices with a supposed similar twist angle, is about 0.08{\deg} around the average of 2.02{\deg} over areas of several hundred nm, comparable to devices encapsulated between hBN slabs. We distinguish between an effective twist angle and local anisotropy and relate the latter to heterostrain. Our results imply that for our devices, twist angle heterogeneity has a roughly equal effect to the electronic structure as local strain. The method introduced here is applicable to results from different imaging techniques, and on different moir\'e materials. ","Measuring local moir\'e lattice heterogeneity of twisted bilayer graphene",5,"['New paper on arXiv: <LINK> 😁 We analyze the heterogeneity of twisted bilayer #graphene near the #magicangle, extracting the relative displacement of the layers from the #moire pattern as observed in #STM. <LINK>', 'We worked with the Milan Allan group (also @LeidenPhysics ) to extract these properties using geometric phase analysis from moire patterns as measured on devices made in Barcelona @ICFOnians by the Efetov group.', 'The relevant properties here are the relative strain between the two lattices, and the effective twist angle, which both influence the band structure of #superconducting #magic angle #twisted bilayer #graphene significantly (at least according to theory).', 'The cool part to me here was that the moire pattern magnifies any shift of the two lattices by a large ( ~50 at magic angle) factor, but does so with a literal twist of about 90 degrees: horizontal shifts of moire pattern indicate vertical shifts of the lattices.', 'I should make an animation to visualize this.😅']",20,08,990 |
3796,306,1318703939877957633,97939183,Yuandong Tian,"<LINK>. 3-min video for our theoretical framework on self-supervised methods (SimCLR/BYOL) with deep ReLU networks. We find 1) an analytic form of weight update per layer, 2) how feature emerges, 3) why BYOL works without negative pairs. <LINK> <LINK>",https://arxiv.org/abs/2010.00578,"We propose a novel theoretical framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks (e.g., SimCLR). First, we prove that in each SGD update of SimCLR with various loss functions, including simple contrastive loss, soft Triplet loss and InfoNCE loss, the weights at each layer are updated by a \emph{covariance operator} that specifically amplifies initial random selectivities that vary across data samples but survive averages over data augmentations. To further study what role the covariance operator plays and which features are learned in such a process, we model data generation and augmentation processes through a \emph{hierarchical latent tree model} (HLTM) and prove that the hidden neurons of deep ReLU networks can learn the latent variables in HLTM, despite the fact that the network receives \emph{no direct supervision} from these unobserved latent variables. This leads to a provable emergence of hierarchical features through the amplification of initially random selectivities through contrastive SSL. Extensive numerical studies justify our theoretical findings. Code is released in this https URL ",Understanding Self-supervised Learning with Dual Deep Networks,1,"['<LINK>. 3-min video for our theoretical framework on self-supervised methods (SimCLR/BYOL) with deep ReLU networks. We find 1) an analytic form of weight update per layer, 2) how feature emerges, 3) why BYOL works without negative pairs. <LINK> <LINK>']",20,10,251 |
3797,4,1091272386098487302,1020088099,Umberto Picchini,"new neural network architecture to automatically learn summaries for approximate Bayesian computation (ABC). Specifically designed for Markov data, works very well! See thread by @samuel_wiqvist , with @pamattei and @jesfrellsen. Paper at <LINK> Love this team! <LINK>",https://arxiv.org/abs/1901.10230,"We present a novel family of deep neural architectures, named partially exchangeable networks (PENs) that leverage probabilistic symmetries. By design, PENs are invariant to block-switch transformations, which characterize the partial exchangeability properties of conditionally Markovian processes. Moreover, we show that any block-switch invariant function has a PEN-like representation. The DeepSets architecture is a special case of PEN and we can therefore also target fully exchangeable data. We employ PENs to learn summary statistics in approximate Bayesian computation (ABC). When comparing PENs to previous deep learning methods for learning summary statistics, our results are highly competitive, both considering time series and static models. Indeed, PENs provide more reliable posterior samples even when using less training data. ","Partially Exchangeable Networks and Architectures for Learning Summary Statistics in Approximate Bayesian Computation",1,"['new neural network architecture to automatically learn summaries for approximate Bayesian computation (ABC). Specifically designed for Markov data, works very well! See thread by @samuel_wiqvist , with @pamattei and @jesfrellsen. Paper at <LINK> Love this team! <LINK>']",19,01,268 |
3798,236,1371986590105399297,969190164764372993,Xudong Sun,"We have two student papers out on arXiv today! The first is from Anna Payne (<LINK>). We studied EUV dimming associated with flux emergence, which is termed “emerging dimming”. We used AIA and HMI data to probe its origin. 1/5 <LINK> The dimming occurs only in 171 A, and coincides with brightening in 211 A. We performed DEM analysis on 18 events. The amount of sub-MK plasma decreases, and the 1-2 MK plasma increases. The changes are correlated over 8 orders of magnitude! 2/5 <LINK> We also look at the magnetic fields. The quiet-Sun photospheric field in the dimming region doesn’t change much. However, a potential field model shows that they are now connected to the emerged active region. 3/5 <LINK> Because all regions are quiet-Sun like with no access to open field, we think the dimming is caused by cool plasma being heated and moving out of the 171 A temperature range rather than outflows. 4/5 We conclude that reconnection between the quiet Sun and emerging AR heats the corona and creates the “emerging dimming”. This seems somewhat different from the dark moats around well formed ARs in 171 A. 5/5",https://arxiv.org/abs/2103.09087,"Emerging dimming occurs in isolated solar active regions (ARs) during the early stages of magnetic flux emergence. Observed by the Atmospheric Imaging Assembly, it features a rapid decrease in extreme-ultraviolet (EUV) emission in the 171 \r{A} channel images, and a simultaneous increase in the 211 \r{A} images. Here, we analyze the coronal thermodynamic and magnetic properties to probe its physical origin. We calculate the time-dependent differential emission measures for a sample of 18 events between 2010 and 2012. The emission measure (EM) decrease in the temperature range $5.7 \le \log_{10}T \le 5.9$ is well correlated with the EM increase in $6.2 \le \log_{10}T \le 6.4$ over eight orders of magnitude. This suggests that the coronal plasma is being heated from the quiet-Sun, sub-MK temperature to 1-2 MK, more typical for ARs. Potential field extrapolation indicates significant change in the local magnetic connectivity: the dimming region is now linked to the newly emerged flux via longer loops. We conclude that emerging dimming is likely caused by coronal heating episodes, powered by reconnection between the emerging and the ambient magnetic fields. ",Emerging Dimming as Coronal Heating Episodes,5,"['We have two student papers out on arXiv today! The first is from Anna Payne (<LINK>). We studied EUV dimming associated with flux emergence, which is termed “emerging dimming”. We used AIA and HMI data to probe its origin. 1/5 <LINK>', 'The dimming occurs only in 171 A, and coincides with brightening in 211 A. We performed DEM analysis on 18 events. The amount of sub-MK plasma decreases, and the 1-2 MK plasma increases. The changes are correlated over 8 orders of magnitude! 2/5 https://t.co/NbqF8SHGkA', 'We also look at the magnetic fields. The quiet-Sun photospheric field in the dimming region doesn’t change much. However, a potential field model shows that they are now connected to the emerged active region. 3/5 https://t.co/3Ukl6N9OaD', 'Because all regions are quiet-Sun like with no access to open field, we think the dimming is caused by cool plasma being heated and moving out of the 171 A temperature range rather than outflows. 4/5', 'We conclude that reconnection between the quiet Sun and emerging AR heats the corona and creates the “emerging dimming”. 
This seems somewhat different from the dark moats around well formed ARs in 171 A. 5/5']",21,03,1115 |
3799,166,1474119561129742345,1339691759719342085,Sebastian Will,Check out our latest preprint - if you have ever wondered whether carbon molecules can be laser cooled this paper may offer some answers. Thanks so much to Niccolo Bigagli for leading this study that is a bit more out of the box than what we usually do <LINK> <LINK> Hope you enjoy it and let us know your thoughts!,https://arxiv.org/abs/2112.10745,We report on a scheme for laser cooling of $^{12}$C$_2$. We have calculated the branching ratios for cycling and repumping transitions and calculated the number of photon scatterings required to achieve deflection and laser cooling of a beam of $C_2$ molecules under realistic experimental conditions. Our results demonstrate that C$_2$ cooling using the Swan ($d^3\Pi_\text{g} \leftrightarrow a^3\Pi_\text{u}$) and Duck ($d^3\Pi_\text{g} \leftrightarrow c^3\Sigma_\text{u}^+$) bands is achievable via techniques similar to state-of-the-art molecular cooling experiments. The Phillips ($A^1\Pi_\text{u} \leftrightarrow X^1\Sigma_\text{g}^+$) and Ballik-Ramsay ($b^3\Sigma_\text{g}^- \leftrightarrow a^3\Pi_\text{u}$) bands offer the potential for narrow-line cooling. This work opens up a path to cooling of molecules with carbon-carbon bonds and may pave the way toward quantum control of organic molecules. ,Laser Cooling Scheme for the Carbon Dimer ($^{12}$C$_2$),2,"['Check out our latest preprint - if you have ever wondered whether carbon molecules can be laser cooled this paper may offer some answers. Thanks so much to Niccolo Bigagli for leading this study that is a bit more out of the box than what we usually do <LINK> <LINK>', 'Hope you enjoy it and let us know your thoughts!']",21,12,315 |
3800,113,1324321597243740160,430217063,Dúalta Ó Fionnagáin,New paper was recently accepted to #MNRAS and is up on #arxiv now. We examined how the wind of Lambda Andromedae is driven (coronal or Alfvén waves) using the batsrus/awsom code. Our results are constrained with radio observations! <LINK> <LINK> Plots showing the average temperature and density distribution of some of our different models (there are a few more models in the paper) <LINK>,https://arxiv.org/abs/2011.02406,"We investigate the wind of lambda And, a solar-mass star that has evolved off the main sequence becoming a sub-giant. We present spectropolarimetric observations and use them to reconstruct the surface magnetic field of lambda And. Although much older than our Sun, this star exhibits a stronger (reaching up to 83 G) large-scale magnetic field, which is dominated by the poloidal component. To investigate the wind of lambda And, we use the derived magnetic map to simulate two stellar wind scenarios, namely a polytropic wind (thermally-driven) and an Alfven-wave driven wind with turbulent dissipation. From our 3D magnetohydrodynamics simulations, we calculate the wind thermal emission and compare it to previously published radio observations and more recent VLA observations, which we present here. These observations show a basal sub-mJy quiescent flux level at ~5 GHz and, at epochs, a much larger flux density (>37 mJy), likely due to radio flares. By comparing our model results with the radio observations of lambda And, we can constrain its mass-loss rate Mdot. There are two possible conclusions. 1) Assuming the quiescent radio emission originates from the stellar wind, we conclude that lambda And has Mdot ~ 3e-9 Msun/yr, which agrees with the evolving mass-loss rate trend for evolved solar-mass stars. 2) Alternatively, if the quiescent emission does not originate from the wind, our models can only place an upper limit on mass-loss rates, indicating that Mdot <~ 3e-9 Msun/yr. ",Lambda And: A post-main sequence wind from a solar-mass star,2,"['New paper was recently accepted to #MNRAS and is up on #arxiv now. We examined how the wind of Lambda Andromedae is driven (coronal or Alfvén waves) using the batsrus/awsom code. Our results are constrained with radio observations!\n\n<LINK> <LINK>', 'Plots showing the average temperature and density distribution of some of our different models (there are a few more models in the paper) https://t.co/2LWAQ0HWwx']",20,11,390 |
3801,70,1079931577990172673,1030366876453371904,Rowan McAllister,"How should robots react to strange new observations x? We find projecting out-of-distribution x to uncertain in-distribution x using generative models can improve already-trained models, e.g. collision predictors. With Gregory Kahn, @jeffclune, @svlevine: <LINK> <LINK>",https://arxiv.org/abs/1812.10687,"Deep learning provides a powerful tool for machine perception when the observations resemble the training data. However, real-world robotic systems must react intelligently to their observations even in unexpected circumstances. This requires a system to reason about its own uncertainty given unfamiliar, out-of-distribution observations. Approximate Bayesian approaches are commonly used to estimate uncertainty for neural network predictions, but can struggle with out-of-distribution observations. Generative models can in principle detect out-of-distribution observations as those with a low estimated density. However, the mere presence of an out-of-distribution input does not by itself indicate an unsafe situation. In this paper, we present a method for uncertainty-aware robotic perception that combines generative modeling and model uncertainty to cope with uncertainty stemming from out-of-distribution states. Our method estimates an uncertainty measure about the model's prediction, taking into account an explicit (generative) model of the observation distribution to handle out-of-distribution inputs. This is accomplished by probabilistically projecting observations onto the training distribution, such that out-of-distribution inputs map to uncertain in-distribution observations, which in turn produce uncertain task-related predictions, but only if task-relevant parts of the image change. We evaluate our method on an action-conditioned collision prediction task with both simulated and real data, and demonstrate that our method of projecting out-of-distribution observations improves the performance of four standard Bayesian and non-Bayesian neural network approaches, offering more favorable trade-offs between the proportion of time a robot can remain autonomous and the proportion of impending crashes successfully avoided. ","Robustness to Out-of-Distribution Inputs via Task-Aware Generative Uncertainty",1,"['How should robots react to strange new observations x? We find projecting out-of-distribution x to uncertain in-distribution x using generative models can improve already-trained models, e.g. collision predictors. With Gregory Kahn, @jeffclune, @svlevine: <LINK> <LINK>']",18,12,269 |
3802,186,1375968978250567681,3877821072,Shuai Tang,"Check out our new paper on linearising neural networks for fast adaptation. paper: <LINK> code: <LINK> 1) gradients w.r.t. params of a pretrained neural network as features for data samples. 2) degenerated GP as the prediction model with uncertainty estimation for a new task the hardest part is to make the linear system efficient. wesley made it possible through his novel implementation of scalable Fisher vector product coauthors made this pic to illustrate the high-level idea :D @ wesley (well, he is not twitter) @andrewgwils @pgmoren @adamianou <LINK>",https://arxiv.org/abs/2103.01439,"The inductive biases of trained neural networks are difficult to understand and, consequently, to adapt to new settings. We study the inductive biases of linearizations of neural networks, which we show to be surprisingly good summaries of the full network functions. Inspired by this finding, we propose a technique for embedding these inductive biases into Gaussian processes through a kernel designed from the Jacobian of the network. In this setting, domain adaptation takes the form of interpretable posterior inference, with accompanying uncertainty estimation. This inference is analytic and free of local optima issues found in standard techniques such as fine-tuning neural network weights to a new task. We develop significant computational speed-ups based on matrix multiplies, including a novel implementation for scalable Fisher vector products. Our experiments on both image classification and regression demonstrate the promise and convenience of this framework for transfer learning, compared to neural network fine-tuning. Code is available at this https URL ",Fast Adaptation with Linearized Neural Networks,4,"['Check out our new paper on linearising neural networks for fast adaptation. paper: <LINK> code: <LINK>', '1) gradients w.r.t. params of a pretrained neural network as features for data samples. \n2) degenerated GP as the prediction model with uncertainty estimation for a new task', 'the hardest part is to make the linear system efficient. wesley made it possible through his novel implementation of scalable Fisher vector product', 'coauthors made this pic to illustrate the high-level idea :D @ wesley (well, he is not twitter) @andrewgwils @pgmoren @adamianou https://t.co/tqBx2sfeeV']",21,03,559 |
3803,4,1312511129013297154,3245312065,Xuezhe Ma (Max),"Check our new optimizer, which outperforms SGD and variants of Adam on both convergence speed and generalization. paper: <LINK> code: <LINK> My last work done @LTIatCMU ? @USC_ISI @USCViterbi @CSatUSC <LINK> For image classification (1st pic), we used ResNet-110 on CIFAR-10 and ResNeXt-50 on ImageNet. For language modeling (2nd pic and 1st table), we used two-layer LSTM on one billion words. For NMT (2nd table), we trained Transformer-base models on WMT-14 EN-DE.",https://arxiv.org/abs/2009.13586,"In this paper, we introduce Apollo, a quasi-Newton method for nonconvex stochastic optimization, which dynamically incorporates the curvature of the loss function by approximating the Hessian via a diagonal matrix. Importantly, the update and storage of the diagonal approximation of Hessian is as efficient as adaptive first-order optimization methods with linear complexity for both time and memory. To handle nonconvexity, we replace the Hessian with its rectified absolute value, which is guaranteed to be positive-definite. Experiments on three tasks of vision and language show that Apollo achieves significant improvements over other stochastic optimization methods, including SGD and variants of Adam, in term of both convergence speed and generalization performance. The implementation of the algorithm is available at this https URL ","Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization",2,"['Check our new optimizer, which outperforms SGD and variants of Adam on both convergence speed and generalization.\npaper: <LINK>\ncode: <LINK>\n\nMy last work done @LTIatCMU ?\n@USC_ISI @USCViterbi @CSatUSC <LINK>', 'For image classification (1st pic), we used ResNet-110 on CIFAR-10 and ResNeXt-50 on ImageNet.\nFor language modeling (2nd pic and 1st table), we used two-layer LSTM on one billion words.\nFor NMT (2nd table), we trained Transformer-base models on WMT-14 EN-DE.']",20,09,467 |
3804,84,1425784358154158082,864555701783474179,julesh,"New preprint! “Composing games into complex institutions” with Seth Frey, Josh Tan and Philipp Zahn This is a general-audience social science paper, summarising our explorations of using open games to think about institutions and governance <LINK> <LINK> Writing a research paper on Google Docs has been a new one for me",https://arxiv.org/abs/2108.05318,"Game theory is used by all behavioral sciences, but its development has long centered around tools for relatively simple games and toy systems, such as the economic interpretation of equilibrium outcomes. Our contribution, compositional game theory, permits another approach of equally general appeal: the high-level design of large games for expressing complex architectures and representing real-world institutions faithfully. Compositional game theory, grounded in the mathematics underlying programming languages, and introduced here as a general computational framework, increases the parsimony of game representations with abstraction and modularity, accelerates search and design, and helps theorists across disciplines express real-world institutional complexity in well-defined ways. ",Composing games into complex institutions,2,"['New preprint!\n\n“Composing games into complex institutions” with Seth Frey, Josh Tan and Philipp Zahn\n\nThis is a general-audience social science paper, summarising our explorations of using open games to think about institutions and governance\n<LINK> <LINK>', 'Writing a research paper on Google Docs has been a new one for me']",21,08,320 |
3805,162,1476519046816448514,1285579351950598144,Karolina Stanczak,"In ""A Survey on Gender Bias in Natural Language Processing"" with @IAugenstein we present a study of 304 papers on gender bias in NLP. Despite the growing interest, we find 4 major limitations and see overcoming them as crucial for future research. <LINK> #NLProc <LINK>",https://arxiv.org/abs/2112.14168,"Language can be used as a means of reproducing and enforcing harmful stereotypes and biases and has been analysed as such in numerous research. In this paper, we present a survey of 304 papers on gender bias in natural language processing. We analyse definitions of gender and its categories within social sciences and connect them to formal definitions of gender bias in NLP research. We survey lexica and datasets applied in research on gender bias and then compare and contrast approaches to detecting and mitigating gender bias. We find that research on gender bias suffers from four core limitations. 1) Most research treats gender as a binary variable neglecting its fluidity and continuity. 2) Most of the work has been conducted in monolingual setups for English or other high-resource languages. 3) Despite a myriad of papers on gender bias in NLP methods, we find that most of the newly developed algorithms do not test their models for bias and disregard possible ethical considerations of their work. 4) Finally, methodologies developed in this line of research are fundamentally flawed covering very limited definitions of gender bias and lacking evaluation baselines and pipelines. We suggest recommendations towards overcoming these limitations as a guide for future research. ",A Survey on Gender Bias in Natural Language Processing,1,"['In ""A Survey on Gender Bias in Natural Language Processing"" with @IAugenstein we present a study of 304 papers on gender bias in NLP. Despite the growing interest, we find 4 major limitations and see overcoming them as crucial for future research.\n\n<LINK>\n#NLProc <LINK>']",21,12,269 |
3806,58,1440831535788093447,1129375875856773120,Efrat Shimron,"Check out our new paper - Subtle Inverse Crimes! This work shows why naïve usage of open datasets leads to overly-optimistic performance of Compressed Sensing, Dictionary Learning and Deep Learning algorithms. Paper: <LINK> co-authors @jtsense1 @KewangKe @sm313 <LINK> <LINK>",https://arxiv.org/abs/2109.08237,"While open databases are an important resource in the Deep Learning (DL) era, they are sometimes used ""off-label"": data published for one task are used for training algorithms for a different one. This work aims to highlight that in some cases, this common practice may lead to biased, overly-optimistic results. We demonstrate this phenomenon for inverse problem solvers and show how their biased performance stems from hidden data preprocessing pipelines. We describe two preprocessing pipelines typical of open-access databases and study their effects on three well-established algorithms developed for Magnetic Resonance Imaging (MRI) reconstruction: Compressed Sensing (CS), Dictionary Learning (DictL), and DL. In this large-scale study we performed extensive computations. Our results demonstrate that the CS, DictL and DL algorithms yield systematically biased results when naively trained on seemingly-appropriate data: the Normalized Root Mean Square Error (NRMSE) improves consistently with the preprocessing extent, showing an artificial increase of 25%-48% in some cases. Since this phenomenon is generally unknown, biased results are sometimes published as state-of-the-art; we refer to that as subtle data crimes. This work hence raises a red flag regarding naive off-label usage of Big Data and reveals the vulnerability of modern inverse problem solvers to the resulting bias. ","Subtle Data Crimes: Naively training machine learning algorithms could lead to overly-optimistic results",1,"['Check out our new paper - Subtle Inverse Crimes!\n\nThis work shows why naïve usage of open datasets leads to overly-optimistic performance of Compressed Sensing, Dictionary Learning and Deep Learning algorithms.\n\nPaper: <LINK>\nco-authors @jtsense1 @KewangKe @sm313 <LINK> <LINK>']",21,09,275 |
3807,108,1491036828421718021,17520295,Ilija Bogunovic,"Our new paper presents a Robust Phased Elimination Algorithm for Corruption-Tolerant Gaussian Process Bandits. Paper <LINK> With: Zihan Li, @arkrause, @j_m_scarlett The key idea behind our algorithm is to incorporate a rare switching, along with a novel robust estimator, enlarged confidence bounds, and a minimal number of plays of each selected action. This led to significantly tighter regret bounds for several commonly-considered kernels. <LINK> The paper contains the first empirical study of robustness in the corrupted kernelized bandit setting. The key finding is that our algorithm is robust against a variety of adversarial attacks. <LINK>",https://arxiv.org/abs/2202.01850,"We consider the sequential optimization of an unknown, continuous, and expensive to evaluate reward function, from noisy and adversarially corrupted observed rewards. When the corruption attacks are subject to a suitable budget $C$ and the function lives in a Reproducing Kernel Hilbert Space (RKHS), the problem can be posed as corrupted Gaussian process (GP) bandit optimization. We propose a novel robust elimination-type algorithm that runs in epochs, combines exploration with infrequent switching to select a small subset of actions, and plays each action for multiple time instants. Our algorithm, Robust GP Phased Elimination (RGP-PE), successfully balances robustness to corruptions with exploration and exploitation such that its performance degrades minimally in the presence (or absence) of adversarial corruptions. When $T$ is the number of samples and $\gamma_T$ is the maximal information gain, the corruption-dependent term in our regret bound is $O(C \gamma_T^{3/2})$, which is significantly tighter than the existing $O(C \sqrt{T \gamma_T})$ for several commonly-considered kernels. We perform the first empirical study of robustness in the corrupted GP bandit setting, and show that our algorithm is robust against a variety of adversarial attacks. ","A Robust Phased Elimination Algorithm for Corruption-Tolerant Gaussian Process Bandits",3,"['Our new paper presents a Robust Phased Elimination Algorithm for Corruption-Tolerant Gaussian Process Bandits.\n\nPaper <LINK>\nWith: Zihan Li, @arkrause, @j_m_scarlett', 'The key idea behind our algorithm is to incorporate a rare switching, along with a novel robust estimator, enlarged confidence bounds, and a minimal number of plays of each selected action.\n\nThis led to significantly tighter regret bounds for several commonly-considered kernels. https://t.co/h039j5HxJn', 'The paper contains the first empirical study of robustness in the corrupted kernelized bandit setting. The key finding is that our algorithm is robust against a variety of adversarial attacks. https://t.co/hbUBnZ17BK']",22,02,650 |
3808,31,1242919852634849280,989251872107085824,Quoc Le,New paper: Meta Pseudo Labels Self-training has a pre-trained teacher to generate pseudo labels to train a student. Here we use the student’s performance to meta-train the teacher to generate better pseudo labels. Works well on ImageNet 10%. Link: <LINK> <LINK> This work continues our efforts on semi-supervised learning such as UDA: <LINK> MixMatch: <LINK> FixMatch: <LINK> Noisy Student: <LINK> etc. Joint work with @hieupham789 @QizheXie @ZihangDai Some people pointed out some bugs with the figure. We fixed the comparisons and the updated figure is below. <LINK>,https://arxiv.org/abs/2003.10580,"We present Meta Pseudo Labels, a semi-supervised learning method that achieves a new state-of-the-art top-1 accuracy of 90.2% on ImageNet, which is 1.6% better than the existing state-of-the-art. Like Pseudo Labels, Meta Pseudo Labels has a teacher network to generate pseudo labels on unlabeled data to teach a student network. However, unlike Pseudo Labels where the teacher is fixed, the teacher in Meta Pseudo Labels is constantly adapted by the feedback of the student's performance on the labeled dataset. As a result, the teacher generates better pseudo labels to teach the student. Our code will be available at this https URL ",Meta Pseudo Labels,3,"['New paper: Meta Pseudo Labels\n\nSelf-training has a pre-trained teacher to generate pseudo labels to train a student. Here we use the student’s performance to meta-train the teacher to generate better pseudo labels. Works well on ImageNet 10%.\n\nLink: <LINK> <LINK>', 'This work continues our efforts on semi-supervised learning such as\n\nUDA: https://t.co/J74Nn4i8no\nMixMatch: https://t.co/34ztIFttUQ\nFixMatch: https://t.co/3qTUVbPO0N\nNoisy Student: https://t.co/ZYDaef6sdp\netc.\n\nJoint work with @hieupham789 @QizheXie @ZihangDai', 'Some people pointed out some bugs with the figure. We fixed the comparisons and the updated figure is below. https://t.co/onP3mr9iS7']",20,03,568 |
3809,234,1372907428493283336,1359059238,Dima Damen,"What is wrong with video retrieval benchmarks? Our #CVPR2021 work now on ArXiv <LINK> w M Wray @doughty_hazel Prior works are based on instance-based assumption. We propose to rank videos by their semantic similarity, with multiple videos being equally relevant. <LINK> Analysing 3 common benchmarks (MSR-VTT, YouCook2 &EPIC-KITCHENS), that consider the corresponding caption (in bold) as correct, with many equally similar captions deemed irrelevant. a method that potentially randomly gets the bold captions higher in the ranking is considered SoTA <LINK> We propose four proxies for semantic similarity in large-scale benchmarks, without any additional annotations. We demonstrate that methods do not improve over simple baselines when semantic similarity is incorporated. <LINK> Watch 3-min intro at: <LINK>",https://arxiv.org/abs/2103.10095,"Current video retrieval efforts all found their evaluation on an instance-based assumption, that only a single caption is relevant to a query video and vice versa. We demonstrate that this assumption results in performance comparisons often not indicative of models' retrieval capabilities. We propose a move to semantic similarity video retrieval, where (i) multiple videos/captions can be deemed equally relevant, and their relative ranking does not affect a method's reported performance and (ii) retrieved videos/captions are ranked by their similarity to a query. We propose several proxies to estimate semantic similarities in large-scale retrieval datasets, without additional annotations. Our analysis is performed on three commonly used video retrieval datasets (MSR-VTT, YouCook2 and EPIC-KITCHENS). ",On Semantic Similarity in Video Retrieval,4,"['What is wrong with video retrieval benchmarks?\nOur #CVPR2021 work now on ArXiv\n<LINK>\nw M Wray @doughty_hazel \nPrior works are based on instance-based assumption.\nWe propose to rank videos by their semantic similarity, with multiple videos being equally relevant. <LINK>', 'Analysing 3 common benchmarks (MSR-VTT, YouCook2 &EPIC-KITCHENS), that consider the corresponding caption (in bold) as correct, with many equally similar captions deemed irrelevant. a method that potentially randomly gets the bold captions higher in the ranking is considered SoTA https://t.co/rvXsTBDGL1', 'We propose four proxies for semantic similarity in large-scale benchmarks, without any additional annotations.\nWe demonstrate that methods do not improve over simple baselines when semantic similarity is incorporated. https://t.co/3YUd7tJ55T', 'Watch 3-min intro at: https://t.co/iVujK3KeiE']",21,03,811 |
3810,11,1234541606369296384,1210312444221935616,Cyrus Rashtchian,"Very excited about our new paper on explainable clustering with Sanjoy Dasgupta, Nave Frost, and Michal Moshkovitz <LINK> #XAI #MachineLearning We consider k-means/medians clustering using decision trees with k leaves, so that cluster assignments can be explained with a handful of single feature thresholds Some surprises (1) existing DT algs don't work well (2) we get an O(k) approx for k-medians and O(k^2) for k-means (3) for two centers, we get a constant factor approx with a single threshold cut! and (4) proof techniques are very different than usual clustering arguments The set-up in pictures: we determine clusters using axis-aligned cuts, which gives globally consistent explanations for every point in the dataset <LINK> We are currently working on a series of blog posts and an implementation + empirical evaluation. Stay tuned!",https://arxiv.org/abs/2002.12538,"Clustering is a popular form of unsupervised learning for geometric data. Unfortunately, many clustering algorithms lead to cluster assignments that are hard to explain, partially because they depend on all the features of the data in a complicated way. To improve interpretability, we consider using a small decision tree to partition a data set into clusters, so that clusters can be characterized in a straightforward manner. We study this problem from a theoretical viewpoint, measuring cluster quality by the $k$-means and $k$-medians objectives: Must there exist a tree-induced clustering whose cost is comparable to that of the best unconstrained clustering, and if so, how can it be found? In terms of negative results, we show, first, that popular top-down decision tree algorithms may lead to clusterings with arbitrarily large cost, and second, that any tree-induced clustering must in general incur an $\Omega(\log k)$ approximation factor compared to the optimal clustering. On the positive side, we design an efficient algorithm that produces explainable clusters using a tree with $k$ leaves. For two means/medians, we show that a single threshold cut suffices to achieve a constant factor approximation, and we give nearly-matching lower bounds. For general $k \geq 2$, our algorithm is an $O(k)$ approximation to the optimal $k$-medians and an $O(k^2)$ approximation to the optimal $k$-means. Prior to our work, no algorithms were known with provable guarantees independent of dimension and input size. ",Explainable $k$-Means and $k$-Medians Clustering,5,"['Very excited about our new paper on explainable clustering with Sanjoy Dasgupta, Nave Frost, and Michal Moshkovitz <LINK> #XAI #MachineLearning', 'We consider k-means/medians clustering using decision trees with k leaves, so that cluster assignments can be explained with a handful of single feature thresholds', ""Some surprises (1) existing DT algs don't work well (2) we get an O(k) approx for k-medians and O(k^2) for k-means (3) for two centers, we get a constant factor approx with a single threshold cut! and (4) proof techniques are very different than usual clustering arguments"", 'The set-up in pictures: we determine clusters using axis-aligned cuts, which gives globally consistent explanations for every point in the dataset https://t.co/QP2BHRNpKL', 'We are currently working on a series of blog posts and an implementation + empirical evaluation. Stay tuned!']",20,02,843 |
3811,16,1034607219164295169,5850692,Aaron Roth,"A new paper with Sampath Kannan and Juba Ziani: ""The Downstream Effects of Affirmative Action"": <LINK> We study a two stage model with a school and employers. Suppose the school only see noisy signals about student types (exam scores). 1/5 A school can choose an admissions policy (mapping from scores to acceptance probability) for each group, and a grading policy (variance of grade, which is also a noisy signal). 2/5 Employers are Bayesians, condition on everything, and hire students if their posterior expectation is above a threshold. What goals can the school try and achieve via differential admissions policies if group type distributions differ? Here are two: 3/5 1) Equal opportunity: The probability of making it through the pipeline (admitted to school and then hired by employer) should be independent of group membership given type. 2) Group independent hiring: Employers' hiring rules should be independent of group membership. 4/5 The punchline: In general, its not possible to achieve either one of these goals if the school gives out informative grades (finite, nonzero variance). But a sufficiently selective school can achieve both if it withholds grades. 5/5",https://arxiv.org/abs/1808.09004,"We study a two-stage model, in which students are 1) admitted to college on the basis of an entrance exam which is a noisy signal about their qualifications (type), and then 2) those students who were admitted to college can be hired by an employer as a function of their college grades, which are an independently drawn noisy signal of their type. Students are drawn from one of two populations, which might have different type distributions. We assume that the employer at the end of the pipeline is rational, in the sense that it computes a posterior distribution on student type conditional on all information that it has available (college admissions, grades, and group membership), and makes a decision based on posterior expectation. We then study what kinds of fairness goals can be achieved by the college by setting its admissions rule and grading policy. For example, the college might have the goal of guaranteeing equal opportunity across populations: that the probability of passing through the pipeline and being hired by the employer should be independent of group membership, conditioned on type. Alternately, the college might have the goal of incentivizing the employer to have a group blind hiring rule. We show that both goals can be achieved when the college does not report grades. On the other hand, we show that under reasonable conditions, these goals are impossible to achieve even in isolation when the college uses an (even minimally) informative grading policy. ",Downstream Effects of Affirmative Action,5,"['A new paper with Sampath Kannan and Juba Ziani: ""The Downstream Effects of Affirmative Action"": <LINK> We study a two stage model with a school and employers. Suppose the school only see noisy signals about student types (exam scores). 1/5', 'A school can choose an admissions policy (mapping from scores to acceptance probability) for each group, and a grading policy (variance of grade, which is also a noisy signal). 2/5', 'Employers are Bayesians, condition on everything, and hire students if their posterior expectation is above a threshold. What goals can the school try and achieve via differential admissions policies if group type distributions differ? 
Here are two: 3/5', ""1) Equal opportunity: The probability of making it through the pipeline (admitted to school and then hired by employer) should be independent of group membership given type. 2) Group independent hiring: Employers' hiring rules should be independent of group membership. 4/5"", 'The punchline: In general, its not possible to achieve either one of these goals if the school gives out informative grades (finite, nonzero variance). But a sufficiently selective school can achieve both if it withholds grades. 5/5']",18,08,1181 |
3812,50,1217990000777867264,3018751880,Prof. Katelin Schutz,"New paper out tonight! <LINK> The short summary: there is too much dark matter substructure in halos (inferred gravitationally) for it to be very fuzzy, more granularity is needed This granularity can only be accommodated if the de Broglie wavelength of dark matter is sufficiently short, otherwise the dark matter gets smeared out and the observations can't be explained. This puts a lower bound on the mass of dark matter particles, 2.1 x 10^-21 eV Other bounds of similar strength exist, but it's always great to have independent corroboration with different observations, analyses, systematics, and assumptions. Maybe you could poke holes in one limit, but it's hard to poke holes in several at the same time This mass limit is high enough that it precludes some of the funkier phenomena that could have happened if dark matter were allowed to be lighter. Sadly, Nature doesn't care whether dark matter behaves in crazy cool ways, Nature just does its thing and doesn't care what we think ... that's science! Another day, another theory of dark matter to test! <LINK> @acollierastro Thank you! Tbh, pressing submit was one of the top 5 scariest things I’ve ever done 😂",https://arxiv.org/abs/2001.05503,"Warm dark matter has recently become increasingly constrained by observational inferences about the low-mass end of the subhalo mass function, which would be suppressed by dark matter free streaming in the early Universe. In this work, we point out that a constraint can be placed on ultralight bosonic dark matter (often referred to as ""fuzzy dark matter"") based on similar considerations. Recent limits on warm dark matter from strong gravitational lensing of quasars and from fluctuations in stellar streams separately translate to a lower limit of $\sim 2.1 \times 10^{-21}$ eV on the mass of an ultralight boson comprising all dark matter. These limits are complementary to constraints on ultralight dark matter from the Lyman-$\alpha$ forest and are subject to a completely different set of assumptions and systematic uncertainties. Taken together, these probes strongly suggest that dark matter with a mass $\sim 10^{-22}$ eV is not a viable way to reconcile differences between cold dark matter simulations and observations of structure on small scales. ",The Subhalo Mass Function and Ultralight Bosonic Dark Matter,6,"['New paper out tonight! <LINK> The short summary: there is too much dark matter substructure in halos (inferred gravitationally) for it to be very fuzzy, more granularity is needed', ""This granularity can only be accommodated if the de Broglie wavelength of dark matter is sufficiently short, otherwise the dark matter gets smeared out and the observations can't be explained. This puts a lower bound on the mass of dark matter particles, 2.1 x 10^-21 eV"", ""Other bounds of similar strength exist, but it's always great to have independent corroboration with different observations, analyses, systematics, and assumptions. Maybe you could poke holes in one limit, but it's hard to poke holes in several at the same time"", ""This mass limit is high enough that it precludes some of the funkier phenomena that could have happened if dark matter were allowed to be lighter. Sadly, Nature doesn't care whether dark matter behaves in crazy cool ways, Nature just does its thing and doesn't care what we think"", ""... that's science! Another day, another theory of dark matter to test! https://t.co/SzKfUeW0Im"", '@acollierastro Thank you! 
Tbh, pressing submit was one of the top 5 scariest things I’ve ever done 😂']",20,01,1172 |
3813,43,1255396953728352256,561899047,Aki Vehtari,"While Federico Pavone @FritzPfau was visiting @CSAalto, he analyzed the properties of projpred <LINK>, and now with @JuhoPiironen, @paulbuerkner and I, we have a new paper ""Using reference models in variable selection"" <LINK> <LINK> Previously @JuhoPiironen , Markus and I had demonstrated that projpred (projection predictive variable selection) can find small models with similar predictive performance as the full model (beating, e.g. lasso and glmnet), e.g. <LINK> and <LINK> <LINK> We are not usually interested in something like FDR in variable selection, as most of time we don't assume zero effects, but, e.g., @f2harrel did ask and we now demonstrate the stability, FDR, etc. of various variable selection methods when there are also ""true zero coefficients"" We also wanted to see how much of the good performance of projpred comes from 1) Bayesian inference, 2) a reference model, and 3) projection after the selection. We demonstrate that other variable selection methods can also be improved using a reference model. <LINK> projpred excels in minimal subset variable selection measured with predictive performance, FDR and selection stability. projpred wasn't designed for complete variable selection and the simple iterative approach we tested didn't beat methods specifically designed to control FDR. <LINK> Excellent @FritzPfau is now a PhD student at @Unibocconi, Milan. The code is available at <LINK> (we don't have Python implementation, yet). There will be soon a new projpred release with many new goodies. @dan_p_simpson @FritzPfau @CSAalto @JuhoPiironen @paulbuerkner Yes, please! It's a coincidence that this week turned out to be an Aalto visitor paper theme week, but yes I very much like having visitors! @vianey_lb @dan_p_simpson @FritzPfau @CSAalto @JuhoPiironen @paulbuerkner I hope you can visit us, too!",https://arxiv.org/abs/2004.13118,"Variable selection, or more generally, model reduction is an important aspect of the statistical workflow aiming to provide insights from data. In this paper, we discuss and demonstrate the benefits of using a reference model in variable selection. A reference model acts as a noise-filter on the target variable by modeling its data generating mechanism. As a result, using the reference model predictions in the model selection procedure reduces the variability and improves stability leading to improved model selection performance. Assuming that a Bayesian reference model describes the true distribution of future data well, the theoretically preferred usage of the reference model is to project its predictive distribution to a reduced model leading to projection predictive variable selection approach. Alternatively, reference models may also be used in an ad-hoc manner in combination with common variable selection methods. In several numerical experiments, we investigate the performance of the projective prediction approach as well as alternative variable selection methods with and without reference models. Our results indicate that the use of reference models generally translates into better and more stable variable selection. Additionally, we demonstrate that the projection predictive approach shows superior performance as compared to alternative variable selection methods independently of whether or not they use reference models. ",Using reference models in variable selection,8,"['While Federico Pavone @FritzPfau was visiting @CSAalto, he analyzed the properties of projpred <LINK>, and now with @JuhoPiironen, @paulbuerkner and I, we have a new paper ""Using reference models in variable selection"" <LINK> <LINK>', 'Previously @JuhoPiironen , Markus and I had demonstrated that projpred (projection predictive variable selection) can find small models with similar predictive performance as the full model (beating, e.g. lasso and glmnet), e.g. https://t.co/DezfrN1hhG and https://t.co/Iw8cY5CCF2 https://t.co/XdVdft2xnJ', 'We are not usually interested in something like FDR in variable selection, as most of time we don\'t assume zero effects, but, e.g., @f2harrel did ask and we now demonstrate the stability, FDR, etc. of various variable selection methods when there are also ""true zero coefficients""', 'We also wanted to see how much of the good performance of projpred comes from 1) Bayesian inference, 2) a reference model, and 3) projection after the selection. We demonstrate that other variable selection methods can also be improved using a reference model. https://t.co/UmICXrNSHb', ""projpred excels in minimal subset variable selection measured with predictive performance, FDR and selection stability. projpred wasn't designed for complete variable selection and the simple iterative approach we tested didn't beat methods specifically designed to control FDR. https://t.co/0z8rOLJuJO"", ""Excellent @FritzPfau is now a PhD student at @Unibocconi, Milan. The code is available at https://t.co/Opd3QqnOAj (we don't have Python implementation, yet). There will be soon a new projpred release with many new goodies."", ""@dan_p_simpson @FritzPfau @CSAalto @JuhoPiironen @paulbuerkner Yes, please! It's a coincidence that this week turned out to be an Aalto visitor paper theme week, but yes I very much like having visitors!"", '@vianey_lb @dan_p_simpson @FritzPfau @CSAalto @JuhoPiironen @paulbuerkner I hope you can visit us, too!']",20,04,1835 |
3814,88,1202311874173382656,2797706535,olga afanasjeva,"Attending #neurips2019? Join ""Multi Agent Reinforcement Learning (MARL and related topics)"" meetup at 12/12/2019 7:30 PM through the Whova app. Place TBD. Brought by the people behind #NeurIPS2016 epic party😎😄 Also, check out our new paper: <LINK> <LINK>",https://arxiv.org/abs/1912.01513,"In this work, we propose a novel memory-based multi-agent meta-learning architecture and learning procedure that allows for learning of a shared communication policy that enables the emergence of rapid adaptation to new and unseen environments by learning to learn learning algorithms through communication. Behavior, adaptation and learning to adapt emerges from the interactions of homogeneous experts inside a single agent. The proposed architecture should allow for generalization beyond the level seen in existing methods, in part due to the use of a single policy shared by all experts within the agent as well as the inherent modularity of 'Badger'. ","BADGER: Learning to (Learn [Learning Algorithms] through Multi-Agent Communication)",1,"['Attending #neurips2019? Join ""Multi Agent Reinforcement Learning (MARL and related topics)"" meetup at 12/12/2019 7:30 PM through the Whova app. Place TBD. Brought by the people behind #NeurIPS2016 epic party😎😄 Also, check out our new paper: <LINK> <LINK>']",19,12,254 |
3815,189,1335929920657240064,933091456574808064,Joaquín García de la Cruz ۞,"It's been a long way, but I can finally say it's paper day! Check out my first author paper where we use MW-like simulated galaxies to study how the flaring of geometrical thick discs is linked with different aspects of the galaxy, mergers included! 1/6 <LINK> Studying the flaring of Mono-Age Populations (MAPs) in the disc, we address three main aspects: 1) thick disc flaring, 2) age radial gradients, 3) are the geometrical thin and thick disc actually two different components? Finally, how does all the above connect with mergers? 2/6 <LINK> We find that galaxies with flat thick disc all have age radial gradients, their thin&thick disc are part of the same structure, and have quiescent merger histories (like the MW!) 3/6 <LINK> Galaxies with flared thick discs are quite more diverse, but those with busier merger histories tend to have flatter age radial gradients & their thin&thick discs tend to form a geometrical bimodal structure. 4/6 <LINK> By looking at the aspects mentioned in 2/6 we are able to find galaxies that resemble the MW. We also find galaxies different from the MW but similar to external galaxies. This helps us to place the MW in the context of the larger population of spiral galaxies. 5/6 Please, find all the details in the link and if you have any comments or questions, feel free to reach out!! 6/6",https://arxiv.org/abs/2012.02741,"Using simulated galaxies in their cosmological context, we analyse how the flaring of mono-age populations (MAPs) influences the flaring and the age structure of geometrically-defined thick discs. We also explore under which circumstances the geometric thin and thick discs are meaningfully distinct components, or are part of a single continuous structure as in the Milky Way. We find that flat thick discs are created when MAPs barely flare or have low surface density at the radius where they start flaring. When looking at the vertical distribution of MAPs, these galaxies show a continuous thin/thick structure. They also have radial age gradients and tend to have quiescent merger histories. Those characteristics are consistent with what is observed in the Milky Way. Flared thick discs, on the other hand, are created when the MAPs that flare have a high surface density at the radius where they start flaring. The thick discs' scale-heights can either be dominated by multiple MAPs or just a few, depending on the mass and scale-height distribution of the MAPs. In a large fraction of these galaxies, thin and thick discs are clearly distinct structures. Finally, flared thick discs have diverse radial age gradients and merger histories, with galaxies that are more massive or that have undergone massive mergers showing flatter age radial gradients in their thick disc. ",On the Flaring of Thick Disc of Galaxies: Insights from Simulations,6,"[""It's been a long way, but I can finally say it's paper day! Check out my first author paper where we use MW-like simulated galaxies to study how the flaring of geometrical thick discs is linked with different aspects of the galaxy, mergers included! 1/6\n<LINK>"", 'Studying the flaring of Mono-Age Populations (MAPs) in the disc, we address three main aspects: 1) thick disc flaring, 2) age radial gradients, 3) are the geometrical thin and thick disc actually two different components? Finally, how does all the above connect with mergers? 
2/6 https://t.co/sqYhli2SdJ', 'We find that galaxies with flat thick disc all have age radial gradients, their thin&thick disc are part of the same structure, and have quiescent merger histories (like the MW!) 3/6 https://t.co/U8BSRsBubD', 'Galaxies with flared thick discs are quite more diverse, but those with busier merger histories tend to have flatter age radial gradients & their thin&thick discs tend to form a geometrical bimodal structure. 4/6 https://t.co/aqWXzlr7Jb', 'By looking at the aspects mentioned in 2/6 we are able to find galaxies that resemble the MW. We also find galaxies different from the MW but similar to external galaxies. This helps us to place the MW in the context of the larger population of spiral galaxies. 5/6', 'Please, find all the details in the link and if you have any comments or questions, feel free to reach out!! 6/6']",20,12,1336 |
3816,47,1255151465808592902,855118392348610560,Joost Huizinga,"New Go-Explore paper “First return then explore”, featuring: superhuman performance on all unsolved* and all hard-exploration Atari games, tackling of a hard-exploration robotics task, goal-conditioned policies to deal with stochasticity, and more! Paper: <LINK> * Here we consider a game solved when an agent obtains super human performance when evaluated on an environment with sticky actions. Shoutout to Agent57, which recently managed to achieve superhuman performance on all Atari games, though without sticky actions. The new paper also features even further improved scores on Montezuma’s Revenge and Pitfall and it demonstrates the potential of leveraging a policy for exploration. Paper in collaboration with @AdrienLE (shared first author), @joelbot3000, @kenneth0stanley, and @jeffclune",http://arxiv.org/abs/2004.12919,"The promise of reinforcement learning is to solve complex sequential decision problems autonomously by specifying a high-level reward function only. However, reinforcement learning algorithms struggle when, as is often the case, simple and intuitive rewards provide sparse and deceptive feedback. Avoiding these pitfalls requires thoroughly exploring the environment, but creating algorithms that can do so remains one of the central challenges of the field. We hypothesise that the main impediment to effective exploration originates from algorithms forgetting how to reach previously visited states (""detachment"") and from failing to first return to a state before exploring from it (""derailment""). We introduce Go-Explore, a family of algorithms that addresses these two challenges directly through the simple principles of explicitly remembering promising states and first returning to such states before intentionally exploring. Go-Explore solves all heretofore unsolved Atari games and surpasses the state of the art on all hard-exploration games, with orders of magnitude improvements on the grand challenges Montezuma's Revenge and Pitfall. We also demonstrate the practical potential of Go-Explore on a sparse-reward pick-and-place robotics task. Additionally, we show that adding a goal-conditioned policy can further improve Go-Explore's exploration efficiency and enable it to handle stochasticity throughout training. The substantial performance gains from Go-Explore suggest that the simple principles of remembering states, returning to them, and exploring from them are a powerful and general approach to exploration, an insight that may prove critical to the creation of truly intelligent learning agents. ","First return, then explore",3,"['New Go-Explore paper “First return then explore”, featuring: superhuman performance on all unsolved* and all hard-exploration Atari games, tackling of a hard-exploration robotics task, goal-conditioned policies to deal with stochasticity, and more! Paper: <LINK>', '* Here we consider a game solved when an agent obtains super human performance when evaluated on an environment with sticky actions. Shoutout to Agent57, which recently managed to achieve superhuman performance on all Atari games, though without sticky actions.', 'The new paper also features even further improved scores on Montezuma’s Revenge and Pitfall and it demonstrates the potential of leveraging a policy for exploration. Paper in collaboration with @AdrienLE (shared first author), @joelbot3000, @kenneth0stanley, and @jeffclune']",20,04,798 |
3817,7,1078219542638276608,892059194240532480,Mikel Artetxe,"Check out our new paper ""Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond"" (w/ Holger Schwenk). New SOTA on cross-lingual transfer (XNLI, MLDoc) and bitext mining (BUCC) using a shared encoder for 93 languages! <LINK> <LINK>",https://arxiv.org/abs/1812.10464,"We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts. Our system uses a single BiLSTM encoder with a shared BPE vocabulary for all languages, which is coupled with an auxiliary decoder and trained on publicly available parallel corpora. This enables us to learn a classifier on top of the resulting embeddings using English annotated data only, and transfer it to any of the 93 languages without any modification. Our experiments in cross-lingual natural language inference (XNLI dataset), cross-lingual document classification (MLDoc dataset) and parallel corpus mining (BUCC dataset) show the effectiveness of our approach. We also introduce a new test set of aligned sentences in 112 languages, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low-resource languages. Our implementation, the pre-trained encoder and the multilingual test set are available at this https URL ","Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond",1,"['Check out our new paper ""Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond"" (w/ Holger Schwenk). New SOTA on cross-lingual transfer (XNLI, MLDoc) and bitext mining (BUCC) using a shared encoder for 93 languages!\n<LINK> <LINK>']",18,12,266 |
3818,66,1350126549270470666,742698661818339329,Markus Löning,"What are the key software design patterns that are used in the development of #MachineLearning #Toolkits like @scikit_learn, #Weka or #mlr3? We describe some of the common patterns in our new paper: <LINK> This is a first draft - feedback very welcome!",https://arxiv.org/abs/2101.04938,"Machine learning (ML) and AI toolboxes such as scikit-learn or Weka are workhorses of contemporary data scientific practice -- their central role being enabled by usable yet powerful designs that allow to easily specify, train and validate complex modeling pipelines. However, despite their universal success, the key design principles in their construction have never been fully analyzed. In this paper, we attempt to provide an overview of key patterns in the design of AI modeling toolboxes, taking inspiration, in equal parts, from the field of software engineering, implementation patterns found in contemporary toolboxes, and our own experience from developing ML toolboxes. In particular, we develop a conceptual model for the AI/ML domain, with a new type system, called scientific types, at its core. Scientific types capture the scientific meaning of common elements in ML workflows based on the set of operations that we usually perform with them (i.e. their interface) and their statistical properties. From our conceptual analysis, we derive a set of design principles and patterns. We illustrate that our analysis can not only explain the design of existing toolboxes, but also guide the development of new ones. We intend our contribution to be a state-of-art reference for future toolbox engineers, a summary of best practices, a collection of ML design patterns which may become useful for future research, and, potentially, the first steps towards a higher-level programming paradigm for constructing AI. ","Designing Machine Learning Toolboxes: Concepts, Principles and Patterns",1,"['What are the key software design patterns that are used in the development of #MachineLearning #Toolkits like @scikit_learn, #Weka or #mlr3? We describe some of the common patterns in our new paper: <LINK> This is a first draft - feedback very welcome!']",21,01,252 |
3819,196,1272574484336324609,48712353,Sungjin Ahn 🇺🇦,"Humans can build 3D models of individual objects from partial observations of a complex scene. Check out our new paper about ROOTS for unsupervised representation and rendering of modular, compositional, and 3D objects and scenes: <LINK> More videos follow: <LINK> with Chang Chen (@ShenC) & Fei Deng Model Overview <LINK> Object-wise Disentanglement <LINK> Compositional Rendering <LINK>",https://arxiv.org/abs/2006.06130,"A crucial ability of human intelligence is to build up models of individual 3D objects from partial scene observations. Recent works achieve object-centric generation but without the ability to infer the representation, or achieve 3D scene representation learning but without object-centric compositionality. Therefore, learning to represent and render 3D scenes with object-centric compositionality remains elusive. In this paper, we propose a probabilistic generative model for learning to build modular and compositional 3D object models from partial observations of a multi-object scene. The proposed model can (i) infer the 3D object representations by learning to search and group object areas and also (ii) render from an arbitrary viewpoint not only individual objects but also the full scene by compositing the objects. The entire learning process is unsupervised and end-to-end. In experiments, in addition to generation quality, we also demonstrate that the learned representation permits object-wise manipulation and novel scene generation, and generalizes to various settings. Results can be found on our project website: this https URL ",ROOTS: Object-Centric Representation and Rendering of 3D Scenes,5,"['Humans can build 3D models of individual objects from partial observations of a complex scene. Check out our new paper about ROOTS for unsupervised representation and rendering of modular, compositional, and 3D objects and scenes: <LINK> \n\nMore videos follow: <LINK>', 'with Chang Chen (@ShenC) & Fei Deng', 'Model Overview https://t.co/fRTfPaNwUd', 'Object-wise Disentanglement https://t.co/fWpsXac9VX', 'Compositional Rendering https://t.co/3WpkI14OMU']",20,06,389 |
3820,27,1453551530812993537,1350139361824673792,Duan Jiafei,A new workshop paper from my group accepted to #NeurIPS2021 workshop! We worked with Prof Renée's group to reconstruct her classic work on VOE in a 3D environment for training AI in high-level physical reasoning using abstract features and rules. <LINK>,https://arxiv.org/abs/2110.05836,"Recent work in cognitive reasoning and computer vision has engendered an increasing popularity for the Violation-of-Expectation (VoE) paradigm in synthetic datasets. Inspired by work in infant psychology, researchers have started evaluating a model's ability to discriminate between expected and surprising scenes as a sign of its reasoning ability. Existing VoE-based 3D datasets in physical reasoning only provide vision data. However, current cognitive models of physical reasoning by psychologists reveal infants create high-level abstract representations of objects and interactions. Capitalizing on this knowledge, we propose AVoE: a synthetic 3D VoE-based dataset that presents stimuli from multiple novel sub-categories for five event categories of physical reasoning. Compared to existing work, AVoE is armed with ground-truth labels of abstract features and rules augmented to vision data, paving the way for high-level symbolic predictions in physical reasoning tasks. ","AVoE: A Synthetic 3D Dataset on Understanding Violation of Expectation for Artificial Cognition",1,"['A new workshop paper from my group accepted to #NeurIPS2021 workshop!\n\nWe worked with Prof Renée's group to reconstruct her classic work on VOE in a 3D environment for training AI in high-level physical reasoning using abstract features and rules. \n<LINK>']",21,10,253 |
3821,143,1184856036076863488,1030500158159695872,Patrick Lewis,"QA Models should work in any language. So, we're releasing MLQA, a new cross-lingual QA evaluation dataset! Check out the paper and dataset: <LINK> <LINK> With Barlas Oguz, Ruty Rinott, @riedelcastro, @SchwenkHolger @facebookai @ucl_nlp 🚀 <LINK> @riedelcastro @SchwenkHolger @facebookai @ucl_nlp MLQA is a highly parallel Extractive QA dataset in 7 diverse languages - English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. Because its highly parallel, we can even evaluate unusual language combinations like questions in Arabic and documents in Hindi @riedelcastro @SchwenkHolger @facebookai @ucl_nlp We train QA models using SQuAD, and test *zero-shot* in target languages. Even powerful models like XLM and MT approaches struggle to transfer without significant performance drops. We hope that MLQA will be a useful testbed going forward for cross-lingual research. @riedelcastro @SchwenkHolger @facebookai @ucl_nlp I've always loved chocolate... @sameer_ @riedelcastro @SchwenkHolger @facebookai @ucl_nlp I pronounce it ""emm eeel kwah"".....",https://arxiv.org/abs/1910.07475,"Question answering (QA) models have shown rapid progress enabled by the availability of large, high-quality benchmark datasets. Such annotated datasets are difficult and costly to collect, and rarely exist in languages other than English, making training QA systems in other languages challenging. An alternative to building large monolingual training datasets is to develop cross-lingual systems which can transfer to a target language without requiring training data in that language. In order to develop such systems, it is crucial to invest in high quality multilingual evaluation benchmarks to measure progress. We present MLQA, a multi-way aligned extractive QA evaluation benchmark intended to spur research in this area. MLQA contains QA instances in 7 languages, namely English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. It consists of over 12K QA instances in English and 5K in each other language, with each QA instance being parallel between 4 languages on average. MLQA is built using a novel alignment context strategy on Wikipedia articles, and serves as a cross-lingual extension to existing extractive QA datasets. We evaluate current state-of-the-art cross-lingual representations on MLQA, and also provide machine-translation-based baselines. In all cases, transfer results are shown to be significantly behind training-language performance. ",MLQA: Evaluating Cross-lingual Extractive Question Answering,5,"[""QA Models should work in any language. So, we're releasing MLQA, a new cross-lingual QA evaluation dataset! \n\nCheck out the paper and dataset:\n<LINK>\n<LINK>\nWith Barlas Oguz, Ruty Rinott, @riedelcastro, @SchwenkHolger\n\n@facebookai @ucl_nlp 🚀 <LINK>"", '@riedelcastro @SchwenkHolger @facebookai @ucl_nlp MLQA is a highly parallel Extractive QA dataset in 7 diverse languages - English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. \n\nBecause its highly parallel, we can even evaluate unusual language combinations like questions in Arabic and documents in Hindi', '@riedelcastro @SchwenkHolger @facebookai @ucl_nlp We train QA models using SQuAD, and test *zero-shot* in target languages. 
Even powerful models like XLM and MT approaches struggle to transfer without significant performance drops.\n\nWe hope that MLQA will be a useful testbed going forward for cross-lingual research.', ""@riedelcastro @SchwenkHolger @facebookai @ucl_nlp I've always loved chocolate..."", '@sameer_ @riedelcastro @SchwenkHolger @facebookai @ucl_nlp I pronounce it ""emm eeel kwah"".....']",19,10,1067 |
3822,44,1276315120650588160,82733042,Shubham Kanodia,"I'm happy to share our new @HPFspectrograph paper on the confirmation of warm Super Neptune TOI-1728b orbiting an M0 star. This planet is a part of a population of just 2 other super Neptunes around M dwarfs. <LINK> (1/n) <LINK> Detected in @TESSatMIT data, we confirmed the transit with ground based photometry from @PSUScience Davey Lab and Perkins 17"" to measure a radius of ~ 5 Earth radii and a period of 3.5 days. (2/n) <LINK> We used @HPFspectrograph to measure its mass to be ~ 26 Earth masses, and also followed up during transit to place an upper limit on He 10830 absorption. (3/n) Furthermore, its Transmission Spectroscopy Metric of ~ 130 and a relatively bright host star (J~9.6) make it a gootd candidate for transmission spectroscopy with #JWST, as well Ly Alpha searches with @NASAHubble !! (4/n) <LINK> This has been a collaborative effort with significant work put in groups across multiple institutions. Thank you everyone! @gummiks, @SuvrathM, @Astro_Wright, @lesliehebb, @rcterrien, @jiayin_dong - sorry if I missed anyone. (5/n)",https://arxiv.org/abs/2006.14546,"We confirm the planetary nature of TOI-1728b using a combination of ground-based photometry, near-infrared Doppler velocimetry and spectroscopy with the Habitable-zone Planet Finder.TOI-1728 is an old, inactive M0 star with \teff{} $= 3980^{+31}_{-32}$ K, which hosts a transiting super Neptune at an orbital period of $\sim$ 3.49 days. Joint fitting of the radial velocities and TESS and ground-based transits yields a planetary radius of $5.05_{-0.17}^{+0.16}$ R$_{\oplus}$, mass $26.78_{-5.13}^{+5.43}$ M$_{\oplus}$ and eccentricity $0.057_{-0.039}^{+0.054}$. We estimate the stellar properties, and perform a search for He 10830 \AA absorption during the transit of this planet and claim a null detection with an upper limit of 1.1$\%$ with 90\% confidence. A deeper level of He 10830 \AA ~ absorption has been detected in the planet atmosphere of GJ 3470b, a comparable gaseous planet. TOI-1728b is the largest super Neptune -- the intermediate subclass of planets between Neptune and the more massive gas-giant planets -- discovered around an M dwarf. With its relatively large mass and radius, TOI-1728 represents a valuable datapoint in the M-dwarf exoplanet mass-radius diagram, bridging the gap between the lighter Neptune-sized planets and the heavier Jovian planets known to orbit M-dwarfs. With a low bulk density of $1.14_{-0.24}^{+0.26}$ g/cm$^3$, and orbiting a bright host star (J $\sim 9.6$, V $\sim 12.4$), TOI-1728b is also a promising candidate for transmission spectroscopy both from the ground and from space, which can be used to constrain planet formation and evolutionary models. ","TOI-1728b: The Habitable-zone Planet Finder confirms a warm super Neptune orbiting an M dwarf host",5,"[""I'm happy to share our new @HPFspectrograph paper on the confirmation of warm Super Neptune TOI-1728b orbiting an M0 star. This planet is a part of a population of just 2 other super Neptunes around M dwarfs.\xa0\n<LINK>\n\n(1/n) <LINK>"", 'Detected in @TESSatMIT data, we confirmed the transit with ground based photometry from @PSUScience Davey Lab and Perkins 17"" to measure a radius of ~ 5 Earth radii and a period of 3.5 days. \n\n(2/n) https://t.co/2BsnRO9Dtq', 'We used @HPFspectrograph to measure its mass to be ~ 26 Earth masses, and also followed up during transit to place an upper limit on He 10830 absorption.\n\n(3/n)', 'Furthermore, its Transmission Spectroscopy Metric of ~ 130 and a relatively bright host star (J~9.6) make it a gootd candidate for transmission\xa0spectroscopy with #JWST, as well Ly Alpha searches with\xa0@NASAHubble !!\n\n(4/n) https://t.co/1K4X4T46US', 'This has been a collaborative effort with significant work put in groups across multiple institutions. Thank you everyone!\n@gummiks, @SuvrathM, @Astro_Wright, @lesliehebb, @rcterrien, @jiayin_dong - sorry if I missed anyone.\n(5/n)']",20,06,1053 |