text |
---|
266,59,1321139111063814144,1074633382452051969,Kimin,"Excited to share our #NeurIPS2020 paper that introduces a new model-based RL method to learn the multi-modal transition distribution in an unsupervised manner 🎓<LINK> 💻<LINK> w/@younggyoseo @clavera_i @KurutachThanard @jinwoos0417 @pabbeel 1/N Learning an ensemble of dynamics models is a common practice in model-based RL. However, when the transition dynamics of environments change, dynamics models fail to provide accurate predictions because the target transition dynamics follow a multi-modal distribution. 2/N <LINK> To tackle this problem, we propose a trajectory-wise multiple choice learning (T-MCL) that is a novel combination of MCL and model-based RL. The main idea is to force each dynamics model to specialize in different environments by only updating the most accurate dynamics model 3/N <LINK> To utilize the specialized dynamics models more effectively, we propose adaptive planning that selects actions using the most accurate model over a recent experience, which can be interpreted as finding the nearest cluster to the current environment. 4/N <LINK> Can T-MCL make dynamics models be specialized for a certain subset of training environments with similar dynamics? Yes. T-MCL can separate trajectories from different environments in an unsupervised manner (i.e., without any additional information about environments). 5/N <LINK> We also found that learning specialized dynamics models can improve the generalization performance on unseen (yet related) environments with different transition dynamics. We show T-MCL outperforms model-based RL methods (CaDM, GrBAL) and model-free meta-RL method (PEARL). 6/N <LINK> More qualitative analysis: We first train T-MCL on CartPole environments with different masses and visualize the agents' behavior by manually assigning specialized dynamics models. We show agents act as if they are light-weight or heavy-weight according to assigned models. 7/N <LINK> For more details, please check out our paper: <LINK> open-source code: <LINK> webpage: <LINK> Thank you for your attention! 8/N Special thanks to co-first author @younggyoseo and collaborators @clavera_i,@KurutachThanard, @jinwoos0417, @pabbeel 🙏",https://arxiv.org/abs/2010.13303,"Model-based reinforcement learning (RL) has shown great potential in various control tasks in terms of both sample-efficiency and final performance. However, learning a generalizable dynamics model robust to changes in dynamics remains a challenge since the target transition dynamics follow a multi-modal distribution. In this paper, we present a new model-based RL algorithm, coined trajectory-wise multiple choice learning, that learns a multi-headed dynamics model for dynamics generalization. The main idea is updating the most accurate prediction head to specialize each head in certain environments with similar dynamics, i.e., clustering environments. Moreover, we incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector, enabling the model to perform online adaptation to unseen environments. Finally, to utilize the specialized prediction heads more effectively, we propose an adaptive planning method, which selects the most accurate prediction head over a recent experience. Our method exhibits superior zero-shot generalization performance across a variety of control tasks, compared to state-of-the-art RL methods. 
Source code and videos are available at this https URL ","Trajectory-wise Multiple Choice Learning for Dynamics Generalization in |
Reinforcement Learning",9,"['Excited to share our #NeurIPS2020 paper that introduces a new model-based RL method to learn the multi-modal transition distribution in an unsupervised manner\n🎓<LINK>\n💻<LINK>\nw/@younggyoseo @clavera_i @KurutachThanard @jinwoos0417 @pabbeel \n1/N', 'Learning an ensemble of dynamics models is a common practice in model-based RL. However, when the transition dynamics of environments change, dynamics models fail to provide accurate predictions because the target transition dynamics follow a multi-modal distribution.\n\n2/N https://t.co/yiNmVqTNUt', 'To tackle this problem, we propose a trajectory-wise multiple choice learning (T-MCL) that is a novel combination of MCL and model-based RL. The main idea is to force each dynamics model to specialize in different environments by only updating the most accurate dynamics model\n3/N https://t.co/5cJlGJBFqt', 'To utilize the specialized dynamics models more effectively, we propose adaptive planning that selects actions using the most accurate model over a recent experience, which can be interpreted as finding the nearest cluster to the current environment. \n\n4/N https://t.co/Pym57oaVVz', 'Can T-MCL make dynamics models be specialized for a certain subset of training environments with similar dynamics?\n\nYes. T-MCL can separate trajectories from different environments in an unsupervised manner (i.e., without any additional information about environments).\n\n5/N https://t.co/Cl4eGUvIGA', 'We also found that learning specialized dynamics models can improve the generalization performance on unseen (yet related) environments with different transition dynamics. We show T-MCL outperforms model-based RL methods (CaDM, GrBAL) and model-free meta-RL method (PEARL).\n6/N https://t.co/nhGFENVAuL', ""More qualitative analysis: We first train T-MCL on CartPole environments with different masses and visualize the agents' behavior by manually assigning specialized dynamics models. We show agents act as if they are light-weight or heavy-weight according to assigned models.\n\n7/N https://t.co/jY1T2oFzPl"", 'For more details, please check out \nour paper: https://t.co/nhKgJJJ9JW \nopen-source code: https://t.co/fRg25TSOdR \nwebpage: https://t.co/0u8oGOPPpj \n\nThank you for your attention! \n\n8/N', 'Special thanks to co-first author @younggyoseo and collaborators @clavera_i,@KurutachThanard, @jinwoos0417, @pabbeel 🙏']",20,10,2173 |
267,69,1383068017123336196,22148802,Leo C. Stein 🦁,"🚨 New preprint day! 🎉 ""Comparing Remnant Properties from Horizon Data and Asymptotic Data in Numerical Relativity"" Iozzo, Khera, Stein, et al. <LINK> What's this paper all about? A 🧵 for #BlackHoleWeek 1/8 <LINK> When two black holes merge, we get one remnant black hole and a bunch of gravitational waves. Measuring the properties of the remnant is very important — it tells us things like its mass, spin, or ""Will this black hole be kicked out of its host galaxy?"" 2/8 <LINK> Previously, numerical relativity codes measured these remnant properties in the ""bulk"" of the spacetime — more precisely, as integrals over its (apparent) horizon. Some of these integrals are well justified, others are not! But, there's another way to measure those properties 3/8 We can go infinitely far away, to ""asymptotic future null infinity,"" where the gravitational waves end up. There, we have well-defined integrals for mass, momentum, and angular momentum (technically: BMS charges). 4/8 To perform those integrals at future null infinity, we needed a robust code that takes data from the bulk of the spacetime to its boundary. This step is called Cauchy-characteristic evolution (CCE) and our improved code has been around for ~1 year (<LINK>) 5/8 So now that we have good asymptotic data, we can do those charge integrals ""at infinity"" correctly. Of course we compare to the old method on the horizons, and find really great agreement 6/8 <LINK> Moreover, we have a good theoretical understanding for *why* the two approaches should agree — but only for some of the remnant quantities! This has to do with symmetries and our patron saint, Dr. Emmy Noether 7/8 <LINK> In the future, we'll use the asymptotic quantities instead of the horizon remnant quantities, because we trust them more. The simulation data will be made public, and the code to do this is already in our python package, 'scri'. <LINK> Happy #BlackHoleWeek! 8/8 @CJHandmer Indeed! We have another paper in the works where we do care about the supertranslations (not in this paper, though — here we just needed a Poincare subalgebra of BMS)",https://arxiv.org/abs/2104.07052,"We present a new study of remnant black hole properties from 13 binary black hole systems, numerically evolved using the Spectral Einstein Code. The mass, spin, and recoil velocity of each remnant were determined quasi-locally from apparent horizon data and asymptotically from Bondi data $(h, \psi_4, \psi_3, \psi_2, \psi_1)$ computed at future null infinity using SpECTRE's Cauchy characteristic evolution. We compare these independent measurements of the remnant properties in the bulk and on the boundary of the spacetime, giving insight into how well asymptotic data are able to reproduce local properties of the remnant black hole in numerical relativity. We also discuss the theoretical framework for connecting horizon quantities to asymptotic quantities and how it relates to our results. This study recommends a simple improvement to the recoil velocities reported in the Simulating eXtreme Spacetimes waveform catalog, provides an improvement to future surrogate remnant models, and offers new analysis techniques for evaluating the physical accuracy of numerical simulations. ","Comparing Remnant Properties from Horizon Data and Asymptotic Data in |
Numerical Relativity",9,"['🚨 New preprint day! 🎉\n""Comparing Remnant Properties from Horizon Data and Asymptotic Data in Numerical Relativity""\nIozzo, Khera, Stein, et al.\n<LINK>\nWhat\'s this paper all about? A 🧵 for #BlackHoleWeek 1/8 <LINK>', 'When two black holes merge, we get one remnant black hole and a bunch of gravitational waves. Measuring the properties of the remnant is very important — it tells us things like its mass, spin, or ""Will this black hole be kicked out of its host galaxy?"" 2/8 https://t.co/wx5671nwop', 'Previously, numerical relativity codes measured these remnant properties in the ""bulk"" of the spacetime — more precisely, as integrals over its (apparent) horizon. Some of these integrals are well justified, others are not!\nBut, there\'s another way to measure those properties\n3/8', 'We can go infinitely far away, to ""asymptotic future null infinity,"" where the gravitational waves end up. There, we have well-defined integrals for mass, momentum, and angular momentum (technically: BMS charges). 4/8', 'To perform those integrals at future null infinity, we needed a robust code that takes data from the bulk of the spacetime to its boundary. This step is called Cauchy-characteristic evolution (CCE) and our improved code has been around for ~1 year (https://t.co/UvbK6mpYGA) 5/8', 'So now that we have good asymptotic data, we can do those charge integrals ""at infinity"" correctly. Of course we compare to the old method on the horizons, and find really great agreement 6/8 https://t.co/G7pTOd8SVj', 'Moreover, we have a good theoretical understanding for *why* the two approaches should agree — but only for some of the remnant quantities! This has to do with symmetries and our patron saint, Dr. Emmy Noether 7/8 https://t.co/IRYfAA9uXP', ""In the future, we'll use the asymptotic quantities instead of the horizon remnant quantities, because we trust them more. The simulation data will be made public, and the code to do this is already in our python package, 'scri'.\n\nhttps://t.co/6ajt3d5Lk2\n\nHappy #BlackHoleWeek! 8/8"", '@CJHandmer Indeed! We have another paper in the works where we do care about the supertranslations (not in this paper, though — here we just needed a Poincare subalgebra of BMS)']",21,04,2097 |
268,70,1014875131842359298,252867237,Juan Miguel Arrazola,"New paper by @XanaduAI : Gaussian Boson Sampling using threshold detectors <LINK> It's a special paper because (i) it's my first collaboration with my close friend @corsairenard and (ii) it's the first appearance of the Torontonian <LINK> @quantum_neville @XanaduAI @corsairenard Yes! I do try to have fun with my research, otherwise what's the point of doing it for a living ;-) @quantumVerd @XanaduAI @corsairenard ON states where a coincidence, but yes, the Torontonian is an homage to this great city that we call home.",https://arxiv.org/abs/1807.01639,"We study what is arguably the most experimentally appealing Boson Sampling architecture: Gaussian states sampled with threshold detectors. We show that in this setting, the probability of observing a given outcome is related to a matrix function that we name the Torontonian, which plays an analogous role to the permanent or the Hafnian in other models. We also prove that, provided that the probability of observing two or more photons in a single output mode is sufficiently small, our model remains intractable to simulate classically under standard complexity-theoretic conjectures. Finally, we leverage the mathematical simplicity of the model to introduce a physically motivated, exact sampling algorithm for all Boson Sampling models that employ Gaussian states and threshold detectors. ",Gaussian Boson Sampling using threshold detectors,3,"[""New paper by @XanaduAI : Gaussian Boson Sampling using threshold detectors <LINK> \nIt's a special paper because (i) it's my first collaboration with my close friend @corsairenard and (ii) it's the first appearance of the Torontonian <LINK>"", ""@quantum_neville @XanaduAI @corsairenard Yes! I do try to have fun with my research, otherwise what's the point of doing it for a living ;-)"", '@quantumVerd @XanaduAI @corsairenard ON states where a coincidence, but yes, the Torontonian is an homage to this great city that we call home.']",18,07,523 |
269,57,971818884335534081,1202760024,Stacy McGaugh,"New, very brief paper on the the arXiv pointing out that the “surprisingly” strong 21cm absorption detected at z=17 is entirely expected if the universe is devoid of non-baryonic dark matter. <LINK> I was talking about this 20 years ago. <LINK> Hopefully there will be more data like these on a timescale shorter than decades. And yes, I understand all the problems this raises. That’s why it is important. See <LINK> @MBKplus Everything heads towards the Om=1 solution in that limit. If we’re going to contemplate some modification of gravity that does this, it has to recover GR in the right limit (of course) and not change the expansion history in the early U. Late times a different matter of course. @MBKplus Gonna refer everyone to the many articles I’ve written on this. Most are old now but ADS will know then. But in a nutshell I envision the minimal damage possible to cosmology as we know it. It’s just the “dark” pieces that I suspect of being proxies for something deeper. @MBKplus Not sure that one is! @MBKplus D’oh. Now you’ve made me think. I think Bob Sanders did this a loooong time ago, but can’t remember where. I may be forced to reread our ARA&A review. @MBKplus It looks like the mass density enters through the relation for H(z). So when we rewrite the equation in terms of fb, we’re implicitly setting Om=Ob = small, so we’re effectively reducing H, so increasing the path length in the early universe, increasing the opacity. @MBKplus I’m not sure we can. This is an eternal complaint of mine: anything for which we need distance metrics, LCDM works great. Don’t even know how else to talk about it. Vice-Versa for the dynamics of bound systems. perhaps the early universe really is this simple? @MBKplus Which is to say, things are definitely messed up!",https://arxiv.org/abs/1803.02365,"The recently reported detection of redshifted 21cm absorption at $z \approx 17$ is a significant advance in the exploration of the cosmic dark ages. The observed signal ($T_{\mathrm{21}} \approx -0.5$ K with the limit $T_{\mathrm{21}} < -0.3$ K at 99\% c.l.) is anomalously strong for $\Lambda$CDM, which predicts at most $-0.24$ K. Here I show that the strong observed signal is expected in a purely baryonic universe. ","Strong Hydrogen Absorption at Cosmic Dawn: the Signature of a Baryonic |
Universe",9,"['New, very brief paper on the the arXiv pointing out that the “surprisingly” strong 21cm absorption detected at z=17 is entirely expected if the universe is devoid of non-baryonic dark matter. \n<LINK>\nI was talking about this 20 years ago. <LINK>', 'Hopefully there will be more data like these on a timescale shorter than decades. \n\nAnd yes, I understand all the problems this raises. That’s why it is important. See\n\nhttps://t.co/5LEqFhRup2', '@MBKplus Everything heads towards the Om=1 solution in that limit. If we’re going to contemplate some modification of gravity that does this, it has to recover GR in the right limit (of course) and not change the expansion history in the early U. Late times a different matter of course.', '@MBKplus Gonna refer everyone to the many articles I’ve written on this. Most are old now but ADS will know then. But in a nutshell I envision the minimal damage possible to cosmology as we know it. It’s just the “dark” pieces that I suspect of being proxies for something deeper.', '@MBKplus Not sure that one is!', '@MBKplus D’oh. Now you’ve made me think. I think Bob Sanders did this a loooong time ago, but can’t remember where. I may be forced to reread our ARA&A review.', '@MBKplus It looks like the mass density enters through the relation for H(z). So when we rewrite the equation in terms of fb, we’re implicitly setting Om=Ob = small, so we’re effectively reducing H, so increasing the path length in the early universe, increasing the opacity.', '@MBKplus I’m not sure we can. This is an eternal complaint of mine: anything for which we need distance metrics, LCDM works great. Don’t even know how else to talk about it. Vice-Versa for the dynamics of bound systems. perhaps the early universe really is this simple?', '@MBKplus Which is to say, things are definitely messed up!']",18,03,1783 |
270,266,1313644071294955520,762359343656361984,James Zou,"Telehealth is rapidly growing (esp. due to #COVID19), but poor quality image is a critical challenge. We propose a #AI approach to identify photos that are subpar for clinical use and help guide patients to take better photos. New #psb21 paper <LINK> <LINK> Super work by Kailas, @RoxanaDaneshjou @RobNovoaMD @AChiouMD and Justin Ko!",https://arxiv.org/abs/2010.02086,"Telehealth is an increasingly critical component of the health care ecosystem, especially due to the COVID-19 pandemic. Rapid adoption of telehealth has exposed limitations in the existing infrastructure. In this paper, we study and highlight photo quality as a major challenge in the telehealth workflow. We focus on teledermatology, where photo quality is particularly important; the framework proposed here can be generalized to other health domains. For telemedicine, dermatologists request that patients submit images of their lesions for assessment. However, these images are often of insufficient quality to make a clinical diagnosis since patients do not have experience taking clinical photos. A clinician has to manually triage poor quality images and request new images to be submitted, leading to wasted time for both the clinician and the patient. We propose an automated image assessment machine learning pipeline, TrueImage, to detect poor quality dermatology photos and to guide patients in taking better photos. Our experiments indicate that TrueImage can reject 50% of the sub-par quality images, while retaining 80% of good quality images patients send in, despite heterogeneity and limitations in the training data. These promising results suggest that our solution is feasible and can improve the quality of teledermatology care. ","TrueImage: A Machine Learning Algorithm to Improve the Quality of |
Telehealth Photos",2,"['Telehealth is rapidly growing (esp. due to #COVID19), but poor quality image is a critical challenge. \n\nWe propose a #AI approach to identify photos that are subpar for clinical use and help guide patients to take better photos. New #psb21 paper <LINK> <LINK>', 'Super work by Kailas, @RoxanaDaneshjou @RobNovoaMD @AChiouMD and Justin Ko!']",20,10,334 |
271,175,1326959189789454340,2883271903,Yuxiang Wu,"Having scalability issues with your ODQA systems?🆘 Adaptive Computation can help! We find that adaptive computation and global scheduling can reduce computation by 4.3x while retaining 95% of the accuracy on SQuAD-Open!🚀 #EMNLP2020 <LINK> [1/5] <LINK> For ODQA systems, the computational bottleneck often lies in the number of layers that the document reader has to process the passages through. We introduce an adaptive computation and global scheduling strategy for learning to allocate computation across multiple passages! [2/5] The global scheduling strategy is further enhanced with reinforcement learning. Our experiments show that both global scheduling and RL training are essential for improving our adaptive computation method. [3/5] <LINK> Our further analysis demonstrates that the proposed method can focus its computation on passages that contain the answer, and its scheduling policy learns an exploration-exploitation trade-off. [4/5] <LINK> With my amazing co-authors @PMinervini, Pontus, @riedelcastro at @ucl_nlp - paper: <LINK>, slides <LINK>. Join us at #EMNLP2020, Gather Session 2J (10:00am UTC, Nov 17)! [5/5]",https://arxiv.org/abs/2011.05435,"Most approaches to Open-Domain Question Answering consist of a light-weight retriever that selects a set of candidate passages, and a computationally expensive reader that examines the passages to identify the correct answer. Previous works have shown that as the number of retrieved passages increases, so does the performance of the reader. However, they assume all retrieved passages are of equal importance and allocate the same amount of computation to them, leading to a substantial increase in computational cost. To reduce this cost, we propose the use of adaptive computation to control the computational budget allocated for the passages to be read. We first introduce a technique operating on individual passages in isolation which relies on anytime prediction and a per-layer estimation of an early exit probability. We then introduce SkylineBuilder, an approach for dynamically deciding on which passage to allocate computation at each step, based on a resource allocation policy trained via reinforcement learning. Our results on SQuAD-Open show that adaptive computation with global prioritisation improves over several strong static and adaptive methods, leading to a 4.3x reduction in computation while retaining 95% performance of the full model. ","Don't Read Too Much into It: Adaptive Computation for Open-Domain |
Question Answering",5,"['Having scalability issues with your ODQA systems?🆘 Adaptive Computation can help! We find that adaptive computation and global scheduling can reduce computation by 4.3x while retaining 95% of the accuracy on SQuAD-Open!🚀 #EMNLP2020 <LINK> [1/5] <LINK>', 'For ODQA systems, the computational bottleneck often lies in the number of layers that the document reader has to process the passages through. We introduce an adaptive computation and global scheduling strategy for learning to allocate computation across multiple passages! [2/5]', 'The global scheduling strategy is further enhanced with reinforcement learning. Our experiments show that both global scheduling and RL training are essential for improving our adaptive computation method. [3/5] https://t.co/Mf2UDlUHYZ', 'Our further analysis demonstrates that the proposed method can focus its computation on passages that contain the answer, and its scheduling policy learns an exploration-exploitation trade-off. [4/5] https://t.co/hviv4pu6ql', 'With my amazing co-authors @PMinervini, Pontus, @riedelcastro at @ucl_nlp - paper: https://t.co/PlrFrCBdRG, slides https://t.co/Z7jIIa0EKV. Join us at #EMNLP2020, Gather Session 2J (10:00am UTC, Nov 17)! [5/5]']",20,11,1134 |
272,27,1408461856532930561,268203365,gerald pao,"Our new paper is a 1st attempt to download brains from organisms into computers with using whole brain electrophysiology & embedding activity into a network of manifolds to reproduce realistic behavioral as well as neuronal dynamics in generative mode. <LINK> <LINK> @_rdgao Thanks Richard. Scale free embedding! @S_Sigfusson Also larval Zebrafish and human visual pathways @snpc_404 real data has noise both from observation and process noise so my general approach is to keep it simple. Using the geodesic might be better but I generally try to do as little as possible to the data to prevent overfitting. Therefore I use the Sugihara minimalist approach. @snpc_404 I am not opposed to anything in principle as long as it works better. Observatoins are not smooth so for it to be you either need to interpolate or fit with ODEs which is something that requires you to know the underlying variables that might be missing. @snpc_404 So we use a combination of real time series of observed neurons and delays thereof as formulated in the generalized Takens theorem, where you can combine real observables with ""placeholder"" unknowns that show up as delays in the embedding. <LINK> @snpc_404 In summary Riemannian manifolds would be nice because they would be nice but I don't know if they would work better with real world data. But I am open to is if it works. Thank you for your comment and question",https://arxiv.org/abs/2106.10627,"We propose an algorithm grounded in dynamical systems theory that generalizes manifold learning from a global state representation, to a network of local interacting manifolds termed a Generative Manifold Network (GMN). Manifolds are discovered using the convergent cross mapping (CCM) causal inference algorithm which are then compressed into a reduced redundancy network. The representation is a network of manifolds embedded from observational data where each orthogonal axis of a local manifold is an embedding of a individually identifiable neuron or brain area that has exact correspondence in the real world. As such these can be experimentally manipulated to test hypotheses derived from theory and data analysis. Here we demonstrate that this representation preserves the essential features of the brain of flies,larval zebrafish and humans. In addition to accurate near-term prediction, the GMN model can be used to synthesize realistic time series of whole brain neuronal activity and locomotion viewed over the long term. Thus, as a final validation of how well GMN captures essential dynamic information, we show that the artificially generated time series can be used as a training set to predict out-of-sample observed fly locomotion, as well as brain activity in out of sample withheld data not used in model building. Remarkably, the artificially generated time series show realistic novel behaviors that do not exist in the training data, but that do exist in the out-of-sample observational data. This suggests that GMN captures inherently emergent properties of the network. We suggest our approach may be a generic recipe for mapping time series observations of any complex nonlinear network into a model that is able to generate naturalistic system behaviors that identifies variables that have real world correspondence and can be experimentally manipulated. 
",Experimentally testable whole brain manifolds that recapitulate behavior,7,"['Our new paper is a 1st attempt to download brains from organisms into computers with using whole brain electrophysiology & embedding activity into a network of manifolds to reproduce realistic behavioral as well as neuronal dynamics in generative mode.\n<LINK> <LINK>', '@_rdgao Thanks Richard. Scale free embedding!', '@S_Sigfusson Also larval Zebrafish and human visual pathways', '@snpc_404 real data has noise both from observation and process noise so my general approach is to keep it simple. Using the geodesic might be better but I generally try to do as little as possible to the data to prevent overfitting. Therefore I use the Sugihara minimalist approach.', '@snpc_404 I am not opposed to anything in principle as long as it works better. Observatoins are not smooth so for it to be you either need to interpolate or fit with ODEs which is something that requires you to know the underlying variables that might be missing.', '@snpc_404 So we use a combination of real time series of observed neurons and delays thereof as formulated in the generalized Takens theorem, where you can combine real observables with ""placeholder"" unknowns that show up as delays in the embedding.\nhttps://t.co/rMyP03Ax2J', ""@snpc_404 In summary Riemannian manifolds would be nice because they would be nice but I don't know if they would work better with real world data. But I am open to is if it works. Thank you for your comment and question""]",21,06,1400 |
273,85,1059503172270673921,157014702,Jessie Shelton,"New paper today about footprints of dark matter self-interactions in our solar system: <LINK> Let's put the fine print up front: this mechanism for discovering dark matter self-interactions only works if dark matter ALSO talks to nuclear matter at rates that would lead to detectable signals soon-ish. 2/N #darkmatter that does interact with nuclei can scatter inside the sun or earth, and if it loses enough energy in the collision, it will then be gravitationally bound. In this way the sun and earth can build up dark matter balls in their cores 3/N If #darkmatter can scatter off itself too, then in the sun that is just one more way that the bound population grow. But the Earth is tiny and its gravitational potential well is shallow. In the earth turning on self-interactions EJECTS previously-captured dark matter 4/N What Cristian Gaidau and I showed in today's paper is that comparing the size of the populations of #darkmatter bound to the sun and the earth is a sensitive diagnostic of dark matter self-interactions 5/N By sensitive, I mean: the impact of self-interactions can be very large (multiple orders of magnitude), in particular for astrophysically interesting cross-sections 6/6 @davemckeen Thanks, Dave! Cristian did some really great work on this project.",https://arxiv.org/abs/1811.00557,"Dark matter (DM) self-interactions affect the gravitational capture of DM in the Sun and Earth differently as a simple consequence of the differing kinematics of collisions within the two potential wells: the dominant effect of self-interactions in the Sun is to provide an additional channel for capture, while the dominant effect in the Earth is to eject previously captured DM. We point out that this simple observation can be used to deduce the existence of DM self-interactions by comparing the annihilation rates of DM gravitationally bound within the Sun and Earth. We compute the Sun and Earth annihilation fluxes for DM with spin-independent nuclear cross-sections and thermal annihilation cross-sections and demonstrate that, for cross-sections allowed by direct detection, self-interactions can easily suppress the expected Earth flux by multiple orders of magnitude. This suppression is potentially significant even for self-interaction cross-sections orders of magnitude below the Bullet Cluster bounds, making this solar system comparison a leading test of dark matter self-interactions. Additionally, we consider thermalization of the captured DM population with the nuclei of the capturing body in some detail, accounting for both nuclear and self-interactions, and point out some consequential and broadly applicable considerations. ",A Solar System Test of Self-Interacting Dark Matter,7,"['New paper today about footprints of dark matter self-interactions in our solar system: <LINK>', ""Let's put the fine print up front: this mechanism for discovering dark matter self-interactions only works if dark matter ALSO talks to nuclear matter at rates that would lead to detectable signals soon-ish. 2/N"", '#darkmatter that does interact with nuclei can scatter inside the sun or earth, and if it loses enough energy in the collision, it will then be gravitationally bound. In this way the sun and earth can build up dark matter balls in their cores 3/N', 'If #darkmatter can scatter off itself too, then in the sun that is just one more way that the bound population grow. But the Earth is tiny and its gravitational potential well is shallow. 
In the earth turning on self-interactions EJECTS previously-captured dark matter 4/N', ""What Cristian Gaidau and I showed in today's paper is that comparing the size of the populations of #darkmatter bound to the sun and the earth is a sensitive diagnostic of dark matter self-interactions 5/N"", 'By sensitive, I mean: the impact of self-interactions can be very large (multiple orders of magnitude), in particular for astrophysically interesting cross-sections\n 6/6', '@davemckeen Thanks, Dave! Cristian did some really great work on this project.']",18,11,1279 |
274,63,1238392964226965504,3021399517,Jean-Baptiste Mouret,New paper: We combine embeddings and meta-learning to adapt dynamical models of robots learned initially in simulation. This allows robots to adapt to novel situations in a few minutes and cross the reality gap (@sim2realAIorg). <LINK> [with @RR_Kaushik ] <LINK> Full video: <LINK> @jrieffel I am late on the announcements... I am trying to catch up.,https://arxiv.org/abs/2003.04663,"Meta-learning algorithms can accelerate the model-based reinforcement learning (MBRL) algorithms by finding an initial set of parameters for the dynamical model such that the model can be trained to match the actual dynamics of the system with only a few data-points. However, in the real world, a robot might encounter any situation starting from motor failures to finding itself in a rocky terrain where the dynamics of the robot can be significantly different from one another. In this paper, first, we show that when meta-training situations (the prior situations) have such diverse dynamics, using a single set of meta-trained parameters as a starting point still requires a large number of observations from the real system to learn a useful model of the dynamics. Second, we propose an algorithm called FAMLE that mitigates this limitation by meta-training several initial starting points (i.e., initial parameters) for training the model and allows the robot to select the most suitable starting point to adapt the model to the current situation with only a few gradient steps. We compare FAMLE to MBRL, MBRL with a meta-trained model with MAML, and model-free policy search algorithm PPO for various simulated and real robotic tasks, and show that FAMLE allows the robots to adapt to novel damages in significantly fewer time-steps than the baselines. ","Fast Online Adaptation in Robotics through Meta-Learning Embeddings of |
Simulated Priors",3,"['New paper: We combine embeddings and meta-learning to adapt dynamical models of robots learned initially in simulation. This allows robots to adapt to novel situations in a few minutes and cross the reality gap (@sim2realAIorg). <LINK> [with @RR_Kaushik ] <LINK>', 'Full video: https://t.co/1rezhLjCGv', '@jrieffel I am late on the announcements... I am trying to catch up.']",20,03,350 |
275,195,1314363840256188417,1044298030512570368,Anna Rogers 🇪🇺🇺🇦,"New paper📜: What Can We Do to Improve Peer Review in NLP? <LINK> with @IAugenstein TLDR: In its current form, peer review is a poorly defined task with apples-to-oranges comparisons and unrealistic expectations. /1 <LINK> Reviewers resort to heuristics such as reject-if-not-SOTA to cope with uncertainty, so the only way to change that is to reduce uncertainty. Which is at least partly doable: better paper-reviewer matching, unambiguous eval criteria, fine-grained tracks, better review forms etc /2 Which criteria and forms, exactly? Each field has to find out for itself, through iterative development and experiments. Except that in NLP such work would be hard to publish, so there are no incentives to do it - and no mechanisms to test and compare any solutions. /3 We mostly rant about peer review *after* acceptance notifications come out, but what do we expect to change without systematic work on improving its quality *between* conferences? /4 <LINK> This is not to say that conference organizers are doing a bad job. But each conference makes a unique set of choices, and we have no way to tell to systematically compare and decide which policies should be kept. Also, this would be a lot of extra work. /5 Actionable steps: - talk about peer review a lot more, build up prestige of the topic and incentives to work on it - create new ACL roles to think about systematic testing/implementation of peer review policies, and feedback mechanism between organizers, authors and reviewers /6 Many thanks to the awesome #NLProc community. This paper would have been very different without insights of @emilymbender @yoavgo @complingy @SeeTedTalk @ani_nenkova @astent @karinv and many, many others. /7 To make this paper truly meta, it will come out in Findings of EMNLP. /8",https://arxiv.org/abs/2010.03863,"Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that a part of the problem is that the reviewers and area chairs face a poorly defined task forcing apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community. ",What Can We Do to Improve Peer Review in NLP?,8,"['New paper📜: What Can We Do to Improve Peer Review in NLP? \n<LINK>\nwith @IAugenstein \n\nTLDR: In its current form, peer review is a poorly defined task with apples-to-oranges comparisons and unrealistic expectations. /1 <LINK>', 'Reviewers resort to heuristics such as reject-if-not-SOTA to cope with uncertainty, so the only way to change that is to reduce uncertainty. Which is at least partly doable: better paper-reviewer matching, unambiguous eval criteria, fine-grained tracks, better review forms etc /2', 'Which criteria and forms, exactly? Each field has to find out for itself, through iterative development and experiments. Except that in NLP such work would be hard to publish, so there are no incentives to do it - and no mechanisms to test and compare any solutions. /3', 'We mostly rant about peer review *after* acceptance notifications come out, but what do we expect to change without systematic work on improving its quality *between* conferences? /4 https://t.co/m6TGRLOSVI', 'This is not to say that conference organizers are doing a bad job. 
But each conference makes a unique set of choices, and we have no way to tell to systematically compare and decide which policies should be kept. Also, this would be a lot of extra work. /5', 'Actionable steps:\n- talk about peer review a lot more, build up prestige of the topic and incentives to work on it\n- create new ACL roles to think about systematic testing/implementation of peer review policies, and feedback mechanism between organizers, authors and reviewers /6', 'Many thanks to the awesome #NLProc community. This paper would have been very different without insights of @emilymbender @yoavgo @complingy @SeeTedTalk @ani_nenkova @astent @karinv and many, many others. /7', 'To make this paper truly meta, it will come out in Findings of EMNLP. /8']",20,10,1781 |
276,107,1144400300335423488,46153507,dr. Jordy Davelaar,"New paper! We improved our Black Hole Accretion Code (BHAC) that allows us to keep divergence of magnetic fields close to zero! It turned out to be the key for our latest high-resolution simulations of M87! First author: Hector Olivares @BlackHoleCam <LINK> <LINK> Title: Constrained transport and adaptive mesh refinement in the Black Hole Accretion Code Author list: Hector Olivares, Oliver Porth, Jordy Davelaar, Elias R. Most, Christian M. Fromm, Yosuke Mizuno, Ziri Younsi, and Luciano Rezzolla @StefanPWinc @BlackHoleCam thanks!",https://arxiv.org/abs/1906.10795,"Worldwide very long baseline radio interferometry arrays are expected to obtain horizon-scale images of supermassive black hole candidates as well as of relativistic jets in several nearby active galactic nuclei. This motivates the development of models for magnetohydrodynamic flows in strong gravitational fields. The Black Hole Accretion Code (BHAC) intends to aid with the modelling of such sources by means of general relativistic magnetohydrodynamical (GRMHD) simulations in arbitrary stationary spacetimes. New additions were required to guarantee an accurate evolution of the magnetic field when small and large scales are captured simultaneously. We discuss the adaptive mesh refinement (AMR) techniques employed in BHAC, essential to keep several problems computationally tractable, as well as staggered-mesh-based constrained transport (CT) algorithms to preserve the divergence-free constraint of the magnetic field, including a general class of prolongation operators for face-allocated variables compatible with them. Through several standard tests, we show that the choice of divergence-control method can produce qualitative differences in simulations of scientifically relevant accretion problems. We demonstrate the ability of AMR to reduce the computational costs of accretion simulations while sufficiently resolving turbulence from the magnetorotational instability. In particular, we describe a simulation of an accreting Kerr black hole in Cartesian coordinates using AMR to follow the propagation of a relativistic jet while self-consistently including the jet engine, a problem set up-for which the new AMR implementation is particularly advantageous. The CT methods and AMR strategies discussed here are being employed in the simulations performed with BHAC used in the generation of theoretical models for the Event Horizon Telescope Collaboration. ","Constrained transport and adaptive mesh refinement in the Black Hole |
Accretion Code",3,"['New paper! We improved our Black Hole Accretion Code (BHAC) that allows us to keep divergence of magnetic fields close to zero! It turned out to be the key for our latest high-resolution simulations of M87! \n\nFirst author: Hector Olivares @BlackHoleCam <LINK> <LINK>', 'Title: Constrained transport and adaptive mesh refinement in the Black Hole Accretion Code\nAuthor list: Hector Olivares, Oliver Porth, Jordy Davelaar, Elias R. Most, Christian M. Fromm, Yosuke Mizuno, Ziri Younsi, and Luciano Rezzolla', '@StefanPWinc @BlackHoleCam thanks!']",19,06,535 |
277,111,1468275897476255745,1310552063999438849,Hauke Group,"Understanding the exotic phases of gauge theories is crucial to facilitate their quantum simulation. In <LINK> we map out the ground-state phase diagram of quantum link electrodynamics in (2+1)-d. A new paper by T. Hashizume, @JCHalimeh @PhilippHauke, D. Banerjee <LINK>",https://arxiv.org/abs/2112.00756,"The exploration of phase diagrams of strongly interacting gauge theories coupled to matter in lower dimensions promises the identification of exotic phases and possible new universality classes, and it facilitates a better understanding of salient phenomena in Nature, such as confinement or high-temperature superconductivity. The emerging new techniques of quantum synthetic matter experiments as well as efficient classical computational methods with matrix product states have been extremely successful in one spatial dimension, and are now motivating such studies in two spatial dimensions. In this work, we consider a $\mathrm{U}(1)$ quantum link lattice gauge theory where the gauge fields, represented by spin-$\frac{1}{2}$ operators are coupled to a single flavor of staggered fermions. Using matrix product states on infinite cylinders with increasing diameter, we conjecture its phase diagram in $(2+1)$-d. This model allows us to smoothly tune between the $\mathrm{U}(1)$ quantum link and the quantum dimer models by adjusting the strength of the fermion mass term, enabling us to connect to the well-studied phases of those models. Our study reveals a rich phase diagram with exotic phases and interesting phase transitions to a potential liquid-like phase. It thus furthers the collection of gauge theory models that may guide future quantum-simulation experiments. ",Ground-state phase diagram of quantum link electrodynamics in $(2+1)$-d,1,"['Understanding the exotic phases of gauge theories is crucial to facilitate their quantum simulation. In <LINK> we map out the ground-state phase diagram of quantum link electrodynamics in (2+1)-d. A new paper by T. Hashizume, @JCHalimeh @PhilippHauke, D. Banerjee <LINK>']",21,12,270 |
278,36,1120595746376560641,719928410814955520,Evgenii Zheltonozhskii,"Our new paper, ""Towards Learning of Filter-Level Heterogeneous Compression of CNNs"": <LINK> tl;dr: we played with differentiable NAS for network compression (quantization and pruning). It turned out to be tricky, unstable and still requires tons of resources. <LINK>",https://arxiv.org/abs/1904.09872,"Recently, deep learning has become a de facto standard in machine learning with convolutional neural networks (CNNs) demonstrating spectacular success on a wide variety of tasks. However, CNNs are typically very demanding computationally at inference time. One of the ways to alleviate this burden on certain hardware platforms is quantization relying on the use of low-precision arithmetic representation for the weights and the activations. Another popular method is the pruning of the number of filters in each layer. While mainstream deep learning methods train the neural networks weights while keeping the network architecture fixed, the emerging neural architecture search (NAS) techniques make the latter also amenable to training. In this paper, we formulate optimal arithmetic bit length allocation and neural network pruning as a NAS problem, searching for the configurations satisfying a computational complexity budget while maximizing the accuracy. We use a differentiable search method based on the continuous relaxation of the search space proposed by Liu et al. (arXiv:1806.09055). We show, by grid search, that heterogeneous quantized networks suffer from a high variance which renders the benefit of the search questionable. For pruning, improvement over homogeneous cases is possible, but it is still challenging to find those configurations with the proposed method. The code is publicly available at this https URL and this https URL ","Towards Learning of Filter-Level Heterogeneous Compression of |
Convolutional Neural Networks",1,"['Our new paper, ""Towards Learning of Filter-Level Heterogeneous Compression of CNNs"": <LINK>\ntl;dr: we played with differentiable NAS for network compression (quantization and pruning). It turned out to be tricky, unstable and still requires tons of resources. <LINK>']",19,04,266 |
279,165,1329305378794901504,3275439755,Nicola Branchini,"1/ : I have arXived my first preprint, ""Optimized Auxiliary Particle Filters"": <LINK> . This was done during my MSc at Edi, with Victor Elvira. I really enjoyed learning about SMC, a very fundamental topic. In this work, we propose new ways of choosing 2/ : the proposal. We consider a mixture proposal: Klaas et al (2005) and also Elvira (2019) argued for a mixture proposal in two different ways: through a Rao-Blackwellisation of the APF auxiliary variable (former), or by Multiple Importance Sampling arguments (latter) . 3/ : While more expensive than SMC, these methods which marginalize the previous state offer significant benefits in terms of estimators variance. With the MIS perspective it was possible to devise an analytic choice for the mixture weights. We propose a more flexible mixture 4/ : where the number of kernels is a free parameter, and importantly the weights can be derived as a solution to a convex optimization problem. We prove an unbiased estimate of the marginal likelihood and consistent IS estimators, generalizing the APF estimator (Pitt et al 2012) 5/ : Our way of optimizing the mixture weights is related to the literature on approximate dynamic programming, cost-to-go PFs , but performed in marginal space. Our method effectively allows for optimizing a proposal with general transition/observation densities, but avoiding 6/ : black-box methods, non-convex (e.g. black-box VI) techniques. We show consistently improved performance across several state-space models , comparing to the IAPF (Elvira et al 2018,2019) , BPF and APF. I really enjoyed working with Victor . The first day I met him, 7/ : he dedicated >2 hours talking about his work and slowly explaining foundational concepts and ideas to me . Over the MSc, he always treated me as a peer and let me follow the path I was more interested into , while being very supportive and present. @victorelvira",http://arxiv.org/abs/2011.09317,"Auxiliary particle filters (APFs) are a class of sequential Monte Carlo (SMC) methods for Bayesian inference in state-space models. In their original derivation, APFs operate in an extended state space using an auxiliary variable to improve inference. In this work, we propose optimized auxiliary particle filters, a framework where the traditional APF auxiliary variables are interpreted as weights in an importance sampling mixture proposal. Under this interpretation, we devise a mechanism for proposing the mixture weights that is inspired by recent advances in multiple and adaptive importance sampling. In particular, we propose to select the mixture weights by formulating a convex optimization problem, with the aim of approximating the filtering posterior at each timestep. Further, we propose a weighting scheme that generalizes previous results on the APF (Pitt et al. 2012), proving unbiasedness and consistency of our estimators. Our framework demonstrates significantly improved estimates on a range of metrics compared to state-of-the-art particle filters at similar computational complexity in challenging and widely used dynamical models. ","Optimized Auxiliary Particle Filters: adapting mixture proposals via |
convex optimization",8,"['1/ : I have arXived my first preprint, ""Optimized Auxiliary Particle Filters"": <LINK> . \nThis was done during my MSc at Edi, with Victor Elvira. \nI really enjoyed learning about SMC, a very fundamental topic. In this work, we propose new ways of choosing', '2/ : the proposal. We consider a mixture proposal: Klaas et al (2005) and also Elvira (2019) argued for a mixture proposal in two different ways: through a Rao-Blackwellisation of the APF auxiliary variable (former), or by Multiple Importance Sampling arguments (latter) .', '3/ : While more expensive than SMC, these methods which marginalize the previous state offer significant benefits in terms of estimators variance. With the MIS perspective it was possible to devise an analytic choice for the mixture weights. We propose a more flexible mixture', '4/ : where the number of kernels is a free parameter, and importantly the weights can be derived as a solution to a convex optimization problem. We prove an unbiased estimate of the marginal likelihood and consistent IS estimators, generalizing the APF estimator (Pitt et al 2012)', '5/ : Our way of optimizing the mixture weights is related to the literature on approximate dynamic programming, cost-to-go PFs , but performed in marginal space. Our method effectively allows for optimizing a proposal with general transition/observation densities, but avoiding', '6/ : black-box methods, non-convex (e.g. black-box VI) techniques. We show consistently improved performance across several state-space models , comparing to the IAPF (Elvira et al 2018,2019) , BPF and APF.\nI really enjoyed working with Victor . The first day I met him,', '7/ : he dedicated >2 hours talking about his work and slowly explaining foundational concepts and ideas to me . Over the MSc, he always treated me as a peer and let me follow the path I was more interested into , while being very supportive and present.', '@victorelvira']",20,11,1903 |
280,17,1377484517858975747,2577596593,Chelsea Finn,"How can robots generalize to new environments & tasks? We find that using in-the-wild videos of people can allow learned reward functions to do so! Paper: <LINK> Led by @_anniechen_, @SurajNair_1 🧵(1/5) <LINK> To get reward functions that generalize, we train domain-agnostic video discriminators (DVD) with: * a lot of diverse human data, and * a narrow & small amount of robot demos The idea is super simple: predict if two videos are performing the same task or not. (2/5) <LINK> This discriminator can be used as a reward by feeding in a human video of the desired task and a video of the robot’s behavior. We use it by planning with a learned visual dynamics model. (3/5) <LINK> Does using human videos improve reward generalization compared to using only narrow robot data? We see: * 20% greater task success in new environments * 25% greater task success on new tasks both in simulation and on a real robot. (4/5) <LINK> For more, check out: Paper: <LINK> Website: <LINK> Summary video: <LINK> I'm quite excited about how reusing broad datasets can help robots generalize, and this project has been a great indication in that direction! (5/5) <LINK>",https://arxiv.org/abs/2103.16817,"We are motivated by the goal of generalist robots that can complete a wide range of tasks across many environments. Critical to this is the robot's ability to acquire some metric of task success or reward, which is necessary for reinforcement learning, planning, or knowing when to ask for help. For a general-purpose robot operating in the real world, this reward function must also be able to generalize broadly across environments, tasks, and objects, while depending only on on-board sensor observations (e.g. RGB images). While deep learning on large and diverse datasets has shown promise as a path towards such generalization in computer vision and natural language, collecting high quality datasets of robotic interaction at scale remains an open challenge. In contrast, ""in-the-wild"" videos of humans (e.g. YouTube) contain an extensive collection of people doing interesting tasks across a diverse range of settings. In this work, we propose a simple approach, Domain-agnostic Video Discriminator (DVD), that learns multitask reward functions by training a discriminator to classify whether two videos are performing the same task, and can generalize by virtue of learning from a small amount of robot data with a broad dataset of human videos. We find that by leveraging diverse human datasets, this reward function (a) can generalize zero shot to unseen environments, (b) generalize zero shot to unseen tasks, and (c) can be combined with visual model predictive control to solve robotic manipulation tasks on a real WidowX200 robot in an unseen environment from a single human demo. ","Learning Generalizable Robotic Reward Functions from ""In-The-Wild"" Human |
Videos",5,"['How can robots generalize to new environments & tasks?\n\nWe find that using in-the-wild videos of people can allow learned reward functions to do so!\nPaper: <LINK>\n\nLed by @_anniechen_, @SurajNair_1\n🧵(1/5) <LINK>', 'To get reward functions that generalize, we train domain-agnostic video discriminators (DVD) with:\n* a lot of diverse human data, and\n* a narrow & small amount of robot demos\n\nThe idea is super simple: predict if two videos are performing the same task or not.\n(2/5) https://t.co/4sRhfThzkI', 'This discriminator can be used as a reward by feeding in a human video of the desired task and a video of the robot’s behavior.\n\nWe use it by planning with a learned visual dynamics model.\n(3/5) https://t.co/yQBtzwlmNi', 'Does using human videos improve reward generalization compared to using only narrow robot data?\n\nWe see:\n* 20% greater task success in new environments\n* 25% greater task success on new tasks\nboth in simulation and on a real robot.\n\n(4/5) https://t.co/S0xfHCmh3F', ""For more, check out:\nPaper: https://t.co/afz2PWw0rT\nWebsite: https://t.co/geRH3tmgTe\nSummary video: https://t.co/wg2C1lEsBG\n\nI'm quite excited about how reusing broad datasets can help robots generalize, and this project has been a great indication in that direction!\n\n(5/5) https://t.co/yVPtIjSUAp""]",21,03,1156 |
281,41,984699828671340545,487990723,Gianfranco Bertone,"New paper today on the arXiv: ""Probing the nature of dark matter particles with stellar streams"" with @D_ITP postdoc Nil Banik, @jobovy and Nassim Bozorgnia <LINK> <LINK> A key prediction of the standard ""Lambda-Cold-Dark-Matter"" cosmological model is the existence of a large number of dark matter substructures on sub-galactic scales This can be tested by studying the perturbations induced by dark matter substructures on cold stellar streams. We studied the prospects for discriminating cold from warm dark matter with upcoming astronomical surveys such as the Large Synoptic Survey Telescope @LSST We argue that this method will set stringent constraints on the mass of dark matter particles, and possibly to yield an actual measurement if it is in the O(1) keV range",https://arxiv.org/abs/1804.04384,"A key prediction of the standard cosmological model -- which relies on the assumption that dark matter is cold, i.e. non-relativistic at the epoch of structure formation -- is the existence of a large number of dark matter substructures on sub-galactic scales. This assumption can be tested by studying the perturbations induced by dark matter substructures on cold stellar streams. Here, we study the prospects for discriminating cold from warm dark matter by generating mock data for upcoming astronomical surveys such as the Large Synoptic Survey Telescope (LSST), and reconstructing the properties of the dark matter particle from the perturbations induced on the stellar density profile of a stream. We discuss the statistical and systematic uncertainties, and show that the method should allow to set stringent constraints on the mass of thermal dark matter relics, and possibly to yield an actual measurement of the dark matter particle mass if it is in the $\mathcal{O}(1)$ keV range. ",Probing the nature of dark matter particles with stellar streams,4,"['New paper today on the arXiv: ""Probing the nature of dark matter\nparticles with stellar streams"" with @D_ITP postdoc Nil Banik, @jobovy and Nassim Bozorgnia <LINK> <LINK>', 'A key prediction of the standard ""Lambda-Cold-Dark-Matter"" cosmological model is the existence of a large number of dark matter substructures on sub-galactic scales', 'This can be tested by studying the perturbations induced by dark matter substructures on cold stellar streams. We studied the prospects for discriminating cold from warm dark matter with upcoming astronomical surveys such as the Large Synoptic Survey Telescope @LSST', 'We argue that this method will set stringent constraints on the mass of dark matter particles, and possibly to yield an actual measurement if it is in the O(1) keV range']",18,04,772 |
282,82,994621565974335489,289347679,Dr. Axiel Yaël Birenbaum,"Something funny happens when you put high T_C superconductor monolayer FeSe on SrTiO3: a van der Waals sandwich forms with an emerging extra layer of TiOx in the role of salami! Check my new paper out: <LINK> #RealLifeScientist research done @ORNL @VanderbiltU First author is @greatconcavity (sorry for the late addition, I forgot you had a Twitter account) Check it out, and learn about the impact it has on the band structures. Perhaps you can help with elucidating how it impacts the #superconductivity? <LINK>",https://arxiv.org/abs/1805.03293,"The sensitive dependence of monolayer materials on their environment often gives rise to unexpected properties. It was recently demonstrated that monolayer FeSe on a SrTiO$_3$ substrate exhibits a much higher superconducting critical temperature T$_C$ than the bulk material. Here, we examine the interfacial structure of FeSe / SrTiO$_3$ and the effect of an interfacial Ti$_{1+x}$O$_2$ layer on the increased T$_C$ using a combination of scanning transmission electron microscopy and density functional theory. We find Ti$_{1+x}$O$_2$ forms its own quasi-two-dimensional layer, bonding to both the substrate and the FeSe film by van der Waals interactions. The excess Ti in this layer electron-dopes the FeSe monolayer in agreement with experimental observations. Moreover, the interfacial layer introduces symmetry-breaking distortions in the FeSe film that favor a T$_C$ increase. These results suggest that this common substrate may be functionalized to modify the electronic structure of a variety of thin films and monolayers. ","Intrinsic interfacial van der Waals monolayers and their effect on the |
high-temperature superconductor FeSe/SrTiO$_3$",3,"['Something funny happens when you put high T_C superconductor monolayer FeSe on SrTiO3: a van der Waals sandwich forms with an emerging extra layer of TiOx in the role of salami!\n\nCheck my new paper out: <LINK> #RealLifeScientist research done @ORNL @VanderbiltU', 'First author is @greatconcavity (sorry for the late addition, I forgot you had a Twitter account)', 'Check it out, and learn about the impact it has on the band structures. Perhaps you can help with elucidating how it impacts the #superconductivity? https://t.co/tWk7awN8MN']",18,05,514 |
283,107,1424907889261514767,1041578714,Benjamin Pope,"Excited to see Dan Hey's awesome new paper on arXiv, about searching for transiting planets around δ Scuti stars in Kepler! Expected to be the last of his PhD at @sifa_astro before he moves to @UHIfA as a postdoc. <LINK> <LINK> δ Scuti stars are a kind of hot star that pulsates with short periods (<< 1 day) with acoustic oscillations. It is hard to see their planets, even though we really would like to know more about planet populations around hot stars, because they get swamped with pulsation noise. Dan and co (inc @benmontet, Tim Bedding, Simon Murphy, and myself) iteratively subtracted sinusoids from Kepler δ Scuti stars to get precise prewhitened light curves to search for planets - finding just a handful of new marginal candidates, despite sensitive searches. The method is powerful and we hope to apply it to TESS and PLATO next - shallow wide surveys are biased toward hot stars so there should be plenty to look at! Meanwhile as a by-product, we have I think the largest sample of δ Scuti and γ Dor pulsation periods. Been playing around with embeddings for these - @davidwhogg? <LINK>",https://arxiv.org/abs/2108.03785,"We search for transits around all known pulsating {\delta} Sct variables (6500 K < Teff < 10 000 K) in the long-cadence Kepler data after subtracting the pulsation signal through an automated routine. To achieve this, we devise a simple and computationally inexpensive method for distinguishing between low-frequency pulsations and transits in light curves. We find 3 new candidate transit events that were previously hidden behind the pulsations, but caution that they are likely to be false positive events. We also examined the Kepler Objects of Interest catalog and identify 13 additional host stars which show {\delta} Sct pulsations. For each star in our sample, we use the non-detection of pulsation timing variations for a planet that is known to be transiting a {\delta} Sct variable to obtain both an upper limit on the mass of the planet and the expected radial velocity semi-amplitude of the host star. Simple injection tests of our pipeline imply 100% recovery for planets of 0.5 RJup or greater. Extrapolating our number of Kepler {\delta} Sct stars, we expect 12 detectable planets above 0.5 RJup in TESS. Our sample contains some of the hottest known transiting planets around evolved stars, and is the first complete sample of transits around {\delta} Sct variables. We make available our code and pulsation-subtracted light curves to facilitate further analysis. ",A search for transits among the {\delta} Scuti variables in Kepler,5,"[""Excited to see Dan Hey's awesome new paper on arXiv, about searching for transiting planets around δ Scuti stars in Kepler! Expected to be the last of his PhD at @sifa_astro before he moves to @UHIfA as a postdoc.\n\n<LINK> <LINK>"", 'δ Scuti stars are a kind of hot star that pulsates with short periods (<< 1 day) with acoustic oscillations. 
It is hard to see their planets, even though we really would like to know more about planet populations around hot stars, because they get swamped with pulsation noise.', 'Dan and co (inc @benmontet, Tim Bedding, Simon Murphy, and myself) iteratively subtracted sinusoids from Kepler δ Scuti stars to get precise prewhitened light curves to search for planets - finding just a handful of new marginal candidates, despite sensitive searches.', 'The method is powerful and we hope to apply it to TESS and PLATO next - shallow wide surveys are biased toward hot stars so there should be plenty to look at!', 'Meanwhile as a by-product, we have I think the largest sample of δ Scuti and γ Dor pulsation periods. Been playing around with embeddings for these - @davidwhogg? https://t.co/oVv4hc5hwb']",21,08,1109 |
284,93,1480746665690632194,1196266674954985472,Nirmal Raj,"1/n New paper with @HostertMatheus, @davemckeen, and Maxim Pospelov. <LINK> ""Dark sectors in neutron-shining-through-a-wall and nuclear absorption signals"" Thread follows. <LINK> 2/n Particle species w/ the same quantum numbers can ""mix"", e.g. you can emit a photon and measure it elsewhere as a Z boson. We explored how to discover feeble quantum mixings b/w the neutron and a hypothetical ""dark neutron"", a species that could resolve many puzzles in Nature. 3/n Quantum mixing lets you ""shine neutrons through a wall"". Throw a neutron at a wall, and it cd detect it as a dark neutron and let it thru, which cd then regenerate as a neutron on the other side. See attached cartoon where a batter faces a neutron bowler through a wall. <LINK> 4/n We show that this process can be exploited at IsoDAR, an imminent experiment consisting of a very intense proton beam paired to a very large detector, to be situated deep underground at (most likely) Yemilab, South Korea. 5/n While the original design of IsoDAR is to do important physics with neutrinos, the shielding of the beam target cd be a ""wall"" thru which beam-produced neutrons could shine & then show up in the detector. Thus, for free, IsoDAR will be a world-leading hunter of dark neutrons. <LINK> 6/n Next, dark neutrons could make up the #darkmatter in our galaxy (and the universe beyond). They could then slip through the Earth's layers, touch underground detectors and convert to neutrons -- promptly eaten up by detector nuclei with flashes of light spilling over. <LINK> 7/n (And that can be used to constrain even more feeble mixings between the neutron and dark neutron.) 8/n Finally, we explored a few other promising ways to find dark neutrons -- how to reinterpret searches for ultracold neutrons disappearing from their traps, how to catch dark neutrons at spallation sources, and so on.",https://arxiv.org/abs/2201.02603,"We propose new searches for $n^\prime$, a dark baryon that can mix with the Standard Model neutron. We show that IsoDAR, a proposal to place an intense cyclotron near a large-volume neutrino detector deep underground, can look for $n\to n^\prime \to n$ transitions with much lower backgrounds than surface experiments. This neutron-shining-through-a-wall search would be possible without any modifications to the experiment and would provide the strongest laboratory constraints on the $n$-$n^\prime$ mixing for a wide range of mass splittings. We also consider dark neutrons as dark matter and show that their nuclear absorption at deep-underground detectors such as SNO and Borexino places some of the strongest limits in parameter space. Finally, we describe other $n^\prime$ signatures, such as neutrons shining through walls at spallation sources, reactors, and the disappearance of ultracold neutrons. ","Dark sectors in neutron-shining-through-a-wall and nuclear absorption |
signals",8,"['1/n New paper with @HostertMatheus, @davemckeen, and Maxim Pospelov.\n<LINK>\n""Dark sectors in neutron-shining-through-a-wall and nuclear absorption signals""\nThread follows. <LINK>', '2/n Particle species w/ the same quantum numbers can ""mix"", e.g. you can emit a photon and measure it elsewhere as a Z boson. We explored how to discover feeble quantum mixings b/w the neutron and a hypothetical ""dark neutron"", a species that could resolve many puzzles in Nature.', '3/n Quantum mixing lets you ""shine neutrons through a wall"". Throw a neutron at a wall, and it cd detect it as a dark neutron and let it thru, which cd then regenerate as a neutron on the other side. See attached cartoon where a batter faces a neutron bowler through a wall. https://t.co/jXO1aESFlh', '4/n We show that this process can be exploited at IsoDAR, an imminent experiment consisting of a very intense proton beam paired to a very large detector, to be situated deep underground at (most likely) Yemilab, South Korea.', '5/n While the original design of IsoDAR is to do important physics with neutrinos, the shielding of the beam target cd be a ""wall"" thru which beam-produced neutrons could shine & then show up in the detector. Thus, for free, IsoDAR will be a world-leading hunter of dark neutrons. https://t.co/7IHiXtG3RQ', ""6/n Next, dark neutrons could make up the #darkmatter in our galaxy (and the universe beyond). They could then slip through the Earth's layers, touch underground detectors and convert to neutrons -- promptly eaten up by detector nuclei with flashes of light spilling over. https://t.co/Lqh66Ceodq"", '7/n (And that can be used to constrain even more feeble mixings between the neutron and dark neutron.)', '8/n Finally, we explored a few other promising ways to find dark neutrons -- how to reinterpret searches for ultracold neutrons disappearing from their traps, how to catch dark neutrons at spallation sources, and so on.']",22,01,1858 |
285,44,1142052468320342017,2303004390,Brandon Amos,"Excited to share my new tech report from my @IntelAI internship on the Limited Multi-Label projection layer! Joint work with Vladlen Koltun and @zicokolter Paper: <LINK> @PyTorch Code: <LINK> <LINK> We start by motivating our work with other projections in machine learning and reviewing that the ReLU, sigmoid, and softmax layers are just explicit closed-form solutions to convex and constrained optimization problems that project into polytopes. <LINK> We then propose that projecting onto another polytope, that we call LML polytope, is useful for learning in top-k settings. It doesn't have an explicit closed-form solution but we show that solving and differentiating through this projection operation is easy and tractable. <LINK> Now you can maximize the top-k recall with the LML layer by just posing it as a maximum likelihood problem over the labels that you observe *without* worrying about your model collapsing <LINK> We add the LML layer with a few lines of code to existing code for top-k CIFAR-100 classification and scene graph generation and recover or surpass the accuracy of the state-of-the-art models. <LINK> Notably, we also revive and revisit the truncated top-k entropy loss from Lapin et al. as another reasonable baseline for top-k classification that Berrada, Zisserman, and Kumar did not consider and show how it can be extended to multi-label settings for scene graph generation",https://arxiv.org/abs/1906.08707,"We propose the Limited Multi-Label (LML) projection layer as a new primitive operation for end-to-end learning systems. The LML layer provides a probabilistic way of modeling multi-label predictions limited to having exactly k labels. We derive efficient forward and backward passes for this layer and show how the layer can be used to optimize the top-k recall for multi-label tasks with incomplete label information. We evaluate LML layers on top-k CIFAR-100 classification and scene graph generation. We show that LML layers add a negligible amount of computational overhead, strictly improve the model's representational capacity, and improve accuracy. We also revisit the truncated top-k entropy method as a competitive baseline for top-k classification. ",The Limited Multi-Label Projection Layer,6,"['Excited to share my new tech report from my @IntelAI internship on the Limited Multi-Label projection layer! Joint work with Vladlen Koltun and @zicokolter\n\nPaper: <LINK>\n@PyTorch Code: <LINK> <LINK>', 'We start by motivating our work with other projections in machine learning and reviewing that the ReLU, sigmoid, and softmax layers are just explicit closed-form solutions to convex and constrained optimization problems that project into polytopes. https://t.co/NRxZt27wWp', ""We then propose that projecting onto another polytope, that we call LML polytope, is useful for learning in top-k settings. It doesn't have an explicit closed-form solution but we show that solving and differentiating through this projection operation is easy and tractable. https://t.co/wR4Rju9WyK"", 'Now you can maximize the top-k recall with the LML layer by just posing it as a maximum likelihood problem over the labels that you observe *without* worrying about your model collapsing https://t.co/MPREGGW9Ga', 'We add the LML layer with a few lines of code to existing code for top-k CIFAR-100 classification and scene graph generation and recover or surpass the accuracy of the state-of-the-art models. 
https://t.co/ykDDP26uWj', 'Notably, we also revive and revisit the truncated top-k entropy loss from Lapin et al. as another reasonable baseline for top-k classification that Berrada, Zisserman, and Kumar did not consider and show how it can be extended to multi-label settings for scene graph generation']",19,06,1406 |
286,36,1397600373372772357,219545939,Zoe Zontou,New science alert! Everyone should check out my awesome co-supervisor's (@astro_pwise) paper on SNe Ia rates & delay times in @theDESurvey ! So excited to be a part of this (my first co-authored paper!!!) with the rest of our stellar group @UoS_SNe! ✨ <LINK>,https://arxiv.org/abs/2105.11954,"We use a sample of 809 photometrically classified type Ia supernovae (SNe Ia) discovered by the Dark Energy Survey (DES) along with 40415 field galaxies to calculate the rate of SNe Ia per galaxy in the redshift range $0.2 < z <0.6$. We recover the known correlation between SN Ia rate and galaxy stellar mass across a broad range of scales $8.5 \leq \log(M_*/\mathrm{M}_{\odot}) \leq 11.25$. We find that the SN Ia rate increases with stellar mass as a power-law with index $0.63 \pm 0.02$, which is consistent with previous work. We use an empirical model of stellar mass assembly to estimate the average star-formation histories (SFHs) of galaxies across the stellar mass range of our measurement. Combining the modelled SFHs with the SN Ia rates to estimate constraints on the SN Ia delay time distribution (DTD), we find the data are fit well by a power-law DTD with slope index $\beta = -1.13 \pm 0.05$ and normalisation $A = 2.11 \pm0.05 \times 10^{-13}~\mathrm{SNe}~{\mathrm{M}_{\odot}}^{-1}~\mathrm{yr}^{-1}$, which corresponds to an overall SN Ia production efficiency $N_{\mathrm{Ia}}/M_* = 0.9~_{-0.7}^{+4.0} \times 10^{-3}~\mathrm{SNe}~\mathrm{M}_{\odot}^{-1}$. Upon splitting the SN sample by properties of the light curves, we find a strong dependence on DTD slope with the SN decline rate, with slower-declining SNe exhibiting a steeper DTD slope. We interpret this as a result of a relationship between intrinsic luminosity and progenitor age, and explore the implications of the result in the context of SN Ia progenitors. ",Rates and delay times of type Ia supernovae in the Dark Energy Survey,1,"[""New science alert! Everyone should check out my awesome co-supervisor's (@astro_pwise) paper on SNe Ia rates & delay times in @theDESurvey ! So excited to be a part of this (my first co-authored paper!!!) with the rest of our stellar group @UoS_SNe! ✨\n\n<LINK>""]",21,05,258 |
287,133,1178547420726206464,307880202,Blai Vidiella 🎗,"If we want to study the ecosystems, we should use all the possible knowledge available. Discrete systems (ie insects) are a source of interesting and rich dynamics (#chaos & 👻) that can be confused by #randomness. I am really fascinated by our teamwork! <LINK> <LINK>",https://arxiv.org/abs/1909.12501,"Ecological systems are complex dynamical systems. Modelling efforts on ecosystems' dynamical stability have revealed that population dynamics, being highly nonlinear, can be governed by complex fluctuations. Indeed, experimental and field research has provided mounting evidence of chaos in species' abundances, especially for discrete-time systems. Discrete-time dynamics, mainly arising in boreal and temperate ecosystems for species with non-overlapping generations, have been largely studied to understand the dynamical outcomes due to changes in relevant ecological parameters. The local and global dynamical behaviour of many of these models is difficult to investigate analytically in the parameter space and, typically, numerical approaches are employed when the dimension of the phase space is large. In this article we provide topological and dynamical results for a map modelling a discrete-time, three-species food chain with two predator species interacting on the same prey population. The domain where dynamics live is characterized, as well as the so-called escaping regions, for which the species go rapidly to extinction after surpassing the carrying capacity. We also provide a full description of the local stability of equilibria within a volume of the parameter space given by the prey's growth rate and the predation rates. We have found that the increase of the pressure of predators on the prey results in chaos. The entry into chaos is achieved via a supercritical Neimarck-Sacker bifurcation followed by period-doubling bifurcations of invariant curves. Interestingly, an increasing predation directly on preys can shift the extinction of top predators to their survival, allowing an unstable persistence of the three species by means of periodic and strange chaotic attractors. ","Dynamics in a time-discrete food-chain model with strong pressure on |
preys",1,"['If we want to study the ecosystems, we should use all the possible knowledge available. Discrete systems (ie insects) are a source of interesting and rich dynamics (#chaos & 👻) that can be confused by #randomness. I am really fascinated by our teamwork! <LINK> <LINK>']",19,09,267 |
288,146,1493632604981317634,57294647,Arindam Khan,"In guillotine strip packing, a set of rectangles needs to be packed into a unit-width strip of minimum height s.t. each packed rectangle can be separated out using a sequence of guillotine (end-to-end) cuts. Our new paper on arxiv settles the problem! <LINK> <LINK>",https://arxiv.org/abs/2202.05989,"In the Strip Packing problem (SP), we are given a vertical half-strip $[0,W]\times[0,\infty)$ and a set of $n$ axis-aligned rectangles of width at most $W$. The goal is to find a non-overlapping packing of all rectangles into the strip such that the height of the packing is minimized. A well-studied and frequently used practical constraint is to allow only those packings that are guillotine separable, i.e., every rectangle in the packing can be obtained by recursively applying a sequence of edge-to-edge axis-parallel cuts (guillotine cuts) that do not intersect any item of the solution. In this paper, we study approximation algorithms for the Guillotine Strip Packing problem (GSP), i.e., the Strip Packing problem where we require additionally that the packing needs to be guillotine separable. This problem generalizes the classical Bin Packing problem and also makespan minimization on identical machines, and thus it is already strongly NP-hard. Moreover, due to a reduction from the Partition problem, it is NP-hard to obtain a polynomial-time $(3/2-\varepsilon)$-approximation algorithm for GSP for any $\varepsilon>0$ (exactly as Strip Packing). We provide a matching polynomial time $(3/2+\varepsilon)$-approximation algorithm for GSP. Furthermore, we present a pseudo-polynomial time $(1+\varepsilon)$-approximation algorithm for GSP. This is surprising as it is NP-hard to obtain a $(5/4-\varepsilon)$-approximation algorithm for (general) Strip Packing in pseudo-polynomial time. Thus, our results essentially settle the approximability of GSP for both the polynomial and the pseudo-polynomial settings. ","Tight Approximation Algorithms for Two Dimensional Guillotine Strip |
Packing",1,"['In guillotine strip packing, a set of rectangles needs to be packed into a unit-width strip of minimum height s.t. each packed rectangle can be separated out using a sequence of guillotine (end-to-end) cuts. \n\nOur new paper on arxiv settles the problem!\n\n<LINK> <LINK>']",22,02,266 |
289,31,1408342557663318020,1061375141274439685,Xabier Cid Vidal 🛰️,"New paper! On <LINK> with @CharlesTheVS and many others we study the LHCb sensitivity to the wonderful baryogenesis model proposed in <LINK>. Spoiler alert: we CAN discover DM at LHCb! The mechanism involves b-hadron decays to a dark sector 👇👇 particle which we can't detect but can look for as ""missing-pT"" in the direction of flight of the hadron. This type of signature appears in other LHCb analyses and we know we can do them pretty well. I'm especially happy for Saul Lopez, to whom I proposed doing his Master thesis working on this and who will now get his work (hopefully) published. If you want to know more, I recommend you attend the #offshell2021 conference, where we will present our work.",https://arxiv.org/abs/2106.12870,"A model that can simultaneously explain Dark Matter relic density and the apparent matter anti-matter imbalance of the universe has been recently proposed. The model requires $b$-hadron branching fractions to Dark Matter at the per mille level. The $b$-hadrons decay to a dark sector baryon, $\psi_{\rm{DS}}$, which has a mass in the region $940$ MeV/c$^{2} \leq m(\psi_{\rm{DS}}) \leq 4430$ MeV/c$^{2}$. In this paper, we discuss the sensitivity of the LHCb experiment to search for this dark baryon, covering different types of topology and giving prospects for Runs 3 and 4 of the LHC, as well as for the proposed Phase-II Upgrade. We show that the LHCb experiment can cover the entire mass range of the hypothetical dark baryon. ","Prospects on searches for baryonic Dark Matter produced in $b$-hadron |
decays at LHCb",3,"['New paper! On <LINK> with @CharlesTheVS and many others we study the LHCb sensitivity to the wonderful baryogenesis model proposed in <LINK>. Spoiler alert: we CAN discover DM at LHCb! The mechanism involves b-hadron decays to a dark sector 👇👇', 'particle which we can't detect but can look for as ""missing-pT"" in the direction of flight of the hadron. This type of signature appears in other LHCb analyses and we know we can do them pretty well. I'm especially happy for Saul Lopez,', 'to whom I proposed doing his Master thesis working on this and who will now get his work (hopefully) published. If you want to know more, I recommend you attend the #offshell2021 conference, where we will present our work.']",21,06,702 |
290,65,1295653270720061440,261865146,Dr Sofia Qvarfort,New paper on the arXiv! <LINK> We compute the fundamental sensitivity for measurements of time-dependent gravitational fields with a nonlinear quantum optomechanical system. Applications include measurements of small oscillating masses and gravitational waves. 🌊 <LINK>,https://arxiv.org/abs/2008.06507,"We study the fundamental sensitivity that can be achieved with an ideal optomechanical system in the nonlinear regime for measurements of time-dependent gravitational fields. Using recently developed methods to solve the dynamics of a nonlinear optomechanical system with a time-dependent Hamiltonian, we compute the quantum Fisher information for linear displacements of the mechanical element due to gravity. We demonstrate that the sensitivity can not only be further enhanced by injecting squeezed states of the cavity field, but also by modulating the light--matter coupling of the optomechanical system. We specifically apply our results to the measurement of gravitational fields from small oscillating masses, where we show that, in principle, the gravitational field of an oscillating nano-gram mass can be detected based on experimental parameters that will likely be accessible in the near-term future. Finally, we identify the experimental parameter regime necessary for gravitational wave detection with a quantum optomechanical sensor. ","Optimal estimation of time-dependent gravitational fields with quantum |
optomechanical systems",1,['New paper on the arXiv! <LINK> We compute the fundamental sensitivity for measurements of time-dependent gravitational fields with a nonlinear quantum optomechanical system. Applications include measurements of small oscillating masses and gravitational waves. 🌊 <LINK>'],20,08,269 |
291,14,1222942113425313792,2411222281,Dr./Prof. Meredith MacGregor,"New paper alert! <LINK> - We present new millimeter flares detected from the M dwarf AU Mic with ALMA, and speculate about the origins of this previously unknown emission by placing these new detections in context with similar flares from Proxima Centauri.",https://arxiv.org/abs/2001.10546,"We report on two millimeter flares detected by ALMA at 220 GHz from AU Mic, a nearby M dwarf. The larger flare had a duration of only $\sim35$ sec, with peak $L_{R}=2\times10^{15}$ erg s$^{-1}$ Hz$^{-1}$, and lower limit on linear polarization of $|Q/I|>0.12\pm0.04$. We examine the characteristics common to these new AU Mic events and those from Proxima Cen previously reported in MacGregor et al. (2018) - namely short durations, negative spectral indices, and significant linear polarization - to provide new diagnostics of conditions in outer stellar atmospheres and details of stellar flare particle acceleration. The event rates ($\sim20$ and $4$ events day$^{-1}$ for AU Mic and Proxima Cen, respectively) suggest that millimeter flares occur commonly but have been undetected until now. Analysis of the flare observing frequency and consideration of possible incoherent emission mechanisms confirms the presence of MeV electrons in the stellar atmosphere occurring as part of the flare process. The spectral indices point to a hard distribution of electrons. The short durations and lack of pronounced exponential decay in the light curve are consistent with formation in a simple magnetic loop, with radio emission predominating from directly precipitating electrons. We consider the possibility of both synchrotron and gyrosynchrotron emission mechanisms, although synchrotron is favored given the linear polarization signal. This would imply that the emission must be occurring in a low density environment of only modest magnetic field strength. A deeper understanding of this newly discovered and apparently common stellar flare mechanism awaits more observations with better-studied flare components at other wavelengths. ",Properties of M Dwarf Flares at Millimeter Wavelengths,1,"['New paper alert! <LINK> - We present new millimeter flares detected from the M dwarf AU Mic with ALMA, and speculate about the origins of this previously unknown emission by placing these new detections in context with similar flares from Proxima Centauri.']",20,01,256 |
292,55,1395543016018952194,36438289,Dr. Patrícia,"My new paper is on Arxiv: <LINK> published in MNRAS: <LINK> It's about another Milky Way morphological twin: NGC 2442. It has a low-luminosity Compton-thick AGN! #AGN #astronomy #astrophysics #MNRAS Here is a nice pic of the nucleus: <LINK> There is a star-forming ring around the nuclear region and at the center you can see an arched emission that is actually the wall of the ionization cone of the AGN.",http://arxiv.org/abs/2105.09420,"The detailed study of nuclear regions of galaxies is important because it can help understanding the active galactic nucleus (AGN) feedback mechanisms, the connections between the nuclei and their host galaxies, and ultimately the galaxy formation processes. We present the analysis of an optical data cube of the central region of the galaxy NGC 2442, obtained with the integral field unit (IFU) of the Gemini Multi-Object Spectrograph (GMOS). We also performed a multiwavelength analysis, with Chandra data, XMM--Newton and NuSTAR spectra, and Hubble Space Telescope (HST) images. The analysis revealed that the nuclear emission is consistent with a Low Ionization Nuclear Emission-line Region (LINER) associated with a highly obscured compact hard X-ray source, indicating a Compton-thick AGN. The HST image in the F658N filter (H$\alpha$) reveals an arched structure corresponding to the walls of the ionization cone of the AGN. The gas kinematic pattern and the high gas velocity dispersion values in the same region of the ionization cone suggest an outflow emission. The stellar archaeology results indicate the presence of only old stellar populations ($\sim$ 10 Gyr), with high metallicity (z = 0.02 and 0.05), and the absence of recent star formation in the central region of NGC 2442, which is possibly a consequence of the AGN feedback, associated with the detected outflow, shutting off star formation. NGC 2442 is a late-type galaxy similar to the Milky Way, and comparisons show that the main difference between them is the presence of a low-luminosity AGN. ",The nuclear environment of NGC 2442: a Compton-thick low-luminosity AGN,2,"[""My new paper is on Arxiv:\n\n<LINK>\n\npublished in MNRAS: <LINK>\n\nIt's about another Milky Way morphological twin: NGC 2442. It has a low-luminosity Compton-thick AGN! \n\n#AGN #astronomy #astrophysics #MNRAS \n\nHere is a nice pic of the nucleus: <LINK>"", 'There is a star-forming ring around the nuclear region and at the center you can see an arched emission that is actually the wall of the ionization cone of the AGN.']",21,05,405 |
293,209,1410773778913763333,1283150444,Maurizio Pierini,"One year ago, we started a project on detecting unexpected #gravitationalwaves sources with #autoencoders. It took less to perform&release the following study on the #FPGA demonstrator of the model than to finish writing the original paper 😱 <LINK> In this study, carried on by our friends at @imperialcollege, a recurrent AE trained to detect GW signals was accelerated on FPGA up to a latency < 1 microsec. This could be used for real-time trigger at @LIGO and @ego_virgo But this hardware demonstrator study also shows a preview of the paper that will appear soon: the #LSTM AE, even compressed down to 16-bits precision, outperform the other architectures on detecting the signal with unsupervised structure. Details on this will appear soon. <LINK>",https://arxiv.org/abs/2106.14089,"This paper presents novel reconfigurable architectures for reducing the latency of recurrent neural networks (RNNs) that are used for detecting gravitational waves. Gravitational interferometers such as the LIGO detectors capture cosmic events such as black hole mergers which happen at unknown times and of varying durations, producing time-series data. We have developed a new architecture capable of accelerating RNN inference for analyzing time-series data from LIGO detectors. This architecture is based on optimizing the initiation intervals (II) in a multi-layer LSTM (Long Short-Term Memory) network, by identifying appropriate reuse factors for each layer. A customizable template for this architecture has been designed, which enables the generation of low-latency FPGA designs with efficient resource utilization using high-level synthesis tools. The proposed approach has been evaluated based on two LSTM models, targeting a ZYNQ 7045 FPGA and a U250 FPGA. Experimental results show that with balanced II, the number of DSPs can be reduced up to 42% while achieving the same IIs. When compared to other FPGA-based LSTM designs, our design can achieve about 4.92 to 12.4 times lower latency. ","Accelerating Recurrent Neural Networks for Gravitational Wave |
Experiments",3,"['One year ago, we started a project on detecting unexpected #gravitationalwaves sources with #autoencoders. It took less to perform&release the following study on the #FPGA demonstrator of the model than to finish writing the original paper 😱\n<LINK>', 'In this study, carried on by our friends at @imperialcollege, a recurrent AE trained to detect GW signals was accelerated on FPGA up to a latency < 1 microsec. This could be used for real-time trigger at @LIGO and @ego_virgo', 'But this hardware demonstrator study also shows a preview of the paper that will appear soon: the #LSTM AE, even compressed down to 16-bits precision, outperform the other architectures on detecting the signal with unsupervised structure. Details on this will appear soon. https://t.co/jec8CUCQkh']",21,06,756 |
294,110,1391957647348310017,2569631268,Daniel Huber,"Check out our latest paper on the arxiv tonight, led by @UHIfA graduate student Jingwen Zhang, presenting a new misaligned multiplanet system with a long-period Jovian perturber discovered using @keckobservatory. The #Kepler mission keeps on giving! <LINK>",https://arxiv.org/abs/2105.03446,"We present the discovery of Kepler-129 d ($P_{d}=7.2^{+0.4}_{-0.3}$ yr, $m\sin i_{d}=8.3^{+1.1}_{-0.7}\ \rm M_{Jup}$, $ e_{d}=0.15^{+0.07}_{-0.05} $) based on six years of radial velocity (RV) observations from Keck/HIRES. Kepler-129 also hosts two transiting sub-Neptunes: Kepler-129 b ($P_{b}=15.79$ days, $r_{b}=2.40\pm{0.04}\ \rm{R_{\oplus}}$) and Kepler-129 c ($P_{c}=82.20$ days, $r_{c}=2.52\pm{0.07}\ \rm{R_{\oplus}}$) for which we measure masses of $m_{b}<20\ \rm{M_{\oplus}}$ and $m_{c}=43^{+13}_{-12}\ \rm{M_{\oplus}}$. Kepler-129 is an hierarchical system consisting of two tightly-packed inner planets and an external companion whose mass is close to the deuterium burning limit. In such a system, two inner planets precess around the orbital normal of the outer companion, causing their inclinations to oscillate with time. Based on an asteroseismic analysis of Kepler data, we find tentative evidence that Kepler-129 b and c are misaligned with stellar spin axis by $\gtrsim 38$ deg, which could be torqued by Kepler-129 d if it is inclined by $\gtrsim 19$ deg relative to inner planets. Using N-body simulations, we provide additional constraints on the mutual inclination between Kepler-129 d and inner planets by estimating the fraction of time during which two inner planets both transit. The probability that two planets both transit decreases as their misalignment with Kepler-129 d increases. We also find a more massive Kepler-129 c enables the two inner planets to become strongly coupled and more resistant to perturbations from Kepler-129 d. The unusually high mass of Kepler-129 c provides a valuable benchmark for both planetary dynamics and interior structure, since the best-fit mass is consistent with this $\rm{2.5\ R_{\oplus}}$ planet having a rocky surface. ","Long Period Jovian Tilts the Orbits of Two sub-Neptunes Relative to |
Stellar Spin Axis in Kepler-129",1,"['Check out our latest paper on the arxiv tonight, led by @UHIfA graduate student Jingwen Zhang, presenting a new misaligned multiplanet system with a long-period Jovian perturber discovered using @keckobservatory. The #Kepler mission keeps on giving! <LINK>']",21,05,256 |
295,105,1404543388486094853,147615935,Niko Grupen,"🚨New preprint🚨 Does mutual reward yield fair outcomes for cooperative teams? We find this is not the case! Teams learn capitalistic strategies, achieving high reward by distributing it *unequally* across teammates. Paper: <LINK> #AI #FairAI #RL We connect prediction-based fairness to multi-agent learning and introduce Fairness through Equivariance (Fair-E) -- a method that ensures fair outcomes for multi-agent teams through equivariant policy learning. We also introduce Fairness through Equivariance Regularization (Fair-ER) as a soft-constraint version of equivariant policy learning and show that it allows us to modulate between fairness and utility. <LINK>",https://arxiv.org/abs/2106.05727,"We study fairness through the lens of cooperative multi-agent learning. Our work is motivated by empirical evidence that naive maximization of team reward yields unfair outcomes for individual team members. To address fairness in multi-agent contexts, we introduce team fairness, a group-based fairness measure for multi-agent learning. We then prove that it is possible to enforce team fairness during policy optimization by transforming the team's joint policy into an equivariant map. We refer to our multi-agent learning strategy as Fairness through Equivariance (Fair-E) and demonstrate its effectiveness empirically. We then introduce Fairness through Equivariance Regularization (Fair-ER) as a soft-constraint version of Fair-E and show that it reaches higher levels of utility than Fair-E and fairer outcomes than non-equivariant policies. Finally, we present novel findings regarding the fairness-utility trade-off in multi-agent settings; showing that the magnitude of the trade-off is dependent on agent skill. ",Cooperative Multi-Agent Fairness and Equivariant Policies,3,"['🚨New preprint🚨\n\nDoes mutual reward yield fair outcomes for cooperative teams? We find this is not the case! Teams learn capitalistic strategies, achieving high reward by distributing it *unequally* across teammates.\n\nPaper: <LINK>\n\n#AI #FairAI #RL', 'We connect prediction-based fairness to multi-agent learning and introduce Fairness through Equivariance (Fair-E) -- a method that ensures fair outcomes for multi-agent teams through equivariant policy learning.', 'We also introduce Fairness through Equivariance Regularization (Fair-ER) as a soft-constraint version of equivariant policy learning and show that it allows us to modulate between fairness and utility. https://t.co/MpyBho8v8m']",21,06,665 |
296,91,1326705261113958402,2427184074,Christopher Berry,"Congratulations to @ChaseBKimball on his new paper <LINK> In this we ask if the black holes we see with @LIGO & @ego_virgo could be made of smaller black holes? The answer is *definitely* maybe <LINK> We know that black holes merge together to form bigger black holes. Could one of these merger remnants get a new partner and merge again? Potentially, if somewhere like a globular cluster or a nuclear star cluster. How could we tell? <LINK> Second-generation black holes formed from mergers should be more massive and have distinctive spins. But since we don't know the distribution of first-generation black holes, they can be difficult to spot. We have to fit both generations at the same time <LINK> Accounting for the possibility of second-generation black holes is especially important if you want to reconstruct the first-generation black hole properties (say to find if there is a maximum mass) otherwise you'll pollute your results and you could come to false conclusions When we infer the properties of the black hole distribution, we see evidence for some of the heavier black holes, like #GW190521 and GW190519, being merger remnants! *But* this depends upon some important assumptions <LINK> Our analysis assumes that all binaries are formed in identical clusters. If we assume a cluster with a low escape velocity, second-generation systems are unlikely, but for a high escape velocity (a few hundred km/s) they are almost certain! <LINK> The assumption that all binaries come from identical clusters is a simplification. We really need to consider a distribution and add in non-cluster formation. This is difficult, but we think our results are exciting enough to show it will be worth the work @ChaseBKimball will be taking a good nap before starting on the *next* paper though @gravitysydney @ChaseBKimball @LIGO @ego_virgo @ColmMTalbot @spacedontwait @EHThrane @chionatan @MattCarney106 @TomD_Santiago @hannahmidd8 @daniel_williams 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 🐢 <LINK>",https://arxiv.org/abs/2011.05332,"We study the population properties of merging binary black holes in the second LIGO--Virgo Gravitational-Wave Transient Catalog assuming they were all formed dynamically in gravitationally bound clusters. Using a phenomenological population model, we infer the mass and spin distribution of first-generation black holes, while self-consistently accounting for hierarchical mergers. Considering a range of cluster masses, we see compelling evidence for hierarchical mergers in clusters with escape velocities $\gtrsim 100~\mathrm{km\,s^{-1}}$. For our most probable cluster mass, we find that the catalog contains at least one second-generation merger with $99\%$ credibility. We find that the hierarchical model is preferred over an alternative model with no hierarchical mergers (Bayes factor $\mathcal{B} > 1400$) and that GW190521 is favored to contain two second-generation black holes with odds $\mathcal{O}>700$, and GW190519, GW190602, GW190620, and GW190706 are mixed-generation binaries with $\mathcal{O} > 10$. However, our results depend strongly on the cluster escape velocity, with more modest evidence for hierarchical mergers when the escape velocity is $\lesssim 100~\mathrm{km\,s^{-1}}$. 
Assuming that all binary black holes are formed dynamically in globular clusters with escape velocities on the order of tens of $\mathrm{km\,s^{-1}}$, GW190519 and GW190521 are favored to include a second-generation black hole with odds $\mathcal{O}>1$. In this case, we find that $99\%$ of black holes from the inferred total population have masses that are less than $49\,M_{\odot}$, and that this constraint is robust to our choice of prior on the maximum black hole mass. ","Evidence for hierarchical black hole mergers in the second LIGO--Virgo |
gravitational-wave catalog",9,"['Congratulations to @ChaseBKimball on his new paper <LINK>\nIn this we ask if the black holes we see with @LIGO & @ego_virgo could be made of smaller black holes? The answer is *definitely* maybe <LINK>', 'We know that black holes merge together to form bigger black holes. Could one of these merger remnants get a new partner and merge again? Potentially, if somewhere like a globular cluster or a nuclear star cluster. How could we tell? https://t.co/squreUR1mb', ""Second-generation black holes formed from mergers should be more massive and have distinctive spins. But since we don't know the distribution of first-generation black holes, they can be difficult to spot. We have to fit both generations at the same time https://t.co/gIy8XZ8G7n"", ""Accounting for the possibility of second-generation black holes is especially important if you want to reconstruct the first-generation black hole properties (say to find if there is a maximum mass) otherwise you'll pollute your results and you could come to false conclusions"", 'When we infer the properties of the black hole distribution, we see evidence for some of the heavier black holes, like #GW190521 and GW190519, being merger remnants! *But* this depends upon some important assumptions https://t.co/zBfOBxfe0z', 'Our analysis assumes that all binaries are formed in identical clusters. If we assume a cluster with a low escape velocity, second-generation systems are unlikely, but for a high escape velocity (a few hundred km/s) they are almost certain! https://t.co/otgK3Xqcmc', 'The assumption that all binaries come from identical clusters is a simplification. We really need to consider a distribution and add in non-cluster formation. This is difficult, but we think our results are exciting enough to show it will be worth the work', '@ChaseBKimball will be taking a good nap before starting on the *next* paper though', '@gravitysydney @ChaseBKimball @LIGO @ego_virgo @ColmMTalbot @spacedontwait @EHThrane @chionatan @MattCarney106 @TomD_Santiago @hannahmidd8 @daniel_williams 🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\n🐢\nhttps://t.co/Q3Xf5xYOVT']",20,11,2126 |
297,189,1492214859655569408,348159742,Blair Bilodeau,"Is it possible to efficiently identify the optimal intervention while remaining agnostic to assumptions about the causal structure? In new work with Linbo Wang and @roydanroy, we study adapting to the presence of a d-separator using multi-armed bandits. <LINK> <LINK> Causal assumptions provide guarantees to more efficiently identify the optimal intervention. However, it can be expensive or impossible to verify such assumptions. Ideal algorithms should do as well as if they knew whether the assumptions hold, without requiring this knowledge. We provide a new algorithm that (a) achieves optimal regret when given access to a d-separator, beating algos like UCB, and (b) significantly improves on causal bandit algos when no d-separator is observed. We require no prior knowledge of whether a d-separator is observed. We prove that when d-separation is incorrectly assumed, existing causal bandit algos can incur linear regret (worst possible). We also prove optimal adaptivity is *impossible*: no algorithm can enjoy the benefits of causal structure while never paying for it in the worst-case. Our new algorithm (HAC-UCB) always gets sublinear regret, and in certain settings where existing causal bandit algos get linear regret, we prove HAC-UCB only incurs sqrt(T) regret. We also see this improvement in simulations. <LINK> Our adaptivity is wrt a novel condition for bandits. It reduces to observing a d-separator when actions are all do interventions, and is implied by the front-door criterion without the null intervention. Next steps: apply our framework for adapting to other causal structures. This is a growing area of research that has generated lots of exciting work in the last few years. I’ll end by plugging next week’s workshop @SimonsInstitute, with speakers who will be discussing many related ideas. <LINK> @aminkarbasi @roydanroy Thanks!",https://arxiv.org/abs/2202.05100,"Multi-armed bandit problems provide a framework to identify the optimal intervention over a sequence of repeated experiments. Without additional assumptions, minimax optimal performance (measured by cumulative regret) is well-understood. With access to additional observed variables that d-separate the intervention from the outcome (i.e., they are a d-separator), recent causal bandit algorithms provably incur less regret. However, in practice it is desirable to be agnostic to whether observed variables are a d-separator. Ideally, an algorithm should be adaptive; that is, perform nearly as well as an algorithm with oracle knowledge of the presence or absence of a d-separator. In this work, we formalize and study this notion of adaptivity, and provide a novel algorithm that simultaneously achieves (a) optimal regret when a d-separator is observed, improving on classical minimax algorithms, and (b) significantly smaller regret than recent causal bandit algorithms when the observed variables are not a d-separator. Crucially, our algorithm does not require any oracle knowledge of whether a d-separator is observed. We also generalize this adaptivity to other conditions, such as the front-door criterion. 
",Adaptively Exploiting d-Separators with Causal Bandits,8,"['Is it possible to efficiently identify the optimal intervention while remaining agnostic to assumptions about the causal structure?\n\nIn new work with Linbo Wang and @roydanroy, we study adapting to the presence of a d-separator using multi-armed bandits.\n\n<LINK> <LINK>', 'Causal assumptions provide guarantees to more efficiently identify the optimal intervention. However, it can be expensive or impossible to verify such assumptions. Ideal algorithms should do as well as if they knew whether the assumptions hold, without requiring this knowledge.', 'We provide a new algorithm that (a) achieves optimal regret when given access to a d-separator, beating algos like UCB, and (b) significantly improves on causal bandit algos when no d-separator is observed. We require no prior knowledge of whether a d-separator is observed.', 'We prove that when d-separation is incorrectly assumed, existing causal bandit algos can incur linear regret (worst possible). We also prove optimal adaptivity is *impossible*: no algorithm can enjoy the benefits of causal structure while never paying for it in the worst-case.', 'Our new algorithm (HAC-UCB) always gets sublinear regret, and in certain settings where existing causal bandit algos get linear regret, we prove HAC-UCB only incurs sqrt(T) regret. We also see this improvement in simulations. https://t.co/xyYBeDLRQL', 'Our adaptivity is wrt a novel condition for bandits. It reduces to observing a d-separator when actions are all do interventions, and is implied by the front-door criterion without the null intervention. Next steps: apply our framework for adapting to other causal structures.', 'This is a growing area of research that has generated lots of exciting work in the last few years. I’ll end by plugging next week’s workshop @SimonsInstitute, with speakers who will be discussing many related ideas.\nhttps://t.co/B1SzXNSJ4v', '@aminkarbasi @roydanroy Thanks!']",22,02,1864 |
298,120,1303528950589603840,1067301932610478081,Ayaka Usui,Our another new arXiv paper: Bayesian parameter estimation using Gaussian states and measurements - <LINK> This provides a comprehensive investigation of Bayesian parameter estimation with single-mode Gaussian states and suitable Gaussian measurements.,https://arxiv.org/abs/2009.03709,"Bayesian analysis is a framework for parameter estimation that applies even in uncertainty regimes where the commonly used local (frequentist) analysis based on the Cram\'er-Rao bound is not well defined. In particular, it applies when no initial information about the parameter value is available, e.g., when few measurements are performed. Here, we consider three paradigmatic estimation schemes in continuous-variable quantum metrology (estimation of displacements, phases, and squeezing strengths) and analyse them from the Bayesian perspective. For each of these scenarios, we investigate the precision achievable with single-mode Gaussian states under homodyne and heterodyne detection. This allows us to identify Bayesian estimation strategies that combine good performance with the potential for straightforward experimental realization in terms of Gaussian states and measurements. Our results provide practical solutions for reaching uncertainties where local estimation techniques apply, thus bridging the gap to regimes where asymptotically optimal strategies can be employed. ",Bayesian parameter estimation using Gaussian states and measurements,1,['Our another new arXiv paper: Bayesian parameter estimation using Gaussian states and measurements - <LINK>\n\nThis provides a comprehensive investigation of Bayesian parameter estimation with single-mode Gaussian states and suitable Gaussian measurements.'],20,09,252 |
299,139,1369024530732187650,725121185952976896,Ilenna Jones,"In our previous paper, we show that a neuron model with dendrites can do interesting machine learning problems. But is this still the case if we add biological constraints? Check out our new paper “Do biological constraints impair dendritic computation?” <LINK> <LINK> We use an ANN with a binary tree structure constraint so as to model dendritic structure. This causes the weight parameters to be analogous to axial resistances between nodal dendritic compartments. What if we constrained them to be non-negative, like resistances? <LINK> Biological dendrites have voltage dependent conductances that are the basis for dendritic nonlinearities. Following deep learning model convention, our previous model uses leaky ReLU nonlinearities. What if we used voltage-gated ion channel nonlinearities instead? <LINK> Synaptic response is conductance-based, gated by ligands from presynaptic vesicles, and relies on the number of receptors which correlates with the size of the postsynaptic bouton. Can we map a single pixel input to have a synaptic response output in our model? <LINK> Here we simulate models of dendritic computation with and without these biological constraints. <LINK> Upon comparing a variety of nonlinearities to our dendrite voltage gated conductance derived (NaCaK) function, we found that the NaCaK function turns out to be a better nonlinearity for dendritic binary tree structures. (NaCaK in grey, interestingly SWISH in green) <LINK> We added the synapse and non-negative weight constraints and found that dendritic model performance on interesting machine learning tasks is not hurt by these constraints, and may even benefit from them. (All constraints in Orange, best seen in top rows) <LINK> The results from our model suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks! <LINK> Lastly and very importantly, here's our code: <LINK>",https://arxiv.org/abs/2103.03274,"Computations on the dendritic trees of neurons have important constraints. Voltage dependent conductances in dendrites are not similar to arbitrary direct-current generation, they are the basis for dendritic nonlinearities and they do not allow converting positive currents into negative currents. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these biological constraints. Here we simulate models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by these constraints but may benefit from them. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks. ",Do biological constraints impair dendritic computation?,9,"['In our previous paper, we show that a neuron model with dendrites can do interesting machine learning problems. But is this still the case if we add biological constraints? Check out our new paper “Do biological constraints impair dendritic computation?” <LINK> <LINK>', 'We use an ANN with a binary tree structure constraint so as to model dendritic structure. This causes the weight parameters to be analogous to axial resistances between nodal dendritic compartments. What if we constrained them to be non-negative, like resistances? 
https://t.co/DD5BsflPgN', 'Biological dendrites have voltage dependent conductances that are the basis for dendritic nonlinearities. Following deep learning model convention, our previous model uses leaky ReLU nonlinearities. What if we used voltage-gated ion channel nonlinearities instead? https://t.co/UfQh5RFcAI', 'Synaptic response is conductance-based, gated by ligands from presynaptic vesicles, and relies on the number of receptors which correlates with the size of the postsynaptic bouton. Can we map a single pixel input to have a synaptic response output in our model? https://t.co/d6hq6Jtg7q', 'Here we simulate models of dendritic computation with and without these biological constraints. https://t.co/Ko7a5WaZ4R', 'Upon comparing a variety of nonlinearities to our dendrite voltage gated conductance derived (NaCaK) function, we found that the NaCaK function turns out to be a better nonlinearity for dendritic binary tree structures. (NaCaK in grey, interestingly SWISH in green) https://t.co/sYj4HZ5ZPy', 'We added the synapse and non-negative weight constraints and found that dendritic model performance on interesting machine learning tasks is not hurt by these constraints, and may even benefit from them. (All constraints in Orange, best seen in top rows) https://t.co/crRAC9FPJn', 'The results from our model suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks! https://t.co/qjKGWLYfhq', ""Lastly and very importantly, here's our code: https://t.co/1pNal3Ccu2""]",21,03,1905 |
300,99,1457648288585424896,962876421268914177,Hanlin Ren,"<LINK> A new paper with Ran Duan! We show that in an undirected graph, we can design an oracle that maintains 𝒆𝒙𝒂𝒄𝒕 distances under any number of edge failures. For d edge failures, our distance oracle has space complexity O(dn^4) and query time d^{O(d)}. This is the first fault-tolerant distance oracle that supports an arbitrary number of failures and maintains exact distances. Previously, there are good oracles that maintain (1+eps)-approximate distances, but exact distances remained elusive. Indeed, the previous best exact distance oracle needed roughly n^d space (which is barely non-trivial). One drawback: although using only O(dn^4) space, our oracle seems to need n^{O(d)} time to preprocess. Can we improve the preprocessing time?",https://arxiv.org/abs/2111.03360,"We present the first compact distance oracle that tolerates multiple failures and maintains exact distances. Given an undirected weighted graph $G = (V, E)$ and an arbitrarily large constant $d$, we construct an oracle that given vertices $u, v \in V$ and a set of $d$ edge failures $D$, outputs the exact distance between $u$ and $v$ in $G - D$ (that is, $G$ with edges in $D$ removed). Our oracle has space complexity $O(d n^4)$ and query time $d^{O(d)}$. Previously, there were compact approximate distance oracles under multiple failures [Chechik, Cohen, Fiat, and Kaplan, SODA'17; Duan, Gu, and Ren, SODA'21], but the best exact distance oracles under $d$ failures require essentially $\Omega(n^d)$ space [Duan and Pettie, SODA'09]. Our distance oracle seems to require $n^{\Omega(d)}$ time to preprocess; we leave it as an open question to improve this preprocessing time. ",Maintaining Exact Distances under Multiple Edge Failures,4,"['<LINK>\nA new paper with Ran Duan! We show that in an undirected graph, we can design an oracle that maintains 𝒆𝒙𝒂𝒄𝒕 distances under any number of edge failures. For d edge failures, our distance oracle has space complexity O(dn^4) and query time d^{O(d)}.', 'This is the first fault-tolerant distance oracle that supports an arbitrary number of failures and maintains exact distances. Previously, there are good oracles that maintain (1+eps)-approximate distances, but exact distances remained elusive.', 'Indeed, the previous best exact distance oracle needed roughly n^d space (which is barely non-trivial).', 'One drawback: although using only O(dn^4) space, our oracle seems to need n^{O(d)} time to preprocess. Can we improve the preprocessing time?']",21,11,745 |
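The row above concerns an exact distance oracle under d edge failures with O(dn^4) space and d^{O(d)} query time. For context on what such an oracle replaces, here is the preprocessing-free baseline that simply reruns Dijkstra on G - D at query time; the adjacency-list format and function name are illustrative.

```python
import heapq

def query_distance(adj, u, v, failed_edges):
    """Naive baseline for a fault-tolerant distance query: run Dijkstra on
    G - D, skipping the failed edges. The oracle in the paper instead answers
    such queries from a precomputed O(d n^4)-space structure in d^{O(d)} time."""
    failed = {frozenset(e) for e in failed_edges}
    dist = {u: 0.0}
    heap = [(0.0, u)]
    while heap:
        d, x = heapq.heappop(heap)
        if x == v:
            return d
        if d > dist.get(x, float("inf")):
            continue
        for y, w in adj.get(x, []):
            if frozenset((x, y)) in failed:
                continue                      # edge removed by the failure set D
            nd = d + w
            if nd < dist.get(y, float("inf")):
                dist[y] = nd
                heapq.heappush(heap, (nd, y))
    return float("inf")                       # v unreachable in G - D

# Undirected toy graph as {node: [(neighbor, weight), ...]}.
adj = {0: [(1, 1.0), (2, 4.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(0, 4.0), (1, 1.0)]}
print(query_distance(adj, 0, 2, failed_edges=[(0, 1)]))  # forced onto the weight-4 edge
```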
301,118,1318440960997519361,164135176,Mohit Shridhar,"(1/4) Can reasoning 💭 in textworld help agents solve tasks in embodied environments 🤖? Checkout our new paper on aligning text and embodied environments. Paper: <LINK> Data & Code: <LINK> <LINK> (2/4) We generate interactive TextWorld games for scenes in the ALFRED dataset. Agents learn abstract policies in TextWorld by leveraging semantic priors. (3/4) Results show that TextWorld training does indeed help with embodied tasks. Our TextWorld-trained agent, BUTLER, generalizes better to new tasks by reasoning in high-level ‘textual’ space rather than through low-level visual representations. (4/4) This was joint work with @ericxyuan, @Cote_Marc, @ybisk, @APTrizzle, and @mhauskn as part of my @MSFTResearch internship.",http://arxiv.org/abs/2010.03768,"Given a simple request like Put a washed apple in the kitchen fridge, humans can reason in purely abstract terms by imagining action sequences and scoring their likelihood of success, prototypicality, and efficiency, all without moving a muscle. Once we see the kitchen in question, we can update our abstract plans to fit the scene. Embodied agents require the same abilities, but existing work does not yet provide the infrastructure necessary for both reasoning abstractly and executing concretely. We address this limitation by introducing ALFWorld, a simulator that enables agents to learn abstract, text based policies in TextWorld (C\^ot\'e et al., 2018) and then execute goals from the ALFRED benchmark (Shridhar et al., 2020) in a rich visual environment. ALFWorld enables the creation of a new BUTLER agent whose abstract knowledge, learned in TextWorld, corresponds directly to concrete, visually grounded actions. In turn, as we demonstrate empirically, this fosters better agent generalization than training only in the visually grounded environment. BUTLER's simple, modular design factors the problem to allow researchers to focus on models for improving every piece of the pipeline (language understanding, planning, navigation, and visual scene understanding). ","ALFWorld: Aligning Text and Embodied Environments for Interactive |
Learning",4,"['(1/4) Can reasoning 💭 in textworld help agents solve tasks in embodied environments 🤖? \n\nCheckout our new paper on aligning text and embodied environments.\nPaper: <LINK> \nData & Code: <LINK> <LINK>', '(2/4) We generate interactive TextWorld games for scenes in the ALFRED dataset. Agents learn abstract policies in TextWorld by leveraging semantic priors.', '(3/4) Results show that TextWorld training does indeed help with embodied tasks. Our TextWorld-trained agent, BUTLER, generalizes better to new tasks by reasoning in high-level ‘textual’ space rather than through low-level visual representations.', '(4/4) This was joint work with @ericxyuan, @Cote_Marc, @ybisk, @APTrizzle, and @mhauskn as part of my @MSFTResearch internship.']",20,10,725 |
302,42,1072906391889702912,190058865,Vikram Sreekanti,"1/ New paper with @joe_hellerstein @alsched @cgwu0530 @jmfaleiro @jssmith @mejoeyg on pitfalls of existing serverless infra and future research directions: <LINK> (Paper will also be at #cidr2019.) 2/ Autoscaling & managed function execution seems great at first blush, but existing infrastructure doesn't support fine-grained communication or data movement, which makes it terrible for distributed & data systems. 3/ To make serverless suitable for a wider variety of applications (which we want to do) there's a ton of room for research & innovation in systems, programming abstractions, hardware, etc. 4/ You can check out what we're working on here: <LINK>",https://arxiv.org/abs/1812.03651,"Serverless computing offers the potential to program the cloud in an autoscaling, pay-as-you go manner. In this paper we address critical gaps in first-generation serverless computing, which place its autoscaling potential at odds with dominant trends in modern computing: notably data-centric and distributed computing, but also open source and custom hardware. Put together, these gaps make current serverless offerings a bad fit for cloud innovation and particularly bad for data systems innovation. In addition to pinpointing some of the main shortfalls of current serverless architectures, we raise a set of challenges we believe must be met to unlock the radical potential that the cloud---with its exabytes of storage and millions of cores---should offer to innovative developers. ","Serverless Computing: One Step Forward, Two Steps Back",4,"['1/ New paper with @joe_hellerstein @alsched @cgwu0530 @jmfaleiro @jssmith @mejoeyg on pitfalls of existing serverless infra and future research directions: <LINK> (Paper will also be at #cidr2019.)', ""2/ Autoscaling & managed function execution seems great at first blush, but existing infrastructure doesn't support fine-grained communication or data movement, which makes it terrible for distributed & data systems."", ""3/ To make serverless suitable for a wider variety of applications (which we want to do) there's a ton of room for research & innovation in systems, programming abstractions, hardware, etc."", ""4/ You can check out what we're working on here: https://t.co/PRqfNfyibQ""]",18,12,660 |
303,30,1074887565788893184,14544467,Daniel Apai,Our new paper led by @benrackham on the transit light source effect: how does stellar heterogeneity contaminate spectra of transiting planets? Now for FGK-type stars. TiO absorption? Spec. Slopes? NaI+KI? Halpha? Biosignature contamination? Check out TLSE2!<LINK>,http://arxiv.org/abs/1812.06184,"Transmission spectra probe exoplanetary atmospheres, but they can also be strongly affected by heterogeneities in host star photospheres through the transit light source effect. Here we build upon our recent study of the effects of unocculted spots and faculae on M-dwarf transmission spectra, extending the analysis to FGK dwarfs. Using a suite of rotating model photospheres, we explore spot and facula covering fractions for varying activity levels and the associated stellar contamination spectra. Relative to M dwarfs, we find that the typical variabilities of FGK dwarfs imply lower spot covering fractions, though they generally increase with later spectral types, from $\sim 0.1\%$ for F dwarfs to 2-4$\%$ for late-K dwarfs. While the stellar contamination spectra are considerably weaker than those for typical M dwarfs, we find that typically active G and K dwarfs produce visual slopes that are detectable in high-precision transmission spectra. We examine line offsets at H$\alpha$ and the Na and K doublets and find that unocculted faculae in K dwarfs can appreciably alter transit depths around the Na D doublet. We find that band-averaged transit depth offsets at molecular bands for CH$_{4}$, CO, CO$_{2}$, H$_{2}$O, N$_{2}$O, O$_{2}$, and O$_{3}$ are not detectable for typically active FGK dwarfs, though stellar TiO/VO features are potentially detectable for typically active late-K dwarfs. Generally, this analysis shows that inactive FGK dwarfs do not produce detectable stellar contamination features in transmission spectra, though active FGK host stars can produce such features and care is warranted in interpreting transmission spectra from these systems. ","The Transit Light Source Effect II: The Impact of Stellar Heterogeneity |
on Transmission Spectra of Planets Orbiting Broadly Sun-like Stars",1,['Our new paper led by @benrackham on the transit light source effect: how does stellar heterogeneity contaminate spectra of transiting planets? Now for FGK-type stars. TiO absorption? Spec. Slopes? NaI+KI? Halpha? Biosignature contamination? Check out TLSE2!<LINK>'],18,12,263 |
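For readers unfamiliar with the transit light source effect named in the row above: unocculted spots and faculae rescale the observed transit depth by a wavelength-dependent contamination factor. A commonly used single-heterogeneity form (reproduced here from memory, so it should be checked against the paper itself) is

```latex
% Observed transit depth = true (R_p/R_s)^2 times a wavelength-dependent
% contamination factor, for a photosphere with fraction f_het covered by
% unocculted spots or faculae.
D_{\lambda,\mathrm{obs}} \simeq \epsilon_{\lambda}\left(\frac{R_p}{R_s}\right)^{2},
\qquad
\epsilon_{\lambda} = \frac{1}{1 - f_{\mathrm{het}}\left(1 - F_{\lambda,\mathrm{het}}/F_{\lambda,\mathrm{phot}}\right)},
```

where F_het and F_phot are the spectra of the heterogeneity and of the immaculate photosphere; unocculted spots give a contamination factor above one and can imprint slopes and stellar features onto the planetary transmission spectrum.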
304,112,1357509481639403520,1556664198,Kyle Cranmer,"New paper: A deep search for decaying dark matter with XMM-Newton blank-sky observations with Joshua W. Foster, Marius Kongsore, Christopher Dessert, Yujin Park, Nicholas L. Rodd, Benjamin R. Safdi <LINK> <LINK> In this work, we perform the most sensitive search to date for tmsteril neutrinos and other decaying DM scenarios across the mass range from 5 to 16 keV using archival XMM-Newton data. <LINK> <LINK> We reduce 547 Ms of data from both the MOS and PN instruments using observations taken across the full sky and then use this data to search for evidence of DM decay in the ambient halo of the Milky Way. We use a data-driven background subtraction strategy that removes most astrophysical and instrumental lines. We model the remaining continuum with a Gaussian process — a non parametric approach that is a great fit (no pun intended) for this purpose. <LINK> This is one of my few forays into astrophysics. The rest of the team did the lion’s share of the work, but I had a hand in the statistical approaches that were used. It’s an interesting example of cross-over in techniques from the LHC to astrophysics. @nausheenrshah @NYUPhysics @NYUDataScience <LINK>",https://arxiv.org/abs/2102.02207,"Sterile neutrinos with masses in the keV range are well-motivated extensions to the Standard Model that could explain the observed neutrino masses while also making up the dark matter (DM) of the Universe. If sterile neutrinos are DM then they may slowly decay into active neutrinos and photons, giving rise to the possibility of their detection through narrow spectral features in astrophysical X-ray data sets. In this work, we perform the most sensitive search to date for this and other decaying DM scenarios across the mass range from 5 to 16 keV using archival XMM-Newton data. We reduce 547 Ms of data from both the MOS and PN instruments using observations taken across the full sky and then use this data to search for evidence of DM decay in the ambient halo of the Milky Way. We determine the instrumental and astrophysical baselines with data taken far away from the Galactic Center, and use Gaussian Process modeling to capture additional continuum background contributions. No evidence is found for unassociated X-ray lines, leading us to produce the strongest constraints to date on decaying DM in this mass range. ","A deep search for decaying dark matter with XMM-Newton blank-sky |
observations",6,"['New paper:\nA deep search for decaying dark matter with XMM-Newton blank-sky observations\nwith Joshua W. Foster, Marius Kongsore, Christopher Dessert, Yujin Park, Nicholas L. Rodd, Benjamin R. Safdi\n<LINK> <LINK>', 'In this work, we perform the most sensitive search to date for tmsteril neutrinos and other decaying DM scenarios across the mass range from 5 to 16 keV using archival XMM-Newton data.\n\nhttps://t.co/iXbpV1Wy1z https://t.co/CW5Ch3ztTB', 'We reduce 547 Ms of data from both the MOS and PN instruments using observations taken across the full sky and then use this data to search for evidence of DM decay in the ambient halo of the Milky Way.', 'We use a data-driven background subtraction strategy that removes most astrophysical and instrumental lines. We model the remaining continuum with a Gaussian process — a non parametric approach that is a great fit (no pun intended) for this purpose. https://t.co/6Ce5iGMhJr', 'This is one of my few forays into astrophysics. The rest of the team did the lion’s share of the work, but I had a hand in the statistical approaches that were used. It’s an interesting example of cross-over in techniques from the LHC to astrophysics.', '@nausheenrshah @NYUPhysics @NYUDataScience https://t.co/BPiuGvDSaT']",21,02,1172 |
305,232,1406885385763016706,561167071,Sascha Caron,"""Rare and Different"" with @l_hendriks and Rob Verheyen Today we propose new anomaly scores for LHC to optimally combine that events are different (with a new ensemble of DeepSVDs) and rare (using the likelihood of a autoregressive flow model). See <LINK>",https://arxiv.org/abs/2106.10164,"We propose a new method to define anomaly scores and apply this to particle physics collider events. Anomalies can be either rare, meaning that these events are a minority in the normal dataset, or different, meaning they have values that are not inside the dataset. We quantify these two properties using an ensemble of One-Class Deep Support Vector Data Description models, which quantifies differentness, and an autoregressive flow model, which quantifies rareness. These two parameters are then combined into a single anomaly score using different combination algorithms. We train the models using a dataset containing only simulated collisions from the Standard Model of particle physics and test it using various hypothetical signals in four different channels and a secret dataset where the signals are unknown to us. The anomaly detection method described here has been evaluated in a summary paper [1] where it performed very well compared to a large number of other methods. The method is simple to implement and is applicable to other datasets in other fields as well. ","Rare and Different: Anomaly Scores from a combination of likelihood and |
out-of-distribution models to detect new physics at the LHC",1,"['""Rare and Different"" with @l_hendriks and Rob Verheyen\n\nToday we propose new anomaly scores for LHC to optimally combine that events are different (with a new ensemble of DeepSVDs) and rare (using the likelihood of a autoregressive flow model). \n\nSee <LINK>']",21,06,255 |
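The method in the row above assigns each event a "differentness" score (ensemble of Deep SVDDs) and a "rareness" score (negative log-likelihood from an autoregressive flow) and then combines them. One simple combination, averaging normalized ranks so that an event must look both unusual and improbable to score highly, is sketched below with random stand-in scores; it is only one of several combination algorithms the paper compares.

```python
import numpy as np

def combine_by_rank(differentness, rareness):
    """Combine two anomaly scores into one by averaging their normalized ranks
    (0 = most normal, 1 = most anomalous under that score)."""
    n = len(differentness)
    rank_d = np.argsort(np.argsort(differentness)) / (n - 1)
    rank_r = np.argsort(np.argsort(rareness)) / (n - 1)
    return 0.5 * (rank_d + rank_r)

rng = np.random.default_rng(0)
svdd_distance = rng.gamma(2.0, size=1000)     # stand-in for ensemble SVDD distances
flow_nll = rng.gamma(2.0, size=1000)          # stand-in for -log p(x) from the flow
score = combine_by_rank(svdd_distance, flow_nll)
print("top-5 candidate events:", np.argsort(score)[-5:])
```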
306,118,1437415180644847619,561899047,Aki Vehtari,"New paper ""Latent space projection predictive inference"" with @AleexCatalina and @paulbuerkner <LINK> <LINK> Projection predictive inference project the posterior to a restricted parameter space, e.g., restricting some coefficients to 0. This can be used for low variance variable selection and inference after the selection. Lindley (1968) presented the approach for normal linear model with known variances. Goutis & Robert (1988) and presented computationally feasible approach for generalized linear models. Piironen & Vehtari (2017), and Piironen et al. (2020) presented many practical improvements. So far the approach has been based on the projection minimizing KL-divergence from the projected predictive distribution to the full posterior predictive distribution. In case of exponential family, these projections can be solved with maximum likelihood for each posterior draw. Minimizing KL-divergence for non-exponential family distributions is more difficult. We present here that we can do the projection by minimizing the KL divergence from the approximate latent posterior to the restricted latent posterior. With many non-exponential family models, the parameterization is already chosen so that the latent posterior is close to normal. If we approximate the latent posterior with normal, we get back to fast projection for each posterior draw of the other parameters. With this, we can extend our projpred package to handle, e.g., ordinal and survival models, which users have been asking many times. It turns out that the approach improves also projection predictive inference for hierarchical models and non-normal exponential family models. For eager ones, the code is in laten_projection branch of the projpred github repo. We're working on a vignette and additional testing, before merging to main branch.",https://arxiv.org/abs/2109.04702,"Given a reference model that includes all the available variables, projection predictive inference replaces its posterior with a constrained projection including only a subset of all variables. We extend projection predictive inference to enable computationally efficient variable and structure selection in models outside the exponential family. By adopting a latent space projection predictive perspective we are able to: 1) propose a unified and general framework to do variable selection in complex models while fully honouring the original model structure, 2) properly identify relevant structure and retain posterior uncertainties from the original model, and 3) provide an improved approach also for non-Gaussian models in the exponential family. We demonstrate the superior performance of our approach by thoroughly testing and comparing it against popular variable selection approaches in a wide range of settings, including realistic data sets. Our results show that our approach successfully recovers relevant terms and model structure in complex models, selecting less variables than competing approaches for realistic datasets. ",Latent space projection predictive inference,8,"['New paper ""Latent space projection predictive inference""\nwith @AleexCatalina and @paulbuerkner <LINK> <LINK>', 'Projection predictive inference project the posterior to a restricted parameter space, e.g., restricting some coefficients to 0. This can be used for low variance variable selection and inference after the selection.', 'Lindley (1968) presented the approach for normal linear model with known variances. 
Goutis & Robert (1988) and presented computationally feasible approach for generalized linear models. Piironen & Vehtari (2017), and Piironen et al. (2020) presented many practical improvements.', 'So far the approach has been based on the projection minimizing KL-divergence from the projected predictive distribution to the full posterior predictive distribution. In case of exponential family, these projections can be solved with maximum likelihood for each posterior draw.', 'Minimizing KL-divergence for non-exponential family distributions is more difficult. We present here that we can do the projection by minimizing the KL divergence from the approximate latent posterior to the restricted latent posterior.', 'With many non-exponential family models, the parameterization is already chosen so that the latent posterior is close to normal. If we approximate the latent posterior with normal, we get back to fast projection for each posterior draw of the other parameters.', 'With this, we can extend our projpred package to handle, e.g., ordinal and survival models, which users have been asking many times. It turns out that the approach improves also projection predictive inference for hierarchical models and non-normal exponential family models.', ""For eager ones, the code is in laten_projection branch of the projpred github repo. We're working on a vignette and additional testing, before merging to main branch.""]",21,09,1825 |
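The computational payoff described in the thread above is that, once the latent posterior is approximated as Gaussian, projecting onto a restricted variable set reduces to a per-draw least-squares fit of the restricted linear predictor to the reference model's latent predictor. A bare-bones sketch of that single step (the variable-set search, hierarchical terms, and everything else projpred does are omitted; names are illustrative):

```python
import numpy as np

def project_draws(X, beta_draws, subset):
    """For each posterior draw of the reference coefficients, find restricted
    coefficients (using only `subset` columns of X) whose linear predictor is
    closest, in least squares, to the reference latent predictor X @ beta."""
    Xs = X[:, subset]
    projected = []
    for beta in beta_draws:
        eta_ref = X @ beta                                  # reference latent predictor
        beta_sub, *_ = np.linalg.lstsq(Xs, eta_ref, rcond=None)
        projected.append(beta_sub)
    return np.array(projected)

rng = np.random.default_rng(3)
n, p, draws = 200, 6, 100
X = rng.normal(size=(n, p))
beta_draws = rng.normal([2.0, -1.0, 0.5, 0.0, 0.0, 0.0], 0.1, size=(draws, p))
print(project_draws(X, beta_draws, subset=[0, 1, 2]).mean(axis=0))   # roughly [2, -1, 0.5]
```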
307,0,1228049925587578881,1057452552,Duncan Watson-Parris,"Our recent #ML paper describing a new, general physics emulator (DENSE; <LINK>) just had a really nice write-up in @sciencemagazine, check it out! <LINK> <LINK> @shoyer @rabernat @sciencemagazine Haha, yes there was some artistic license in the article! I suspect the biggest challenge in getting this complexity would just be the data throughput. We only needed fairly low-res runs to answer our question in this case anyway",https://arxiv.org/abs/2001.08055,"Computer simulations are invaluable tools for scientific discovery. However, accurate simulations are often slow to execute, which limits their applicability to extensive parameter exploration, large-scale data analysis, and uncertainty quantification. A promising route to accelerate simulations by building fast emulators with machine learning requires large training datasets, which can be prohibitively expensive to obtain with slow simulations. Here we present a method based on neural architecture search to build accurate emulators even with a limited number of training data. The method successfully accelerates simulations by up to 2 billion times in 10 scientific cases including astrophysics, climate science, biogeochemistry, high energy density physics, fusion energy, and seismology, using the same super-architecture, algorithm, and hyperparameters. Our approach also inherently provides emulator uncertainty estimation, adding further confidence in their use. We anticipate this work will accelerate research involving expensive simulations, allow more extensive parameters exploration, and enable new, previously unfeasible computational discovery. ","Building high accuracy emulators for scientific simulations with deep |
neural architecture search",2,"['Our recent #ML paper describing a new, general physics emulator (DENSE; <LINK>) just had a really nice write-up in @sciencemagazine, check it out! <LINK> <LINK>', '@shoyer @rabernat @sciencemagazine Haha, yes there was some artistic license in the article! I suspect the biggest challenge in getting this complexity would just be the data throughput. We only needed fairly low-res runs to answer our question in this case anyway']",20,01,425 |
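Stripped to its core, the emulator described above is a supervised map from simulator inputs to simulator outputs; the paper's contribution is the neural architecture search that keeps this accurate with few training runs, which is not reproduced here. A sketch with a fixed small network and a stand-in slow_simulator:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_simulator(theta):
    # Stand-in for an expensive physics code: maps 3 parameters to a 50-bin output.
    grid = np.linspace(0, 1, 50)
    return theta[0] * np.sin(10 * theta[1] * grid) + theta[2] * grid ** 2

rng = np.random.default_rng(7)
thetas = rng.uniform(0, 1, size=(300, 3))                  # a small set of training runs
outputs = np.array([slow_simulator(t) for t in thetas])

# Fixed architecture here; DENSE instead searches over architectures to cope
# with limited training data.
emulator = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000, random_state=0)
emulator.fit(thetas, outputs)

theta_new = np.array([[0.3, 0.7, 0.9]])
print("max emulation error:", np.abs(emulator.predict(theta_new)[0] - slow_simulator(theta_new[0])).max())
```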
308,171,1460531790188322816,344361113,Manuel Gomez-Rodriguez,"Check out our paper on Counterfactual Temporal Point Processes (TPPs) (<LINK>), the first paper led by Kimia Noorbakhsh. This paper was her 3-month internship project! What we propose in this paper is best explained by an example relevant for COVID-19 (1/n) Assume that, during a pandemic, a government decides to implement business restrictions every time the weekly incidence—the (relative) number of new cases—is larger than certain threshold but unfortunately the incidence nevertheless spirals out of control (sounds familiar?) (2/n) Counterfactual TPPs could help the government understand retrospectively to what extent the incidence would have grown had a lower threshold been implemented. This is in contrast with existing epidemiological models, also those developed during COVID-19 (3/n) Existing epidemiological models cannot answer counterfactual questions, only predict what the future may look like under interventions given the past. Our methodology is general and applies to many types of temporal point processes, not only those found in epidemiology (4/n) A catch? Counterfactual TPPs lie within level three in the ""ladder of causation"" of @yudapearl—we cannot validate our counterfactual predictions using observational nor interventional experiments. However, our model satisfies monotonicity, an intuitive assumption about... (5/n) ...the causal mechanism of the world, which specifies how changes on the intensity function of a temporal point process may have lead to particular outcomes while holding ""every-thing else"" fixed. This assumption also helps avoiding non identifiability issues (6/n) We have also released an open source implementation of counterfactual TPPs and a SIR network-based epidemiological model fitted using data from an Ebola outbreak in West Africa: <LINK> (Thanks to @WilliamTrouleau for helping us with the Ebola dataset!) (n/n)",https://arxiv.org/abs/2111.07603,"Machine learning models based on temporal point processes are the state of the art in a wide variety of applications involving discrete events in continuous time. However, these models lack the ability to answer counterfactual questions, which are increasingly relevant as these models are being used to inform targeted interventions. In this work, our goal is to fill this gap. To this end, we first develop a causal model of thinning for temporal point processes that builds upon the Gumbel-Max structural causal model. This model satisfies a desirable counterfactual monotonicity condition, which is sufficient to identify counterfactual dynamics in the process of thinning. Then, given an observed realization of a temporal point process with a given intensity function, we develop a sampling algorithm that uses the above causal model of thinning and the superposition theorem to simulate counterfactual realizations of the temporal point process under a given alternative intensity function. Simulation experiments using synthetic and real epidemiological data show that the counterfactual realizations provided by our algorithm may give valuable insights to enhance targeted interventions. ",Counterfactual Temporal Point Processes,7,"['Check out our paper on Counterfactual Temporal Point Processes (TPPs) (<LINK>), the first paper led by Kimia Noorbakhsh. This paper was her 3-month internship project! 
What we propose in this paper is best explained by an example relevant for COVID-19 (1/n)', 'Assume that, during a pandemic, a government decides\nto implement business restrictions every time the weekly incidence—the (relative) number of new cases—is larger than certain threshold but unfortunately the incidence nevertheless spirals out of control (sounds familiar?) (2/n)', 'Counterfactual TPPs could help the government understand retrospectively to what extent the incidence would have grown had a lower threshold been implemented. This is in contrast with existing epidemiological models, also those developed during COVID-19 (3/n)', 'Existing epidemiological models cannot answer counterfactual questions, only predict what the future may look like under interventions given the past. Our methodology is general and applies to many types of temporal point processes, not only those found in epidemiology (4/n)', 'A catch? Counterfactual TPPs lie within level three in the ""ladder of causation"" of @yudapearl—we cannot validate our counterfactual predictions using observational nor interventional experiments. However, our model satisfies monotonicity, an intuitive assumption about... (5/n)', '...the causal mechanism of the world, which specifies how changes on the intensity function of a temporal point process may have lead to particular outcomes while holding ""every-thing else"" fixed. This assumption also helps avoiding non identifiability issues (6/n)', 'We have also released an open source implementation of counterfactual TPPs and a SIR network-based epidemiological model fitted using data from an Ebola outbreak in West Africa: https://t.co/1K1rsMonNo (Thanks to @WilliamTrouleau for helping us with the Ebola dataset!) (n/n)']",21,11,1878 |
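The coupling behind the counterfactual sampler described above can be illustrated with thinning under shared randomness: the factual and counterfactual realizations are generated from the same dominating process and the same acceptance uniforms. A toy version for an inhomogeneous Poisson process follows; the Gumbel-Max construction for general TPPs and the monotonicity discussion are not captured by this sketch.

```python
import numpy as np

def coupled_thinning(lambda_obs, lambda_cf, lam_max, T, rng):
    """Simulate one realization under lambda_obs and its counterfactual under
    lambda_cf by thinning the *same* dominating Poisson process with the *same*
    uniforms: a candidate kept under the smaller intensity is also kept under
    the larger one (a monotone coupling)."""
    n = rng.poisson(lam_max * T)
    times = np.sort(rng.uniform(0, T, size=n))     # candidates from the dominating process
    u = rng.uniform(size=n)                        # shared acceptance noise
    factual = times[u < lambda_obs(times) / lam_max]
    counterfactual = times[u < lambda_cf(times) / lam_max]
    return factual, counterfactual

rng = np.random.default_rng(0)
lam_obs = lambda t: 2.0 + 1.5 * np.sin(t)          # "what happened" intensity
lam_cf = lambda t: 0.5 * (2.0 + 1.5 * np.sin(t))   # "what if it had been halved"
f, cf = coupled_thinning(lam_obs, lam_cf, lam_max=4.0, T=20.0, rng=rng)
print(len(f), "observed events vs", len(cf), "counterfactual events")
print("counterfactual events are a subset of the observed ones:", set(cf).issubset(set(f)))
```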
309,42,1255401831523594243,3269288695,Archit Sharma,"We are moving unsupervised learning to real-world robotics! In our new work, we present off-DADS which enables sample-efficient skill discovery on real robots without any rewards. paper: <LINK> overview: <LINK> website: <LINK> <LINK> We can repurpose the learned skills to solve downstream tasks, without any additional training! w/ @shaneguML, @hausman_k, @Vikashplus, @svlevine, M. Ahn <LINK> If you are attending #ICLR2020, check out our long talk for the original DADS submission at <LINK>. We'll be having the poster sessions on Thu @ 10 am PT and 1 pm PT. I will also be answering the questions throughout the conference.",https://arxiv.org/abs/2004.12974,"Reinforcement learning provides a general framework for learning robotic skills while minimizing engineering effort. However, most reinforcement learning algorithms assume that a well-designed reward function is provided, and learn a single behavior for that single reward function. Such reward functions can be difficult to design in practice. Can we instead develop efficient reinforcement learning methods that acquire diverse skills without any reward function, and then repurpose these skills for downstream tasks? In this paper, we demonstrate that a recently proposed unsupervised skill discovery algorithm can be extended into an efficient off-policy method, making it suitable for performing unsupervised reinforcement learning in the real world. Firstly, we show that our proposed algorithm provides substantial improvement in learning efficiency, making reward-free real-world training feasible. Secondly, we move beyond the simulation environments and evaluate the algorithm on real physical hardware. On quadrupeds, we observe that locomotion skills with diverse gaits and different orientations emerge without any rewards or demonstrations. We also demonstrate that the learned skills can be composed using model predictive control for goal-oriented navigation, without any additional training. ","Emergent Real-World Robotic Skills via Unsupervised Off-Policy |
Reinforcement Learning",3,"['We are moving unsupervised learning to real-world robotics! In our new work, we present off-DADS which enables sample-efficient skill discovery on real robots without any rewards.\n\npaper: <LINK>\noverview: <LINK>\nwebsite: <LINK> <LINK>', 'We can repurpose the learned skills to solve downstream tasks, without any additional training!\n\nw/ @shaneguML, @hausman_k, @Vikashplus, @svlevine, M. Ahn https://t.co/pPHj1K0z9r', ""If you are attending #ICLR2020, check out our long talk for the original DADS submission at https://t.co/C6AWVtCxr0.\n\nWe'll be having the poster sessions on Thu @ 10 am PT and 1 pm PT. I will also be answering the questions throughout the conference.""]",20,04,627 |
310,26,1298955387836788736,2613619922,byron wallace,"In new work we set out to automatically generate abstractive brief narrative summaries of all randomized controlled trials relevant to a given clinical question (in the style of Cochrane evidence narrative syntheses). paper: <LINK> <LINK> We find that generated summaries are relevant and fluent, but struggle with *factuality*; they often mischaracterize the evidence presented in trials. A few simple strategies, like ""decorating"" inputs with automatically extracted elements, seems to improve this a bit. <LINK> w/Sayantan Saha @h21k and @ijmarshall",https://arxiv.org/abs/2008.11293,"We consider the problem of automatically generating a narrative biomedical evidence summary from multiple trial reports. We evaluate modern neural models for abstractive summarization of relevant article abstracts from systematic reviews previously conducted by members of the Cochrane collaboration, using the authors conclusions section of the review abstract as our target. We enlist medical professionals to evaluate generated summaries, and we find that modern summarization systems yield consistently fluent and relevant synopses, but that they are not always factual. We propose new approaches that capitalize on domain-specific models to inform summarization, e.g., by explicitly demarcating snippets of inputs that convey key findings, and emphasizing the reports of large and high-quality trials. We find that these strategies modestly improve the factual accuracy of generated summaries. Finally, we propose a new method for automatically evaluating the factuality of generated narrative evidence syntheses using models that infer the directionality of reported findings. ","Generating (Factual?) Narrative Summaries of RCTs: Experiments with |
Neural Multi-Document Summarization",3,"['In new work we set out to automatically generate abstractive brief narrative summaries of all randomized controlled trials relevant to a given clinical question (in the style of Cochrane evidence narrative syntheses). \n\npaper: <LINK> <LINK>', 'We find that generated summaries are relevant and fluent, but struggle with *factuality*; they often mischaracterize the evidence presented in trials. A few simple strategies, like ""decorating"" inputs with automatically extracted elements, seems to improve this a bit. https://t.co/8YWpCCOaue', 'w/Sayantan Saha @h21k and @ijmarshall']",20,08,553 |
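One of the strategies mentioned above is "decorating" the input trial abstracts by explicitly demarcating automatically extracted snippets before summarization. A minimal sketch of that preprocessing step; the tag names, the keyword-based finding detector, and the sample-size field are placeholders rather than the paper's trained extraction models.

```python
import re

FINDING_CUES = re.compile(r"\b(significant(ly)?|no difference|reduced|increased|improved)\b", re.I)

def decorate(abstract_sentences, n_participants=None):
    """Wrap sentences that look like key findings in <finding> tags and prepend
    a <size> tag, so a downstream summarizer can attend to them explicitly."""
    out = []
    if n_participants is not None:
        out.append(f"<size> n={n_participants} </size>")
    for sent in abstract_sentences:
        if FINDING_CUES.search(sent):
            out.append(f"<finding> {sent} </finding>")
        else:
            out.append(sent)
    return " ".join(out)

trial = ["We randomized 412 patients.", "Mortality was significantly reduced in the treatment arm."]
print(decorate(trial, n_participants=412))
```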
311,141,1499743935517708288,1152338625654226944,Megan Mansfield,"Today on the ArXiv: a new paper looking at the secondary eclipse spectrum of WASP-77Ab with the Hubble Space Telescope! <LINK> <LINK> First, some fun facts about WASP-77Ab: it’s a “moderate” hot Jupiter, with an equilibrium T of ~1700K. It was previously observed at high resolution which showed a non-inverted T-P profile. H2O+CO were also detected and indicated a substellar metallicity. (Fig. from Line+21) <LINK> Here, we observed 2 eclipses of the planet with HST. We combined these data with eclipses from a set of Spitzer phase curves. We then modeled the data with both free retrievals and self-consistent equilibrium retrievals. We detected a H2O absorption feature in the HST data and confirmed that the atmosphere has a decreasing T-P profile. The strength of the feature is also about what we expect when we compare WASP-77Ab to other HST observations of similar-temperature hot Jupiters! <LINK> Our observations also match well with the best-fit model to those earlier high-res observations. This is really great news, because high-res and low-res data are reduced completely differently, so it’s a great sign that we’re on the right track when they show the same results. <LINK> But our equilibrium models suggest a higher metallicity than what was found from the high-res observations. We think this might be due to some sort of disequilibrium chemistry, as the equilibrium models do a poor job approximating the shape of the blue end of the HST spectrum. <LINK> Ultimately, we’re hoping to combine the previous high-res data with our new HST+Spitzer data set and do a joint retrieval. Combining high+low-res data in this way gives more detailed information on the atmosphere. Look for a paper in the future by @ExoplanetPete doing just this! <LINK> Finally, I’d like to thank all my co-authors for making this paper possible, especially @kevinbstevenson for the Spitzer data reduction and @LindsLikesSpace, @ExoplanetPete, and twitterless Mike Line for their modeling work.... And everyone else who contributed to this paper! @jjfplanet, @V_Parmentier, @jmdesert, @lkreidberg, and twitterless Jacob Bean, Eliza Kempton, Jacob Arcangeli, Brian Kilpatrick, and Matej Malik. <LINK>",https://arxiv.org/abs/2203.01463,"Secondary eclipse observations of hot Jupiters can reveal both their compositions and thermal structures. Previous observations have shown a diversity of hot Jupiter eclipse spectra, including absorption features, emission features, and featureless blackbody-like spectra. We present a secondary eclipse spectrum of the hot Jupiter WASP-77Ab observed between $1-5$ $\mu$m with the Hubble Space Telescope (HST) and the Spitzer Space Telescope. The HST observations show signs of water absorption indicative of a non-inverted thermal structure. We fit the data with both a one-dimensional free retrieval and a grid of one-dimensional self-consistent forward models to confirm this non-inverted structure. The free retrieval places a $3\sigma$ lower limit on the atmospheric water abundance of $\log(n_\mathrm{H_2O})>-4.78$ and can not constrain the CO abundance. The grid fit produces a slightly super-stellar metallicity and constrains the carbon-to-oxygen ratio to less than or equal to the solar value. We also compare our data to recent high-resolution observations of WASP-77Ab taken with the Gemini-South/IGRINS spectrograph and find that our observations are consistent with the best-fit model to the high-resolution data. 
However, the metallicity derived from the IGRINS data is significantly lower than that derived from our self-consistent model fit. We find that this difference may be due to disequilibrium chemistry, and the varying results between the models applied here demonstrate the difficulty of constraining disequilibrium chemistry with low-resolution, low wavelength coverage data alone. Future work to combine observations from IGRINS, HST, and JWST will improve our estimate of the atmospheric composition of WASP-77Ab. ","Confirmation of Water Absorption in the Thermal Emission Spectrum of the |
Hot Jupiter WASP-77Ab with HST/WFC3",9,"['Today on the ArXiv: a new paper looking at the secondary eclipse spectrum of WASP-77Ab with the Hubble Space Telescope! <LINK> <LINK>', 'First, some fun facts about WASP-77Ab: it’s a “moderate” hot Jupiter, with an equilibrium T of ~1700K. It was previously observed at high resolution which showed a non-inverted T-P profile. H2O+CO were also detected and indicated a substellar metallicity. (Fig. from Line+21) https://t.co/M6i2Xmgufq', 'Here, we observed 2 eclipses of the planet with HST. We combined these data with eclipses from a set of Spitzer phase curves. We then modeled the data with both free retrievals and self-consistent equilibrium retrievals.', 'We detected a H2O absorption feature in the HST data and confirmed that the atmosphere has a decreasing T-P profile. The strength of the feature is also about what we expect when we compare WASP-77Ab to other HST observations of similar-temperature hot Jupiters! https://t.co/vBeD9Yukej', 'Our observations also match well with the best-fit model to those earlier high-res observations. This is really great news, because high-res and low-res data are reduced completely differently, so it’s a great sign that we’re on the right track when they show the same results. https://t.co/ILtPUqPOlu', 'But our equilibrium models suggest a higher metallicity than what was found from the high-res observations. We think this might be due to some sort of disequilibrium chemistry, as the equilibrium models do a poor job approximating the shape of the blue end of the HST spectrum. https://t.co/nkQJ0wb5KX', 'Ultimately, we’re hoping to combine the previous high-res data with our new HST+Spitzer data set and do a joint retrieval. Combining high+low-res data in this way gives more detailed information on the atmosphere. Look for a paper in the future by @ExoplanetPete doing just this! https://t.co/DVCdpdJhvB', 'Finally, I’d like to thank all my co-authors for making this paper possible, especially @kevinbstevenson for the Spitzer data reduction and @LindsLikesSpace, @ExoplanetPete, and twitterless Mike Line for their modeling work....', 'And everyone else who contributed to this paper! @jjfplanet, @V_Parmentier, @jmdesert, @lkreidberg, and twitterless Jacob Bean, Eliza Kempton, Jacob Arcangeli, Brian Kilpatrick, and Matej Malik. https://t.co/tQcusArbA6']",22,03,2194 |
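For orientation, the secondary-eclipse spectrum discussed above measures the dayside planet-to-star flux ratio, which (neglecting reflected light) is approximately

```latex
% Thermal-emission eclipse depth: dayside planet flux over stellar flux,
% often approximated with blackbodies at the dayside and stellar temperatures.
\frac{F_p}{F_s}(\lambda) \simeq \left(\frac{R_p}{R_s}\right)^{2}
\frac{B_{\lambda}(T_{\mathrm{day}})}{B_{\lambda}(T_{s})},
```

so molecular bands such as the detected H2O feature appear as wavelength-dependent departures from a smooth blackbody ratio, with emission versus absorption set by whether the thermal profile is inverted or not.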
312,87,1458402064493654018,1077995761487568896,Jon Miller,"New paper by grad student @NicolasTrueba: When winds are launched within ~1000 GM/c^2, the central engine cannot be taken to be a point source, and simple geometry then constrains its size. Important for @chandraxray, XRISM, @AthenaXIFU, @ArcusXray. <LINK> <LINK>",https://arxiv.org/abs/2111.04764,"Analyses of absorption from disk winds and atmospheres in accreting compact objects typically treat the central emitting regions in these systems as point sources relative to the absorber. This assumption breaks down if the absorbing gas is located within $few \times 1000\cdot GM/{c}^{2}$, in which case a small component of the absorber's Keplerian motion contributes to the velocity-width of absorption lines. Here, we demonstrate how this velocity-broadening effect can be used to constrain the sizes of central engines in accreting compact objects via a simple geometric relationship, and develop a method for modeling this effect. We apply this method on the Chandra/HETG spectra of three ultra-compact and short period neutron star X-ray binaries in which evidence of gravitationally redshifted absorption, owing to an inner-disk atmosphere, has recently been reported. The significance of the redshift is above $5\sigma$ for XTE J1710$-$281 (this work) and 4U 1916$-$053, and is inconsistent with various estimates of the relative radial velocity of each binary. For our most sensitive spectrum (XTE J1710$-$281), we obtain a 1$\sigma$ upper bound of 310 $\text{km}$ $\text{s}^{-1}$ on the magnitude of this geometric effect and a central engine of size ${R}_{CE} < 60 ~ GM/{c}^{2}$ (or, $< 90 ~ GM/{c}^{2}$ at the $3\sigma$ level). These initial constraints compare favorably to those obtained via microlensing in quasars and approach the sensitivity of constraints via relativistic reflection in neutron stars. This sensitivity will increase with further exposures, as well as the launch of future microcalorimeter and grating missions. ","A Spectroscopic Angle on Central Engine Size Scales in Accreting Neutron |
Stars",1,"['New paper by grad student @NicolasTrueba: \nWhen winds are launched within ~1000 GM/c^2, the central engine cannot be taken to be a point source, and simple geometry then constrains its size. \nImportant for @chandraxray, XRISM, @AthenaXIFU, @ArcusXray.\n<LINK> <LINK>']",21,11,265 |
313,116,1380321084453568517,2756561793,Zili Shen,"Pretty galaxy picture ahead! Get ready for the deepest HST image of the mysterious NGC1052-DF2, combined from 40 orbits of data. This is part of our new paper measuring the distance to this galaxy with the tip of the red giant branch method: <LINK> <LINK> We use this new data to construct a color-magnitude diagram, and you can see the red giant branch by eye! (In case you don't know what I'm talking about, it's in between the green dashed lines.) The tip of the red giant branch is located around 27.5 magnitude. <LINK> From our model that accounts for contamination and photometric errors, the best-fit TRGB magnitude is 27.67 mag and the distance is 22.1 Mpc. <LINK> In addition, we also found that we could measure the relative distance to NGC1052-DF4, which has been analyzed previously by @DanieliShany. We found that these two galaxies are 2.1 Mpc apart, so they cannot both be in close proximity to NGC1052, the central galaxy in the group. These two galaxies remain a puzzle to be solved, and new twists keep emerging in this story! Exciting times.",https://arxiv.org/abs/2104.03319,"The large and diffuse galaxies NGC1052-DF2 and NGC1052-DF4 have been found to have very low dark matter content and a population of luminous globular clusters. Accurate distance measurements are key to interpreting these observations. Recently, the distance to NGC1052-DF4 was found to be $20.0\pm 1.6$ Mpc by identifying the tip of the red giant branch (TRGB) in 12 orbits of Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) imaging. Here we present 40 orbits of HST ACS data for NGC1052-DF2 and use these data to measure its TRGB. The TRGB is readily apparent in the color-magnitude diagram. Using a forward model that incorporates photometric uncertainties, we find a TRGB magnitude of $m_{\rm F814W, TRGB} = 27.67 \pm 0.10$ mag. The inferred distance is $D_{\rm TRGB} = 22.1 \pm 1.2$ Mpc, consistent with the previous surface brightness fluctuation distances to the bright elliptical galaxy NGC1052. The new HST distance rules out the idea that some of NGC1052-DF2's unusual properties can be explained if it were at $\sim 13$ Mpc; instead, it implies that the galaxy's globular clusters are even more luminous than had been derived using the previous distance of 20 Mpc. The distance from NGC1052-DF2 to NGC1052-DF4 is well-determined at $2.1\pm 0.5$ Mpc, significantly larger than the virial diameter of NGC1052. We discuss the implications for formation scenarios of the galaxies and for the external field effect, which has been invoked to explain the intrinsic dynamics of these objects in the context of modified Newtonian dynamics. ","A Tip of the Red Giant Branch Distance of $22.1 \pm 1.2$ Mpc to the Dark |
Matter Deficient Galaxy NGC1052-DF2 from 40 Orbits of Hubble Space Telescope |
Imaging",5,"['Pretty galaxy picture ahead! Get ready for the deepest HST image of the mysterious NGC1052-DF2, combined from 40 orbits of data. This is part of our new paper measuring the distance to this galaxy with the tip of the red giant branch method: <LINK> <LINK>', ""We use this new data to construct a color-magnitude diagram, and you can see the red giant branch by eye! (In case you don't know what I'm talking about, it's in between the green dashed lines.) The tip of the red giant branch is located around 27.5 magnitude. https://t.co/mBJwSmFiEQ"", 'From our model that accounts for contamination and photometric errors, the best-fit TRGB magnitude is 27.67 mag and the distance is 22.1 Mpc. https://t.co/F79KIQBBeg', 'In addition, we also found that we could measure the relative distance to NGC1052-DF4, which has been analyzed previously by @DanieliShany. We found that these two galaxies are 2.1 Mpc apart, so they cannot both be in close proximity to NGC1052, the central galaxy in the group.', 'These two galaxies remain a puzzle to be solved, and new twists keep emerging in this story! Exciting times.']",21,04,1060 |
314,110,1380469687230795780,78913886,parfait Atchadé,"New paper <LINK>! We propose a convolutional filter that takes advantage of the classic #ML experience, quantum effects and the variational principle to enhance #CNN. I like to thank @XanaduAI @pennylaneai @awscloud & team for hosting #Qhack21 #QML #ai #Quantum <LINK>",http://arxiv.org/abs/2104.03418,"Convolutional Neural Networks (CNN) are used mainly to treat problems with many images characteristic of Deep Learning. In this work, we propose a hybrid image classification model to take advantage of quantum and classical computing. The method will use the potential that convolutional networks have shown in artificial intelligence by replacing classical filters with variational quantum filters. Similarly, this work will compare with other classification methods and the system's execution on different servers. The algorithm's quantum feasibility is modelled and tested on Amazon Braket Notebook instances and experimented on the Pennylane's philosophy and framework. ",Quantum Enhanced Filter: QFilter,1,"['New paper <LINK>! We propose a convolutional filter that takes advantage of the classic #ML experience, quantum effects and the variational principle to enhance #CNN. I like to thank @XanaduAI @pennylaneai @awscloud & team for hosting #Qhack21 #QML #ai #Quantum <LINK>']",21,04,268 |
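The row above describes replacing a classical convolutional filter with a variational quantum circuit applied patch by patch. A minimal PennyLane sketch of one such filter on a 2x2 patch; the angle encoding, single rotation layer, CNOT entanglers, and single-qubit readout are generic choices and not necessarily the circuit used in the paper.

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def qfilter(patch, weights):
    """Variational 'quantum filter' for a 2x2 image patch: angle-encode the
    four pixels, apply a trainable rotation layer plus entanglers, and read
    out one expectation value as the filter response."""
    for i in range(4):
        qml.RY(np.pi * patch[i], wires=i)        # pixel values in [0, 1] become angles
    for i in range(4):
        qml.RY(weights[i], wires=i)              # trainable parameters
    for i in range(3):
        qml.CNOT(wires=[i, i + 1])               # entangle across the patch
    return qml.expval(qml.PauliZ(0))

rng = np.random.default_rng(0)
weights = rng.uniform(0, 2 * np.pi, size=4)
image = rng.uniform(0, 1, size=(4, 4))
# Slide the filter with stride 2, producing a 2x2 "quantum feature map".
feature_map = np.array([[qfilter(image[r:r + 2, c:c + 2].reshape(4), weights)
                         for c in (0, 2)] for r in (0, 2)])
print(feature_map)
```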
315,162,1299765511660724224,1014263782493818880,Pierre Ablin,"[Preprint] We propose SMICA, an ICA algorithm based on frequential diversity with a noise model for M/EEG processing. It is unusual to have a noise model in ICA, but it brings many benefits 👇👇👇 With @agramfort and JF Cardoso Paper: <LINK> 1/7 <LINK> For SMICA, data x is modeled as a linear combination of sources s and some noise n: x = As + n, A is the mixing matrix. First benefit of the noise model: the likelihood of the model is not degenerate even when # sources < # sensors ! 2/7 In usual ICA, if you have 100 sensors and want 10 sources, you have to do PCA to reduce the data dimension to 10, and then fit ICA. This destroys signals of low power which might still be useful. With SMICA, you can estimate 10 sources straight from your 100 sensors 😃 3/7 Another benefit of the noise model is the ability to have fine source estimation. In standard ICA, sources are estimated as x = A^-1 s. 4/7 With SMICA, if there is a lot of noise on one sensor, the contribution of that sensor will be shrunk when estimating sources thanks to Wiener filtering: <LINK> 5/7 But if noise modelling is so great in ICA, why isn't it used everywhere? The problem is that it is much harder to fit than noiseless ICA, because the non-Gaussian ICA model with noise does not have a tractable likelihood. There is no practical algorithm in this case. 6/7 SMICA solves this by working in the spectral domain, offering a closed form likelihood, and simple parameter estimation with the EM algorithm 😃 7/7 @CaballeroGaudes @agramfort I guess that in this case instead of spectral ICA you can use non-stationnary ICA, where the sources are assumed non-stationnary instead of spectrally diverse. It leads to the same algorithm, without going in the Fourier domain.",https://arxiv.org/abs/2008.09693,"Background: Independent Component Analysis (ICA) is a widespread tool for exploration and denoising of electroencephalography (EEG) or magnetoencephalography (MEG) signals. In its most common formulation, ICA assumes that the signal matrix is a noiseless linear mixture of independent sources that are assumed non-Gaussian. A limitation is that it enforces to estimate as many sources as sensors or to rely on a detrimental PCA step. Methods: We present the Spectral Matching ICA (SMICA) model. Signals are modelled as a linear mixing of independent sources corrupted by additive noise, where sources and the noise are stationary Gaussian time series. Thanks to the Gaussian assumption, the negative log-likelihood has a simple expression as a sum of divergences between the empirical spectral covariance matrices of the signals and those predicted by the model. The model parameters can then be estimated by the expectation-maximization (EM) algorithm. Results: Experiments on phantom MEG datasets show that SMICA can recover dipole locations more precisely than usual ICA algorithms or Maxwell filtering when the dipole amplitude is low. Experiments on EEG datasets show that SMICA identifies a source subspace which contains sources that have less pairwise mutual information, and are better explained by the projection of a single dipole on the scalp. Comparison with existing methods: Noiseless ICA models lead to degenerate likelihood when there are fewer sources than sensors, while SMICA succeeds without resorting to prior dimension reduction. Conclusions: SMICA is a promising alternative to other noiseless ICA models based on non-Gaussian assumptions. 
","Spectral independent component analysis with noise modeling for M/EEG |
source separation",8,"['[Preprint]\n\nWe propose SMICA, an ICA algorithm based on frequential diversity with a noise model for M/EEG processing.\n\nIt is unusual to have a noise model in ICA, but it brings many benefits 👇👇👇\n\nWith @agramfort and JF Cardoso\n\nPaper: <LINK>\n\n1/7 <LINK>', 'For SMICA, data x is modeled as a linear combination of sources s and some noise n:\n\nx = As + n,\n\nA is the mixing matrix.\n\nFirst benefit of the noise model: the likelihood of the model is not degenerate even when # sources < # sensors !\n\n2/7', 'In usual ICA, if you have 100 sensors and want 10 sources, you have to do PCA to reduce the data dimension to 10, and then fit ICA. This destroys signals of low power which might still be useful.\n\nWith SMICA, you can estimate 10 sources straight from your 100 sensors 😃\n\n3/7', 'Another benefit of the noise model is the ability to have fine source estimation.\n\nIn standard ICA, sources are estimated as x = A^-1 s. \n\n4/7', 'With SMICA, if there is a lot of noise on one sensor, the contribution of that sensor will be shrunk when estimating sources thanks to Wiener filtering:\n\nhttps://t.co/9eaVniKmKF\n\n5/7', ""But if noise modelling is so great in ICA, why isn't it used everywhere?\n\nThe problem is that it is much harder to fit than noiseless ICA, because the non-Gaussian ICA model with noise does not have a tractable likelihood. There is no practical algorithm in this case.\n\n6/7"", 'SMICA solves this by working in the spectral domain, offering a closed form likelihood, and simple parameter estimation with the EM algorithm 😃\n\n7/7', '@CaballeroGaudes @agramfort I guess that in this case instead of spectral ICA you can use non-stationnary ICA, where the sources are assumed non-stationnary instead of spectrally diverse. It leads to the same algorithm, without going in the Fourier domain.']",20,08,1745 |
316,7,1091020609071534082,35724743,Adrien Ecoffet,"Go-Explore paper out. New high/average scores: 18 million/650k on Montezuma's Revenge (44k w/o domain knowledge) & 100k+/~60k on Pitfall, all tested w/ sticky actions! Huge thanks to my co-authors @Joost_Huizinga @joelbot3000 @jeffclune @kenneth0stanley. <LINK> <LINK>",https://arxiv.org/abs/1901.10995,"A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of ""superhuman"" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics). ",Go-Explore: a New Approach for Hard-Exploration Problems,1,"[""Go-Explore paper out. New high/average scores: 18 million/650k on Montezuma's Revenge (44k w/o domain knowledge) & 100k+/~60k on Pitfall, all tested w/ sticky actions! Huge thanks to my co-authors @Joost_Huizinga @joelbot3000 @jeffclune @kenneth0stanley. <LINK> <LINK>""]",19,01,268 |
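The first phase of the method above is "remember previously visited states, return to one, then explore from it", organized around an archive of state cells. A skeletal, runnable illustration on a toy restorable environment; the cell function, uniform selection rule, and the omission of the robustification phase are all simplifications.

```python
import random

class ToyChain:
    """Tiny deterministic environment with a restorable state, standing in for
    an emulator: move left/right on a line, with reward only at position 10."""
    def __init__(self):
        self.pos = 0
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):                        # action: 0 = left, 1 = right
        self.pos += 1 if action == 1 else -1
        done = self.pos == 10
        return self.pos, (1.0 if done else 0.0), done, {}
    def get_state(self):
        return self.pos
    def set_state(self, state):
        self.pos = state

def go_explore(env, cell_fn, iterations=2000, explore_steps=10):
    """Archive maps cell -> (best score so far, saved simulator state).
    Each iteration: pick an archived cell, restore it (return without
    exploration), then take random actions and archive new or better cells."""
    obs = env.reset()
    archive = {cell_fn(obs): (0.0, env.get_state())}
    for _ in range(iterations):
        cell = random.choice(list(archive))        # (the paper weights this choice)
        score, state = archive[cell]
        env.set_state(state)
        for _ in range(explore_steps):
            obs, reward, done, _ = env.step(random.choice([0, 1]))
            score += reward
            c = cell_fn(obs)
            if c not in archive or score > archive[c][0]:
                archive[c] = (score, env.get_state())
            if done:
                break
    return archive

random.seed(0)
archive = go_explore(ToyChain(), cell_fn=lambda obs: obs)
print("cells archived:", len(archive), "| rightmost cell reached:", max(archive))
```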
317,188,1374872523221782528,1354198150072786946,alewkowycz,"New paper out <LINK>! We explore when learning rate schedules are beneficial. Main points: a) We present ABEL: a schedule which decays the learning rate automatically after the weight norm ""bounces"". It is as good and more robust than tuned schedules. 1/2 <LINK> b) A simple schedule where one decays the learning rate at the end of training is as good as more complex schedules in setups where the weight norm does not ""bounce"", like Transformer architectures without L2 regularization. Check the paper for more details! 2/2 <LINK>",https://arxiv.org/abs/2103.12682,"Complex learning rate schedules have become an integral part of deep learning. We find empirically that common fine-tuned schedules decay the learning rate after the weight norm bounces. This leads to the proposal of ABEL: an automatic scheduler which decays the learning rate by keeping track of the weight norm. ABEL's performance matches that of tuned schedules and is more robust with respect to its parameters. Through extensive experiments in vision, NLP, and RL, we show that if the weight norm does not bounce, we can simplify schedules even further with no loss in performance. In such cases, a complex schedule has similar performance to a constant learning rate with a decay at the end of training. ",How to decay your learning rate,2,"['New paper out <LINK>! We explore when learning rate schedules are beneficial. Main points:\na) We present ABEL: a schedule which decays the learning rate automatically after the weight norm ""bounces"". It is as good and more robust than tuned schedules.\n1/2 <LINK>', 'b) A simple schedule where one decays the learning rate at the end of training is as good as more complex schedules in setups where the weight norm does not ""bounce"", like Transformer architectures without L2 regularization.\n\nCheck the paper for more details!\n2/2 https://t.co/whYXpGxRVC']",21,03,532 |
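The scheduler described above decays the learning rate when the weight norm "bounces", that is, starts growing again after a period of shrinking. A compact sketch of that trigger; the decay factor and the two-step bounce test are illustrative simplifications of the rule in the paper.

```python
class BounceScheduler:
    """Decay the learning rate by `factor` whenever the weight norm bounces:
    it was decreasing on the previous check and is increasing on this one."""
    def __init__(self, lr, factor=0.1):
        self.lr = lr
        self.factor = factor
        self.prev_norm = None
        self.prev_delta = None

    def step(self, weight_norm):
        if self.prev_norm is not None:
            delta = weight_norm - self.prev_norm
            if self.prev_delta is not None and self.prev_delta < 0 < delta:
                self.lr *= self.factor           # norm bounced, so decay
            self.prev_delta = delta
        self.prev_norm = weight_norm
        return self.lr

# Toy trace: the norm falls for a while, then starts rising again.
sched = BounceScheduler(lr=0.1)
for norm in [10.0, 9.0, 8.2, 7.9, 7.8, 7.7, 8.1, 8.6, 9.0]:
    print(round(sched.step(norm), 4), end=" ")
```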
318,60,982310783257231360,720772140,Anvita Gupta,"Check out our new paper! ""Feedback GAN (FBGAN) for DNA: a Novel Feedback-Loop Architecture for Optimizing Protein Functions"" <LINK> GAN architecture to produce genes &optimize for secondary structure & function of their proteins. Some GAN-produced alpha helices: <LINK> @CThurstonERAU HAHA right back at you Courtney!! Incredible job on Goldwater :) @pfau @Miles_Brundage In the past VAEs empirically haven't performed well on high-noise genomic data (<LINK>) and the latent code is often ignored (making interpolation hard). That being said, a full comparison of VAEs to our approach would be interesting.",https://arxiv.org/abs/1804.01694,"Generative Adversarial Networks (GANs) represent an attractive and novel approach to generate realistic data, such as genes, proteins, or drugs, in synthetic biology. Here, we apply GANs to generate synthetic DNA sequences encoding for proteins of variable length. We propose a novel feedback-loop architecture, called Feedback GAN (FBGAN), to optimize the synthetic gene sequences for desired properties using an external function analyzer. The proposed architecture also has the advantage that the analyzer need not be differentiable. We apply the feedback-loop mechanism to two examples: 1) generating synthetic genes coding for antimicrobial peptides, and 2) optimizing synthetic genes for the secondary structure of their resulting peptides. A suite of metrics demonstrate that the GAN generated proteins have desirable biophysical properties. The FBGAN architecture can also be used to optimize GAN-generated datapoints for useful properties in domains beyond genomics. ","Feedback GAN (FBGAN) for DNA: a Novel Feedback-Loop Architecture for Optimizing Protein Functions",3,"['Check out our new paper! ""Feedback GAN (FBGAN) for DNA: a Novel Feedback-Loop Architecture for Optimizing Protein Functions"" <LINK>\n\nGAN architecture to produce genes &optimize for secondary structure & function of their proteins. Some GAN-produced alpha helices: <LINK>', '@CThurstonERAU HAHA right back at you Courtney!! Incredible job on Goldwater :)', ""@pfau @Miles_Brundage In the past VAEs empirically haven't performed well on high-noise genomic data (https://t.co/yahuBwCezr) and the latent code is often ignored (making interpolation hard). That being said, a full comparison of VAEs to our approach would be interesting.""]",18,04,606 |
319,206,1284113565998342146,1097855896212946945,Francesco Di Lauro,"Very interesting! We study heterogeneities in contact structures rather than susceptibility in our most recent preprint <LINK>. We observe similar effects on herd immunity threshold, but we also show a drawback: control can potentially hinder this mechanism. <LINK> How? This is very nicely explained in this brief thread <LINK>. The idea is simple: if during lockdown high interacting individuals are shielded and interact only with close relatives, then the epidemic cannot exploit the contact structure heterogeneity!",https://arxiv.org/abs/2007.06975,"The contact structure of a population plays an important role in transmission of infection. Many ``structured models'' capture aspects of the contact structure through an underlying network or a mixing matrix. An important observation in such models, is that once a fraction $1-1/\mathcal{R}_0$ has been infected, the residual susceptible population can no longer sustain an epidemic. A recent observation of some structured models is that this threshold can be crossed with a smaller fraction of infected individuals, because the disease acts like a targeted vaccine, preferentially immunizing higher-risk individuals who play a greater role in transmission. Therefore, a limited ``first wave'' may leave behind a residual population that cannot support a second wave once interventions are lifted. In this paper, we systematically analyse a number of mean-field models for networks and other structured populations to address issues relevant to the Covid-19 pandemic. In particular, we consider herd-immunity under several scenarios. We confirm that, in networks with high degree heterogeneity, the first wave confers herd-immunity with significantly fewer infections than equivalent models with lower degree heterogeneity. However, if modelling the intervention as a change in the contact network, then this effect might become more subtle. Indeed, modifying the structure can shield highly connected nodes from becoming infected during the first wave and make the second wave more substantial. We confirm this finding by using an age-structured compartmental model parameterised with real data and comparing lockdown periods implemented either as a global scaling of the mixing matrix or age-specific structural changes. We find that results regarding herd immunity levels are strongly dependent on the model, the duration of lockdown and how lockdown is implemented. ","The impact of network properties and mixing on control measures and disease-induced herd immunity in epidemic models: a mean-field model perspective",2,"['Very interesting! We study heterogeneities in contact structures rather than susceptibility in our most recent preprint <LINK>. We observe similar effects on herd immunity threshold, but we also show a drawback: control can potentially hinder this mechanism. <LINK>', 'How? This is very nicely explained in this brief thread https://t.co/sXkWjYANZq.\nThe idea is simple: if during lockdown high interacting individuals are shielded and interact only with close relatives, then the epidemic cannot exploit the contact structure heterogeneity!']",20,07,520 |
320,125,1380093483621421061,898575285121159168,Alexandre Santerne 🇪🇺,"New paper on #HIP41378 reporting the detection from the ground of the 19h-long transit of planet f by @NextGenTransits. Interestingly, the planet exhibits large #TTVs. The next transit will be observed by @NASAHubble this May. 🔭🛰️🪐 <LINK> <LINK> @V_Parmentier @NextGenTransits @NASAHubble Unfortunately, for visibility reasons @ESA_CHEOPS won't be able to observe a transit of #HIP41378 f 😞",https://arxiv.org/abs/2104.03159,"HIP 41378 f is a temperate $9.2\pm0.1 R_{\oplus}$ planet with period of 542.08 days and an extremely low density of $0.09\pm0.02$ g cm$^{-3}$. It transits the bright star HIP 41378 (V=8.93), making it an exciting target for atmospheric characterization including transmission spectroscopy. HIP 41378 was monitored photometrically between the dates of 2019 November 19 and November 28. We detected a transit of HIP 41378 f with NGTS, just the third transit ever detected for this planet, which confirms the orbital period. This is also the first ground-based detection of a transit of HIP 41378 f. Additional ground-based photometry was also obtained and used to constrain the time of the transit. The transit was measured to occur 1.50 hours earlier than predicted. We use an analytic transit timing variation (TTV) model to show the observed TTV can be explained by interactions between HIP 41378 e and HIP 41378 f. Using our TTV model, we predict the epochs of future transits of HIP 41378 f, with derived transit centres of T$_{C,4} = 2459355.087^{+0.031}_{-0.022}$ (May 2021) and T$_{C,5} = 2459897.078^{+0.114}_{-0.060}$ (Nov 2022). ","A transit timing variation observed for the long-period extremely low density exoplanet HIP 41378f",2,"['New paper on #HIP41378 reporting the detection from the ground of the 19h-long transit of planet f by @NextGenTransits. Interestingly, the planet exhibits large #TTVs. The next transit will be observed by @NASAHubble this May. 🔭🛰️🪐\n<LINK> <LINK>', ""@V_Parmentier @NextGenTransits @NASAHubble Unfortunately, for visibility reasons @ESA_CHEOPS won't be able to observe a transit of #HIP41378 f 😞""]",21,04,390 |
321,206,1493503680079106048,776107104180600832,Reza Shokri,"We study the question ""What Does it Mean for a Language Model to Preserve Privacy?"" <LINK> in a great collaboration with wonderful Hannah, Katherine, Fatemeh, and Florian @Hannah_Aught @katherine1ee @limufar @florian_tramer We discuss the mismatch between the 1/3 <LINK> narrow assumptions made by data protection techniques (data sanitization and differential privacy), and the broadness of natural language and of privacy as a social norm. We argue that existing protection methods cannot guarantee a generic and meaningful notion of privacy for 2/3 language models. We conclude that ""data protection is not equivalent to privacy protection for natural language data"", and language models should be trained on text data which was explicitly produced for public use. 3/3",https://arxiv.org/abs/2202.05520,"Natural language reflects our private lives and identities, making its privacy concerns as broad as those of real life. Language models lack the ability to understand the context and sensitivity of text, and tend to memorize phrases present in their training sets. An adversary can exploit this tendency to extract training data. Depending on the nature of the content and the context in which this data was collected, this could violate expectations of privacy. Thus there is a growing interest in techniques for training language models that preserve privacy. In this paper, we discuss the mismatch between the narrow assumptions made by popular data protection techniques (data sanitization and differential privacy), and the broadness of natural language and of privacy as a social norm. We argue that existing protection methods cannot guarantee a generic and meaningful notion of privacy for language models. We conclude that language models should be trained on text data which was explicitly produced for public use. ",What Does it Mean for a Language Model to Preserve Privacy?,3,"['We study the question ""What Does it Mean for a Language Model to Preserve Privacy?"" <LINK> in a great collaboration with wonderful Hannah, Katherine, Fatemeh, and Florian @Hannah_Aught @katherine1ee @limufar @florian_tramer We discuss the mismatch between the 1/3 <LINK>', 'narrow assumptions made by data protection techniques (data sanitization and differential privacy), and the broadness of natural language and of privacy as a social norm. We argue that existing protection methods cannot guarantee a generic and meaningful notion of privacy for 2/3', 'language models. We conclude that ""data protection is not equivalent to privacy protection for natural language\ndata"", and language models should be trained on text data which was explicitly produced for public use. 3/3']",22,02,771 |
322,8,1035566279317483520,185910194,Graham Neubig,"#EMNLP2018 paper on adapting word embeddings to new languages using linguistic features: <LINK> We use morphological and phonemic features, allowing better knowledge sharing across languages w/ different writing systems or rich morphology. Nice results on NER/MT! <LINK> Particular congrats to first author Aditi, and the other co-authors @violet_zct, Lori, David, and Jaime.",https://arxiv.org/abs/1808.09500,"Much work in Natural Language Processing (NLP) has been for resource-rich languages, making generalization to new, less-resourced languages challenging. We present two approaches for improving generalization to low-resourced languages by adapting continuous word representations using linguistically motivated subword units: phonemes, morphemes and graphemes. Our method requires neither parallel corpora nor bilingual dictionaries and provides a significant gain in performance over previous methods relying on these resources. We demonstrate the effectiveness of our approaches on Named Entity Recognition for four languages, namely Uyghur, Turkish, Bengali and Hindi, of which Uyghur and Bengali are low resource languages, and also perform experiments on Machine Translation. Exploiting subwords with transfer learning gives us a boost of +15.2 NER F1 for Uyghur and +9.7 F1 for Bengali. We also show improvements in the monolingual setting where we achieve (avg.) +3 F1 and (avg.) +1.35 BLEU. ","Adapting Word Embeddings to New Languages with Morphological and Phonological Subword Representations",2,"['#EMNLP2018 paper on adapting word embeddings to new languages using linguistic features: <LINK>\nWe use morphological and phonemic features, allowing better knowledge sharing across languages w/ different writing systems or rich morphology. Nice results on NER/MT! <LINK>', 'Particular congrats to first author Aditi, and the other co-authors @violet_zct, Lori, David, and Jaime.']",18,08,375 |
323,111,1380459343586455553,1020053469787566082,Alan Karthikesalingam,"Our new paper tackles an important safety hurdle for ML from code to clinic- “how does your dermatology classifier know what it doesn’t know?” In clinical practice patients may present with conditions unseen by ML systems in training, causing errors <LINK> 1/2 Detecting these previously-unseen conditions is challenging but enables safer management, eg deferral to clinician experts. Joint work @DeepMind @GoogleHealth @GoogleAI with @abzz4ssj @jimwinkens @vivnat @pwnic @balajiln @JanFreyberg @TaylanCemgilML @_basilM @AziziShekoofeh et al",http://arxiv.org/abs/2104.03829,"We develop and rigorously evaluate a deep learning based system that can accurately classify skin conditions while detecting rare conditions for which there is not enough data available for training a confident classifier. We frame this task as an out-of-distribution (OOD) detection problem. Our novel approach, hierarchical outlier detection (HOD) assigns multiple abstention classes for each training outlier class and jointly performs a coarse classification of inliers vs. outliers, along with fine-grained classification of the individual classes. We demonstrate the effectiveness of the HOD loss in conjunction with modern representation learning approaches (BiT, SimCLR, MICLe) and explore different ensembling strategies for further improving the results. We perform an extensive subgroup analysis over conditions of varying risk levels and different skin types to investigate how the OOD detection performance changes over each subgroup and demonstrate the gains of our framework in comparison to baselines. Finally, we introduce a cost metric to approximate downstream clinical impact. We use this cost metric to compare the proposed method against a baseline system, thereby making a stronger case for the overall system effectiveness in a real-world deployment scenario. ","Does Your Dermatology Classifier Know What It Doesn't Know? Detecting the Long-Tail of Unseen Conditions",2,"['Our new paper tackles an important safety hurdle for ML from code to clinic- “how does your dermatology classifier know what it doesn’t know?” In clinical practice patients may present with conditions unseen by ML systems in training, causing errors <LINK> 1/2', 'Detecting these previously-unseen conditions is challenging but enables safer management, eg deferral to clinician experts. Joint work @DeepMind @GoogleHealth @GoogleAI with @abzz4ssj @jimwinkens @vivnat @pwnic @balajiln @JanFreyberg @TaylanCemgilML @_basilM @AziziShekoofeh et al']",21,04,541 |
324,60,1204885601583009797,3172667405,Dr. Vicki Henderson,"As of yesterday, the new BECCAL paper is on the arXiv: <LINK> Check it out for exciting spacey lasers and cold atomy things. I hear that the laser system section is a particularly good read😉#coldatoms #WomenInSTEM #WomenInPhyics #PostdocLife I've got to say, getting a paper out, dealing with #GeneralElection2019 stress, and only having one full week at home since October, is not my favouritest of combos.... #PostdocLife #GetTheToriesOut",https://arxiv.org/abs/1912.04849,"Microgravity eases several constraints limiting experiments with ultracold and condensed atoms on ground. It enables extended times of flight without suspension and eliminates the gravitational sag for trapped atoms. These advantages motivated numerous initiatives to adapt and operate experimental setups on microgravity platforms. We describe the design of the payload, motivations for design choices, and capabilities of the Bose-Einstein Condensate and Cold Atom Laboratory (BECCAL), a NASA-DLR collaboration. BECCAL builds on the heritage of previous devices operated in microgravity, features rubidium and potassium, multiple options for magnetic and optical trapping, different methods for coherent manipulation, and will offer new perspectives for experiments on quantum optics, atom optics, and atom interferometry in the unique microgravity environment on board the International Space Station. ",The Bose-Einstein Condensate and Cold Atom Laboratory,2,"['As of yesterday, the new BECCAL paper is on the arXiv: <LINK> Check it out for exciting spacey lasers and cold atomy things. I hear that the laser system section is a particularly good read😉#coldatoms #WomenInSTEM #WomenInPhyics #PostdocLife', ""I've got to say, getting a paper out, dealing with #GeneralElection2019 stress, and only having one full week at home since October, is not my favouritest of combos.... #PostdocLife #GetTheToriesOut""]",19,12,440 |
325,90,1480891707071905798,19606850,David Manheim,"Why do some artificial intelligence safety researchers view ""Highly Reliable Agent Designs"" or ""Agent Foundations"" research as useful, or even critical? (And why do others disagree?) A new paper by Issa Rice, and myself, explains: <LINK> Thread (1/6) There's been a lot of discussion about risks from advanced AI, and several ""agendas"" for achieving safety. (See, for example, <LINK> ) One, championed by researchers at @MIRIBerkeley, is ""Highly Reliable Agent Designs"" (HRAD) (2/6) <LINK> @MIRIBerkeley This is only one of a number of approaches, but it is fundamentally different from most proposals in that this work attempts to understand the problem more clearly, and formally / mathematically. (3/6) @MIRIBerkeley Because it's different, many people seem unsure what it is trying to do. So in the paper we extend the discussion from Issa's original post - <LINK> - and lay out four separate cases which have been presented for the value of this work. (4/6) <LINK> @MIRIBerkeley The first two cases for the work, incidental utility and deconfusion, seem widely accepted, but are insufficient on their own for allowing AI safety. The latter two are more ambitious, and few people fully accept the arguments, but they would provide a provably safe AI. (5/6) @MIRIBerkeley In short, if we think that AI safety is a potentially important issue, and if we're less than certain that other approaches will be sufficient, there are a variety of ways that this work is critical, as the diagram shows. Now, go read the paper ;) <LINK> (6/6) <LINK> @MIRIBerkeley PS. Thanks to the Long Term Futures Fund for funding this work, and to the Modelling Transformative AI team, <LINK> (including @daniel_eth,) as well as those who provided feedback on the paper, including @robbensinger, @RohinMShah, and @romanyam. <LINK> @MIRIBerkeley @daniel_eth @robbensinger @rohinmshah @romanyam Also, I should have tagged the first author, who did most of the work on the paper: @riceissa.",https://arxiv.org/abs/2201.02950,"Several different approaches exist for ensuring the safety of future Transformative Artificial Intelligence (TAI) or Artificial Superintelligence (ASI) systems, and proponents of different approaches have made different and debated claims about the importance or usefulness of their work in the near term, and for future systems. Highly Reliable Agent Designs (HRAD) is one of the most controversial and ambitious approaches, championed by the Machine Intelligence Research Institute, among others, and various arguments have been made about whether and how it reduces risks from future AI systems. In order to reduce confusion in the debate about AI safety, here we build on a previous discussion by Rice which collects and presents four central arguments which are used to justify HRAD as a path towards safety of AI systems. We have titled the arguments (1) incidental utility,(2) deconfusion, (3) precise specification, and (4) prediction. Each of these makes different, partly conflicting claims about how future AI systems can be risky. We have explained the assumptions and claims based on a review of published and informal literature, along with consultation with experts who have stated positions on the topic. Finally, we have briefly outlined arguments against each approach and against the agenda overall. ","Arguments about Highly Reliable Agent Designs as a Useful Path to Artificial Intelligence Safety",8,"['Why do some artificial intelligence safety researchers view ""Highly Reliable Agent Designs"" or ""Agent Foundations"" research as useful, or even critical? (And why do others disagree?)\n\nA new paper by Issa Rice, and myself, explains: <LINK>\n\nThread (1/6)', 'There\'s been a lot of discussion about risks from advanced AI, and several ""agendas"" for achieving safety. (See, for example, https://t.co/9RglQVNb3c ) \n\nOne, championed by researchers at @MIRIBerkeley, is ""Highly Reliable Agent Designs"" (HRAD)\n(2/6) https://t.co/imDi8Hdsta', '@MIRIBerkeley This is only one of a number of approaches, but it is fundamentally different from most proposals in that this work attempts to understand the problem more clearly, and formally / mathematically.\n(3/6)', ""@MIRIBerkeley Because it's different, many people seem unsure what it is trying to do. \nSo in the paper we extend the discussion from Issa's original post - https://t.co/wqBqMTNdnP - and lay out four separate cases which have been presented for the value of this work.\n(4/6) https://t.co/Wb0WSBF5v2"", '@MIRIBerkeley The first two cases for the work, incidental utility and deconfusion, seem widely accepted, but are insufficient on their own for allowing AI safety. The latter two are more ambitious, and few people fully accept the arguments, but they would provide a provably safe AI.\n(5/6)', ""@MIRIBerkeley In short, if we think that AI safety is a potentially important issue, and if we're less than certain that other approaches will be sufficient, there are a variety of ways that this work is critical, as the diagram shows.\n\nNow, go read the paper ;) https://t.co/15suw7F6JZ\n(6/6) https://t.co/KNc1tsK75F"", '@MIRIBerkeley PS. Thanks to the Long Term Futures Fund for funding this work, and to the Modelling Transformative AI team, https://t.co/CmN2pE2iWY (including @daniel_eth,) as well as those who provided feedback on the paper, including @robbensinger, @RohinMShah, and @romanyam. https://t.co/Zm139fIjSY', '@MIRIBerkeley @daniel_eth @robbensinger @rohinmshah @romanyam Also, I should have tagged the first author, who did most of the work on the paper: @riceissa.']",22,01,1968 |
326,201,1415605121187188740,1321069260945444864,Frederik Träuble,"What's the role of pre-trained representations for RL and how important is this for OOD generalization? We attempted to find some answers on that! “Representation Learning for Out-of-Distribution Generalization in Reinforcement Learning” <LINK> [1/5] <LINK> Learning data representations that are useful for various downstream tasks is a cornerstone of artificial intelligence, but how useful are they in challenging RL downstream control tasks such as reaching or pushing objects? [2/5] We pre-trained 200+ representations from simulated camera observations covering a wide range of properties, then trained multiple downstream policies on each of these representations and systematically evaluated their performance across many different OOD scenarios. [3/5] We evaluated generalization to OOD object properties and even to the real world! One insight: A policies’ OOD performance is closely linked with a related generalization score of its representation backbone. If we got you excited, our paper awaits with many more results! [4/5] <LINK> Thanks to all the amazing collaborators on that project! @andrea_dittadi (equal contrib.), @manuelwuethrich, Felix Widmaier, @pegehler, @OleWinther1, @FrancescoLocat8, @OlivierBachem, @bschoelkopf, Stefan Bauer @MPI_IS @DTUtweet @AmazonScience @GoogleAI [5/5]",http://arxiv.org/abs/2107.05686,"Building sample-efficient agents that generalize out-of-distribution (OOD) in real-world settings remains a fundamental unsolved problem on the path towards achieving higher-level cognition. One particularly promising approach is to begin with low-dimensional, pretrained representations of our world, which should facilitate efficient downstream learning and generalization. By training 240 representations and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup, we evaluate to what extent different properties of pretrained VAE-based representations affect the OOD generalization of downstream agents. We observe that many agents are surprisingly robust to realistic distribution shifts, including the challenging sim-to-real case. In addition, we find that the generalization performance of a simple downstream proxy task reliably predicts the generalization performance of our RL agents under a wide range of OOD settings. Such proxy tasks can thus be used to select pretrained representations that will lead to agents that generalize. ","The Role of Pretrained Representations for the OOD Generalization of Reinforcement Learning Agents",5,"[""What's the role of pre-trained representations for RL and how important is this for OOD generalization? We attempted to find some answers on that!\n\n“Representation Learning for Out-of-Distribution Generalization in Reinforcement Learning”\n<LINK>\n\n[1/5] <LINK>"", 'Learning data representations that are useful for various downstream tasks is a cornerstone of artificial intelligence, but how useful are they in challenging RL downstream control tasks such as reaching or pushing objects? [2/5]', 'We pre-trained 200+ representations from simulated camera observations covering a wide range of properties, then trained multiple downstream policies on each of these representations and systematically evaluated their performance across many different OOD scenarios. [3/5]', 'We evaluated generalization to OOD object properties and even to the real world! One insight: A policies’ OOD performance is closely linked with a related generalization score of its representation backbone. If we got you excited, our paper awaits with many more results! [4/5] https://t.co/Y0PKCRNWUK', 'Thanks to all the amazing collaborators on that project!\n@andrea_dittadi (equal contrib.), @manuelwuethrich, Felix Widmaier, @pegehler, @OleWinther1, @FrancescoLocat8, @OlivierBachem, @bschoelkopf, Stefan Bauer \n\n@MPI_IS @DTUtweet @AmazonScience @GoogleAI\n[5/5]']",21,07,1306 |