text (stringlengths 64 to 6.93k) |
---|
Hubble Space Telescope",1,"['New paper on arxiv today! We find NIR emission at the location of 6 Galactic magnetars for the first time, and show that some known NIR counterparts are highly variable. We also discuss the nature of the NIR emission - more to come on that, so stay tuned! <LINK>']",22,03,262 |
129,110,1357525242902577152,2377407248,Daniel Whiteson,"New paper! “Learning to Isolate Muons” <LINK> A short study that tells us that there is still a LOT to learn about muons! Telling prompt muons (from heavy bosons) apart from those embedded in jets should be easy. Jets are big and messy and leave a lot of energy everywhere. <LINK> Forever, people have just calculated “isolation” which measures how much energy there is around the muon. We had two Q: can deep learning do better, and if so, can we interpret what it’s done? Thread on interpretable deep learning: <LINK> It turns out that deep learning can do a LOT better, even if you use multiple isolation cones! I was pretty surprised. Muons are simple, and the information is all radial, right? Wrong. <LINK> So what is the network doing? Is there non-radially symmetric information? We found some observables that partially close the gap, but not all of it. <LINK> So what ELSE is the network doing? We still don't know! Something that needs a new class of observables, perhaps ones that include the location of the muon to capture relative angular information. @dangaristo Yeah, it's the same problem actually, but e+e- colliders don't produce jets as often, so it's not as important.",https://arxiv.org/abs/2102.02278,"Distinguishing between prompt muons produced in heavy boson decay and muons produced in association with heavy-flavor jet production is an important task in analysis of collider physics data. We explore whether there is information available in calorimeter deposits that is not captured by the standard approach of isolation cones. We find that convolutional networks and particle-flow networks accessing the calorimeter cells surpass the performance of isolation cones, suggesting that the radial energy distribution and the angular structure of the calorimeter deposits surrounding the muon contain unused discrimination power. We assemble a small set of high-level observables which summarize the calorimeter information and close the performance gap with networks which analyze the calorimeter cells directly. These observables are theoretically well-defined and can be studied with collider data. ",Learning to Isolate Muons,7,"['New paper!\n\n“Learning to Isolate Muons”\n<LINK>\n\nA short study that tells us that there is still a LOT to learn about muons!', 'Telling prompt muons (from heavy bosons) apart from those embedded in jets should be easy. Jets are big and messy and leave a lot of energy everywhere. https://t.co/bdz7FIOKMr', 'Forever, people have just calculated “isolation” which measures how much energy there is around the muon.\n\nWe had two Q: can deep learning do better, and if so, can we interpret what it’s done? \n\nThread on interpretable deep learning: https://t.co/nWmR3jPopH', 'It turns out that deep learning can do a LOT better, even if you use multiple isolation cones! \n\nI was pretty surprised. Muons are simple, and the information is all radial, right? Wrong. https://t.co/hOUN7OXeK7', 'So what is the network doing? Is there non-radially symmetric information? \n\nWe found some observables that partially close the gap, but not all of it. https://t.co/Qlf0FmcHLC', ""So what ELSE is the network doing? \n\nWe still don't know!\n\nSomething that needs a new class of observables, perhaps ones that include the location of the muon to capture relative angular information."", ""@dangaristo Yeah, it's the same problem actually, but e+e- colliders don't produce jets as often, so it's not as important.""]",21,02,1194 |
130,18,1209407413381685253,844700194197397506,Alex Kendall,Happy to share our final paper of 2019 proposing a spatial-temporal video-pixel embedding loss which sets a new state of the art for video instance segmentation. Here's the paper <LINK> and demo video <LINK> This work was led by Anthony Hu and extends Bert De Brabandere et al.'s method to video. Our method is online and runs in real-time and explicitly uses motion and geometric cues for video instance segmentation.,http://arxiv.org/abs/1912.08969,"We present a novel embedding approach for video instance segmentation. Our method learns a spatio-temporal embedding integrating cues from appearance, motion, and geometry; a 3D causal convolutional network models motion, and a monocular self-supervised depth loss models geometry. In this embedding space, video-pixels of the same instance are clustered together while being separated from other instances, to naturally track instances over time without any complex post-processing. Our network runs in real-time as our architecture is entirely causal - we do not incorporate information from future frames, contrary to previous methods. We show that our model can accurately track and segment instances, even with occlusions and missed detections, advancing the state-of-the-art on the KITTI Multi-Object and Tracking Dataset. ",Learning a Spatio-Temporal Embedding for Video Instance Segmentation,2,"[""Happy to share our final paper of 2019 proposing a spatial-temporal video-pixel embedding loss which sets a new state of the art for video instance segmentation. Here's the paper\n<LINK>\nand demo video <LINK>"", ""This work was led by Anthony Hu and extends Bert De Brabandere et al.'s method to video. Our method is online and runs in real-time and explicitly uses motion and geometric cues for video instance segmentation.""]",19,12,418 |
131,137,1402704471763922947,502318662,Emmanuel Bengio,"Ever wanted to generate diverse samples of discrete data based on a reward function? Our new method, GFlowNet, based on flow networks & a TD-like objective, gets great results on a molecule generation domain 💊 paper:<LINK> <LINK> blog: <LINK> code: <LINK> and this is work with these awesome people: @JainMoksh, Maksym Korablyov, Doina Precup & Yoshua Bengio",http://arxiv.org/abs/2106.04399,"This paper is about the problem of learning a stochastic policy for generating an object (like a molecular graph) from a sequence of actions, such that the probability of generating an object is proportional to a given positive reward for that object. Whereas standard return maximization tends to converge to a single return-maximizing sequence, there are cases where we would like to sample a diverse set of high-return solutions. These arise, for example, in black-box function optimization when few rounds are possible, each with large batches of queries, where the batches should be diverse, e.g., in the design of new molecules. One can also see this as a problem of approximately converting an energy function to a generative distribution. While MCMC methods can achieve that, they are expensive and generally only perform local exploration. Instead, training a generative policy amortizes the cost of search during training and yields to fast generation. Using insights from Temporal Difference learning, we propose GFlowNet, based on a view of the generative process as a flow network, making it possible to handle the tricky case where different trajectories can yield the same final state, e.g., there are many ways to sequentially add atoms to generate some molecular graph. We cast the set of trajectories as a flow and convert the flow consistency equations into a learning objective, akin to the casting of the Bellman equations into Temporal Difference methods. We prove that any global minimum of the proposed objectives yields a policy which samples from the desired distribution, and demonstrate the improved performance and diversity of GFlowNet on a simple domain where there are many modes to the reward function, and on a molecule synthesis task. ","Flow Network based Generative Models for Non-Iterative Diverse Candidate |
Generation",2,"['Ever wanted to generate diverse samples of discrete data based on a reward function? Our new method, GFlowNet, based on flow networks & a TD-like objective, gets great results on a molecule generation domain 💊\npaper:<LINK> <LINK>', 'blog: https://t.co/gnT6I0ZZQF\ncode: https://t.co/UwSIjpXqs7\nand this is work with these awesome people: @JainMoksh, Maksym Korablyov, Doina Precup & Yoshua Bengio']",21,06,358 |
132,23,1033894762858733569,140770306,Masafumi Oizumi,"Our new paper ""Fisher Information and Natural Gradient Learning of Random Deep Networks"" is available on arXiv. We showed that in random deep networks, the Fisher information matrix is unit-wise block diagonal, which speeds up the natural gradient. <LINK>",https://arxiv.org/abs/1808.07172,"A deep neural network is a hierarchical nonlinear model transforming input signals to output signals. Its input-output relation is considered to be stochastic, being described for a given input by a parameterized conditional probability distribution of outputs. The space of parameters consisting of weights and biases is a Riemannian manifold, where the metric is defined by the Fisher information matrix. The natural gradient method uses the steepest descent direction in a Riemannian manifold, so it is effective in learning, avoiding plateaus. It requires inversion of the Fisher information matrix, however, which is practically impossible when the matrix has a huge number of dimensions. Many methods for approximating the natural gradient have therefore been introduced. The present paper uses statistical neurodynamical method to reveal the properties of the Fisher information matrix in a net of random connections under the mean field approximation. We prove that the Fisher information matrix is unit-wise block diagonal supplemented by small order terms of off-block-diagonal elements, which provides a justification for the quasi-diagonal natural gradient method by Y. Ollivier. A unitwise block-diagonal Fisher metrix reduces to the tensor product of the Fisher information matrices of single units. We further prove that the Fisher information matrix of a single unit has a simple reduced form, a sum of a diagonal matrix and a rank 2 matrix of weight-bias correlations. We obtain the inverse of Fisher information explicitly. We then have an explicit form of the natural gradient, without relying on the numerical matrix inversion, which drastically speeds up stochastic gradient learning. ",Fisher Information and Natural Gradient Learning of Random Deep Networks,1,"['Our new paper ""Fisher Information and Natural Gradient Learning of Random Deep Networks"" is available on arXiv. We showed that in random deep networks, the Fisher information matrix is unit-wise block diagonal, which speeds up the natural gradient. <LINK>']",18,08,255 |
133,163,1280078716706721793,33994542,Frank Schlosser,"Our new paper is online! 🎉 We looked at mobility in Germany during the Covid-19 lockdown and found 1) structural changes in mobility patterns, which 2) strongly affect how infectious diseases spread! Here's what we found: (1/8) <LINK> Mobility was reduced substantially due to the lockdown, up to around -40% below normal. This is not really suprising. However, if we look closer at *how* mobility changed ... (2/8) <LINK> ... we see that it is mostly long-distance trips that are reduced. This is important, because it changes the *structure* of the mobility network. (3/8) <LINK> During lockdown (right frame), the mobility network is more local, and more clustered. This leads to a reduction of the so-called ""small world"" effect: During lockdown, it is harder to get from one place to a distant location - the world is not ""small"" anymore. (4/8) <LINK> We can see this for example in the ""shortest path length"" between locations: Generally, if two places are farther away, the shortest path connecting them is longer. But during lockdown, the paths are much longer and they keep growing with distance! (5/8) <LINK> In technical terms: The shortest path length L and the clustering coefficient C peak during the lockdown, indicating the strong reduction of the small world effect. (6/8) <LINK> What this all means for infectious diseases is: Epidemics take longer to spread, and take longer to reach far away places. Effectively, the changes in mobility ""flatten the curve"". (7/8) <LINK> That's the gist, more details in the paper. (8/8) Thanks to all authors involved for the months of hard work! 🙏 @BenFMaier @davhin11 @AdrianZachariae @DirkBrockmann (And phew, in personal news, my first paper as a first author. What a journey! 🎉)",https://arxiv.org/abs/2007.01583,"In the wake of the COVID-19 pandemic many countries implemented containment measures to reduce disease transmission. Studies using digital data sources show that the mobility of individuals was effectively reduced in multiple countries. However, it remains unclear whether these reductions caused deeper structural changes in mobility networks, and how such changes may affect dynamic processes on the network. Here we use movement data of mobile phone users to show that mobility in Germany has not only been reduced considerably: Lockdown measures caused substantial and long-lasting structural changes in the mobility network. We find that long-distance travel was reduced disproportionately strongly. The trimming of long-range network connectivity leads to a more local, clustered network and a moderation of the ""small-world"" effect. We demonstrate that these structural changes have a considerable effect on epidemic spreading processes by ""flattening"" the epidemic curve and delaying the spread to geographically distant regions. ","COVID-19 lockdown induces disease-mitigating structural changes in |
mobility networks",8,"[""Our new paper is online! 🎉\n\nWe looked at mobility in Germany during the Covid-19 lockdown and found 1) structural changes in mobility patterns, which 2) strongly affect how infectious diseases spread!\n\nHere's what we found: (1/8)\n\n<LINK>"", 'Mobility was reduced substantially due to the lockdown, up to around -40% below normal.\n\nThis is not really suprising. However, if we look closer at *how* mobility changed ... (2/8) https://t.co/hjMBbztrka', '... we see that it is mostly long-distance trips that are reduced.\n\nThis is important, because it changes the *structure* of the mobility network. (3/8) https://t.co/yixEttAMHI', 'During lockdown (right frame), the mobility network is more local, and more clustered.\n\nThis leads to a reduction of the so-called ""small world"" effect: During lockdown, it is harder to get from one place to a distant location - the world is not ""small"" anymore. (4/8) https://t.co/4W3l0EI4t8', 'We can see this for example in the ""shortest path length"" between locations: \n\nGenerally, if two places are farther away, the shortest path connecting them is longer. But during lockdown, the paths are much longer and they keep growing with distance! (5/8) https://t.co/Nw38MHPcyW', 'In technical terms: The shortest path length L and the clustering coefficient C peak during the lockdown, indicating the strong reduction of the small world effect. (6/8) https://t.co/aDV5gEgwsQ', 'What this all means for infectious diseases is:\n\nEpidemics take longer to spread, and take longer to reach far away places. Effectively, the changes in mobility ""flatten the curve"". (7/8) https://t.co/aSve8YwXE2', ""That's the gist, more details in the paper. (8/8)\n\nThanks to all authors involved for the months of hard work! 🙏 @BenFMaier @davhin11 @AdrianZachariae @DirkBrockmann \n\n(And phew, in personal news, my first paper as a first author. What a journey! 🎉)""]",20,07,1739 |
134,83,1516091529193934848,1250628810,Jessy Lin,"How can agents infer what people want from what they say? In our new paper at #acl2022nlp w/ @dan_fried, Dan Klein, and @ancadianadragan, we learn preferences from language by reasoning about how people communicate in context. Paper: <LINK> [1/n] <LINK> @dan_fried @ancadianadragan We’d like AI agents that not only follow our instructions (“book this flight”), but learn to generalize to what to do in new contexts (know what flights I prefer from our past interactions and book on my behalf) — i.e., learn *rewards* from language. [2/n] @dan_fried @ancadianadragan The challenge is that language only reveals partial, context-dependent information about our goals and preferences (when I tell a flight booking agent I want “the jetblue flight,” I don’t mean I always want a jetblue flight — just in this particular case!). [3/n] <LINK> @dan_fried @ancadianadragan On the other hand, we have a lot of techniques in inverse reinforcement learning to go from actions -> underlying rewards, but these methods will miss the fact that language naturally communicates *why* people want those actions. [4/n] @dan_fried @ancadianadragan To study this, we collect a dataset of natural language in a new task, FlightPref, where one player (the “assistant”) has to infer the preferences of the “user"" while they book flights together. Lots of rich, interesting phenomena in the data (to be released!): [5/n] <LINK> @dan_fried @ancadianadragan We build a pragmatic model that reasons that language communicates what agents should do, and the *way* people describe what to do reveal the features they care about. Both enable agents to make more accurate inferences. [6/n] <LINK> @dan_fried @ancadianadragan There’s many directions to take FlightPref / reward learning from language further: building agents that learn to ask questions based on uncertainty, studying adaptation to different humans, and seeing how these ideas extend to inferring real preferences in the wild! [7/n] @dan_fried @ancadianadragan More broadly, it’s an exciting time to be working on language + action/RL! A lot of work has been focused on e.g. language for generalization, but our work hints at how language humans use to *communicate* present distinct challenges for grounded agents (pragmatics, etc.!). [8/8]",https://arxiv.org/abs/2204.02515,"In classic instruction following, language like ""I'd like the JetBlue flight"" maps to actions (e.g., selecting that flight). However, language also conveys information about a user's underlying reward function (e.g., a general preference for JetBlue), which can allow a model to carry out desirable actions in new contexts. We present a model that infers rewards from language pragmatically: reasoning about how speakers choose utterances not only to elicit desired actions, but also to reveal information about their preferences. On a new interactive flight-booking task with natural language, our model more accurately infers rewards and predicts optimal actions in unseen environments, in comparison to past work that first maps language to actions (instruction following) and then maps actions to rewards (inverse reinforcement learning). 
",Inferring Rewards from Language in Context,8,"['How can agents infer what people want from what they say?\n\nIn our new paper at #acl2022nlp w/ @dan_fried, Dan Klein, and @ancadianadragan, we learn preferences from language by reasoning about how people communicate in context.\n\nPaper: <LINK>\n[1/n] <LINK>', '@dan_fried @ancadianadragan We’d like AI agents that not only follow our instructions (“book this flight”), but learn to generalize to what to do in new contexts (know what flights I prefer from our past interactions and book on my behalf) — i.e., learn *rewards* from language. [2/n]', '@dan_fried @ancadianadragan The challenge is that language only reveals partial, context-dependent information about our goals and preferences (when I tell a flight booking agent I want “the jetblue flight,” I don’t mean I always want a jetblue flight — just in this particular case!). [3/n] https://t.co/6n3Lg4X4CR', '@dan_fried @ancadianadragan On the other hand, we have a lot of techniques in inverse reinforcement learning to go from actions -> underlying rewards, but these methods will miss the fact that language naturally communicates *why* people want those actions. [4/n]', '@dan_fried @ancadianadragan To study this, we collect a dataset of natural language in a new task, FlightPref, where one player (the “assistant”) has to infer the preferences of the “user"" while they book flights together. \n\nLots of rich, interesting phenomena in the data (to be released!): [5/n] https://t.co/iknlAjY3Tx', '@dan_fried @ancadianadragan We build a pragmatic model that reasons that language communicates what agents should do, and the *way* people describe what to do reveal the features they care about. Both enable agents to make more accurate inferences. [6/n] https://t.co/p3xOJlS8KS', '@dan_fried @ancadianadragan There’s many directions to take FlightPref / reward learning from language further: building agents that learn to ask questions based on uncertainty, studying adaptation to different humans, and seeing how these ideas extend to inferring real preferences in the wild! [7/n]', '@dan_fried @ancadianadragan More broadly, it’s an exciting time to be working on language + action/RL! A lot of work has been focused on e.g. language for generalization, but our work hints at how language humans use to *communicate* present distinct challenges for grounded agents (pragmatics, etc.!). [8/8]']",22,04,2281 |
135,164,1449080001932894208,1406456816410660867,Jana Doppa,"New JAIR @JAIR_Editor paper on output space entropy search framework for solving a variety of multi-objective optimization problems with expensive black-box functions (w/ @syrineblk and @deshwal_aryan) Paper: <LINK> Code: <LINK> 🧵👇 <LINK> These problems arise in many science and engineering applications such as hardware design to trade-off power and performance, materials design, and Auto ML tasks (e.g., hyper-parameter tuning). <LINK> The key challenge is to select the sequence of experiments (inputs for function evaluation) by trading-off exploration and exploitation to find the Optimal pareto set (input space) by minimizing the overall cost of experiments. <LINK> For the single-fidelity setting (expensive and accurate function evaluations), we select the experiment that will maximize the information gain about the optimal Pareto front (output space). Advantages: improved anytime accuracy of Pareto set and computational efficiency. <LINK> For the constrained setting (invalid designs can be identified only by doing expensive experiments), MESMOC algorithm instantiate the same principle to approximate the constrained Pareto set. Code: <LINK> <LINK> For the discrete multi-fidelity setting (evaluations that vary in accuracy and resource cost), MF-OSEMO selects the experiment that maximizes the information gain per unit cost about the optimal Pareto front. Code: <LINK> <LINK> For the continuous-fidelity setting, where the number of function approximations are huge, iMOCA instantiates this principle over the joint input and fidelity-vector space. Code: <LINK> <LINK> These algorithms allow us to find high-quality Pareto solutions with lower cost for experiments. <LINK> This work builds on excellent work on max-value entropy search for single-objective BO by @ziwphd and @StefanieJegelka. Thanks to the @JAIR_Editor reviewers and associate editor for their constructive feedback in improving the paper. Work funded by @NSF. /End Tagging @SigOpt",https://arxiv.org/abs/2110.06980,"We consider the problem of black-box multi-objective optimization (MOO) using expensive function evaluations (also referred to as experiments), where the goal is to approximate the true Pareto set of solutions by minimizing the total resource cost of experiments. For example, in hardware design optimization, we need to find the designs that trade-off performance, energy, and area overhead using expensive computational simulations. The key challenge is to select the sequence of experiments to uncover high-quality solutions using minimal resources. In this paper, we propose a general framework for solving MOO problems based on the principle of output space entropy (OSE) search: select the experiment that maximizes the information gained per unit resource cost about the true Pareto front. We appropriately instantiate the principle of OSE search to derive efficient algorithms for the following four MOO problem settings: 1) The most basic em single-fidelity setting, where experiments are expensive and accurate; 2) Handling em black-box constraints} which cannot be evaluated without performing experiments; 3) The discrete multi-fidelity setting, where experiments can vary in the amount of resources consumed and their evaluation accuracy; and 4) The em continuous-fidelity setting, where continuous function approximations result in a huge space of experiments. 
Experiments on diverse synthetic and real-world benchmarks show that our OSE search based algorithms improve over state-of-the-art methods in terms of both computational-efficiency and accuracy of MOO solutions. ","Output Space Entropy Search Framework for Multi-Objective Bayesian |
Optimization",10,"['New JAIR @JAIR_Editor paper on output space entropy search framework for solving a variety of multi-objective optimization problems with expensive black-box functions (w/ @syrineblk and @deshwal_aryan) \n\nPaper: <LINK>\nCode: <LINK>\n\n🧵👇 <LINK>', 'These problems arise in many science and engineering applications such as hardware design to trade-off power and performance, materials design, and Auto ML tasks (e.g., hyper-parameter tuning). https://t.co/ahrfQe2Nyh', 'The key challenge is to select the sequence of experiments (inputs for function evaluation) by trading-off exploration and exploitation to find the Optimal pareto set (input space) by minimizing the overall cost of experiments. https://t.co/3YaUKNjfK9', 'For the single-fidelity setting (expensive and accurate function evaluations), we select the experiment that will maximize the information gain about the optimal Pareto front (output space). \n\nAdvantages: improved anytime accuracy of Pareto set and computational efficiency. https://t.co/OJSS4eS3Zt', 'For the constrained setting (invalid designs can be identified only by doing expensive experiments), MESMOC algorithm instantiate the same principle to approximate the constrained Pareto set.\n\nCode: https://t.co/w5bz8zbbG8 https://t.co/J7wVznr9wQ', 'For the discrete multi-fidelity setting (evaluations that vary in accuracy and resource cost), MF-OSEMO selects the experiment that maximizes the information gain per unit cost about the optimal Pareto front.\n\nCode: https://t.co/VbxCgjY8XQ https://t.co/bqPwDq7ZTO', 'For the continuous-fidelity setting, where the number of function approximations are huge, iMOCA instantiates this principle over the joint input and fidelity-vector space.\n\nCode: https://t.co/WuumE3jaqQ https://t.co/3DDSqHY9RC', 'These algorithms allow us to find high-quality Pareto solutions with lower cost for experiments. https://t.co/d4UPeTWiNy', 'This work builds on excellent work on max-value entropy search for single-objective BO by @ziwphd and @StefanieJegelka.\n\nThanks to the @JAIR_Editor reviewers and associate editor for their constructive feedback in improving the paper.\n\nWork funded by @NSF.\n\n/End', 'Tagging @SigOpt']",21,10,1970 |
136,148,1359301186734620675,2162872302,Mingxing Tan,"Nystromformer: a new linear self-attention. It turns out a simple Nyström method is quite effective in approximating the full attention, outperforming reformer/linformer/performer by +3% accuracy on LRA. @YoungXiong1 Paper: <LINK> Code: <LINK> <LINK> <LINK>",https://arxiv.org/abs/2102.03902,"Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the input sequence length has limited its application to longer sequences -- a topic being actively studied in the community. To address this limitation, we propose Nystr\""{o}mformer -- a model that exhibits favorable scalability as a function of sequence length. Our idea is based on adapting the Nystr\""{o}m method to approximate standard self-attention with $O(n)$ complexity. The scalability of Nystr\""{o}mformer enables application to longer sequences with thousands of tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard sequence length, and find that our Nystr\""{o}mformer performs comparably, or in a few cases, even slightly better, than standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nystr\""{o}mformer performs favorably relative to other efficient self-attention methods. Our code is available at this https URL ","Nystr\""omformer: A Nystr\""om-Based Algorithm for Approximating |
Self-Attention",1,"['Nystromformer: a new linear self-attention.\n\nIt turns out a simple Nyström method is quite effective in approximating the full attention, outperforming reformer/linformer/performer by +3% accuracy on LRA. @YoungXiong1\n\nPaper: <LINK>\nCode: <LINK> <LINK> <LINK>']",21,02,257 |
137,48,1321467105464778753,70874545,Josh Lothringer,"New paper on the arXiv today with @astronomerslc25 in which we calculate some atmosphere models for two brown dwarfs irradiated by white dwarfs! <LINK> The two objects, WD-0137B (Teq~2,000 K) and EPIC2122B (Teq~3,450 K), orbit a 16,500 K and 25,000 K white dwarf every 116 and 68 minutes, respectively. They're just about some of the most extreme (sub-stellar) systems you could dream up. <LINK> From the ground, we can get multiple full spectroscopic phase-curves in a night- something unimaginable with exoplanets! The plot below is from Longstaff et al. 2017 and is one of my favorite in the literature. <LINK> To better understand these extreme objects, we computed some PHOENIX atmosphere models. Importantly, we could self-consistently account for the huge amounts of incoming UV irradiation and show that this can drive temperature inversions... not unlike ultra-hot Jupiters! <LINK> We show that we can reproduce the emission lines observed in these two systems with our models and can fit the photometry *okay*. We also use PETRA retrievals to get an idea of the object's average temperature. <LINK> We then compare these irradiated BDs to exoplanets: unlike exoplanets at similar temperatures, WD-0137B's inversion isn't driven by TiO/VO absorption, but from the absorption of the UV irradiation by metals, akin to the hottest ultra-hot Jupiters. EPIC2122B on the other hand, is much like KELT-9b, but because of the even more intense UV irradiation, EPIC2122B's inversion can reach even higher temperatures! <LINK> Even though these wild systems are rare, they have a lot to teach us about irradiated atmospheres and definitely test our models. You can read all about this and more in the paper, to appear soon in ApJ!",https://arxiv.org/abs/2010.14319,"Irradiated brown dwarfs (BDs) provide natural laboratories to test our understanding of substellar and irradiated atmospheres. A handful of short-period BDs around white dwarfs (WDs) have been observed, but the uniquely intense UV-dominated irradiation presents a modeling challenge. Here, we present the first fully self-consistent 1D atmosphere models that take into account the UV irradiation's effect on the object's temperature structure. We explore two BD-WD systems, namely WD-0137-349 and EPIC-212235321. WD-0137-349B has an equilibrium temperature that would place it in the transition between hot and ultra-hot Jupiters, while EPIC-212235321B has an equilibrium temperature higher than all ultra-hot Jupiters except KELT-9b. We explore some peculiar aspects of irradiated BD atmospheres and show that existing photometry can be well-fit with our models. Additionally, the detections of atomic emission lines from these BDs can be explained by a strong irradiation-induced temperature inversion, similar to inversions recently explored in ultra-hot Jupiters. Our models of WD-0137-349B can reproduce the observed equivalent width of many but not all of these atomic lines. We use the observed photometry of these objects to retrieve the temperature structure using the PHOENIX ExoplaneT Retrieval Algorithm (PETRA) and demonstrate that the structures are consistent with our models, albeit somewhat cooler at low pressures. We then discuss the similarities and differences between this class of irradiated brown dwarf and the lower-mass ultra-hot Jupiters. 
Lastly, we describe the behavior of irradiated BDs in color-magnitude space to show the difficulty in classifying irradiated BDs using otherwise well-tested methods for isolated objects. ","Atmosphere Models of Brown Dwarfs Irradiated by White Dwarfs: Analogues |
for Hot and Ultra-Hot Jupiters",8,"['New paper on the arXiv today with @astronomerslc25 in which we calculate some atmosphere models for two brown dwarfs irradiated by white dwarfs!\n\n<LINK>', ""The two objects, WD-0137B (Teq~2,000 K) and EPIC2122B (Teq~3,450 K), orbit a 16,500 K and 25,000 K white dwarf every 116 and 68 minutes, respectively. They're just about some of the most extreme (sub-stellar) systems you could dream up. https://t.co/FWql4lXGYX"", 'From the ground, we can get multiple full spectroscopic phase-curves in a night- something unimaginable with exoplanets! The plot below is from Longstaff et al. 2017 and is one of my favorite in the literature. https://t.co/4bpIsLtkDR', 'To better understand these extreme objects, we computed some PHOENIX atmosphere models. Importantly, we could self-consistently account for the huge amounts of incoming UV irradiation and show that this can drive temperature inversions... not unlike ultra-hot Jupiters! https://t.co/iZm7BJ25BO', ""We show that we can reproduce the emission lines observed in these two systems with our models and can fit the photometry *okay*. We also use PETRA retrievals to get an idea of the object's average temperature. https://t.co/sZiBoBTyA1"", ""We then compare these irradiated BDs to exoplanets: unlike exoplanets at similar temperatures, WD-0137B's inversion isn't driven by TiO/VO absorption, but from the absorption of the UV irradiation by metals, akin to the hottest ultra-hot Jupiters."", ""EPIC2122B on the other hand, is much like KELT-9b, but because of the even more intense UV irradiation, EPIC2122B's inversion can reach even higher temperatures! https://t.co/Jq57LvxRHG"", 'Even though these wild systems are rare, they have a lot to teach us about irradiated atmospheres and definitely test our models. You can read all about this and more in the paper, to appear soon in ApJ!']",20,10,1729 |
138,88,1040133126712832000,66175375,Jason Wang,Our Gemini @PlanetImager orbits of the HR 8799 planets is out: <LINK>. We combined orbit fits with N-body simulations to find stable orbit solutions. Inside: 4-planet resonance lock is not necessary for stability; planet-disk interactions; and dynamical masses! <LINK>,https://arxiv.org/abs/1809.04107,"The HR 8799 system uniquely harbors four young super-Jupiters whose orbits can provide insights into the system's dynamical history and constrain the masses of the planets themselves. Using the Gemini Planet Imager (GPI), we obtained down to one milliarcsecond precision on the astrometry of these planets. We assessed four-planet orbit models with different levels of constraints and found that assuming the planets are near 1:2:4:8 period commensurabilities, or are coplanar, does not worsen the fit. We added the prior that the planets must have been stable for the age of the system (40 Myr) by running orbit configurations from our posteriors through $N$-body simulations and varying the masses of the planets. We found that only assuming the planets are both coplanar and near 1:2:4:8 period commensurabilities produces dynamically stable orbits in large quantities. Our posterior of stable coplanar orbits tightly constrains the planets' orbits, and we discuss implications for the outermost planet b shaping the debris disk. A four-planet resonance lock is not necessary for stability up to now. However, planet pairs d and e, and c and d, are each likely locked in two-body resonances for stability if their component masses are above $6~M_{\rm{Jup}}$ and $7~M_{\rm{Jup}}$, respectively. Combining the dynamical and luminosity constraints on the masses using hot-start evolutionary models and a system age of $42 \pm 5$~Myr, we found the mass of planet b to be $5.8 \pm 0.5~M_{\rm{Jup}}$, and the masses of planets c, d, and e to be $7.2_{-0.7}^{+0.6}~M_{\rm{Jup}}$ each. ",Dynamical Constraints on the HR 8799 Planets with GPI,1,['Our Gemini @PlanetImager orbits of the HR 8799 planets is out: <LINK>. We combined orbit fits with N-body simulations to find stable orbit solutions. Inside: 4-planet resonance lock is not necessary for stability; planet-disk interactions; and dynamical masses! <LINK>'],18,09,268 |
139,138,1148404319076544512,836417100864344064,Takuma Udagawa,Our #AAAI2019 paper “A Natural Language Corpus of Common Grounding under Continuous and Partially-Observable Context” is up on arXiv: <LINK>! We proposed a minimal dialogue task to study advanced common grounding under continuous and partially-observable context. Dataset is available here: <LINK>,https://arxiv.org/abs/1907.03399,"Common grounding is the process of creating, repairing and updating mutual understandings, which is a critical aspect of sophisticated human communication. However, traditional dialogue systems have limited capability of establishing common ground, and we also lack task formulations which introduce natural difficulty in terms of common grounding while enabling easy evaluation and analysis of complex models. In this paper, we propose a minimal dialogue task which requires advanced skills of common grounding under continuous and partially-observable context. Based on this task formulation, we collected a largescale dataset of 6,760 dialogues which fulfills essential requirements of natural language corpora. Our analysis of the dataset revealed important phenomena related to common grounding that need to be considered. Finally, we evaluate and analyze baseline neural models on a simple subtask that requires recognition of the created common ground. We show that simple baseline models perform decently but leave room for further improvement. Overall, we show that our proposed task will be a fundamental testbed where we can train, evaluate, and analyze dialogue system's ability for sophisticated common grounding. ","A Natural Language Corpus of Common Grounding under Continuous and |
Partially-Observable Context",2,"['Our #AAAI2019 paper “A Natural Language Corpus of Common Grounding under Continuous and Partially-Observable Context” is up on arXiv: <LINK>! We proposed a minimal dialogue task to study advanced common grounding under continuous and partially-observable context.', 'Dataset is available here: https://t.co/LIEcs46rBA']",19,07,297 |
140,97,1273538845619957761,1068823110,José Cano,"The preprint of our ASAP 2020 paper ""Optimizing Grouped Convolutions on Edge Devices"" is already available! We propose GSPC, a new and higher performance implementation of Grouped Convolutions. Check it out! <LINK> @MJcomp86 Thanks Mahdi! Not sure about the multiple models in TVM... we haven't explored that yet, but it's interesting!",https://arxiv.org/abs/2006.09791,"When deploying a deep neural network on constrained hardware, it is possible to replace the network's standard convolutions with grouped convolutions. This allows for substantial memory savings with minimal loss of accuracy. However, current implementations of grouped convolutions in modern deep learning frameworks are far from performing optimally in terms of speed. In this paper we propose Grouped Spatial Pack Convolutions (GSPC), a new implementation of grouped convolutions that outperforms existing solutions. We implement GSPC in TVM, which provides state-of-the-art performance on edge devices. We analyze a set of networks utilizing different types of grouped convolutions and evaluate their performance in terms of inference time on several edge devices. We observe that our new implementation scales well with the number of groups and provides the best inference times in all settings, improving the existing implementations of grouped convolutions in TVM, PyTorch and TensorFlow Lite by 3.4x, 8x and 4x on average respectively. Code is available at this https URL ",Optimizing Grouped Convolutions on Edge Devices,2,"['The preprint of our ASAP 2020 paper ""Optimizing Grouped Convolutions on Edge Devices"" is already available!\n\nWe propose GSPC, a new and higher performance implementation of Grouped Convolutions. Check it out! <LINK>', ""@MJcomp86 Thanks Mahdi! Not sure about the multiple models in TVM... we haven't explored that yet, but it's interesting!""]",20,06,335 |
141,80,1237277384539279360,2862127121,ioana ciucă,"Now that the figures look nicer, check out our new paper @ <LINK>. We employ a simple ML approach on the brilliant ages of A. Miglio and the amazing group at Birmingham to get stellar age for 17,305 stars in the local neighbourhood. We then compare our results with predictions from the superb, high-resolution cosmological simulations Auriga (amazing work done by Rob Grand @ MPA). We find that the thick disc forms only in the inner disc, and that the inner and outer disc follow a distinct formation pathway. @sadie_lb Thanks Sadie ❤️❤️ ❤️.",https://arxiv.org/abs/2003.03316,"We develop a Bayesian Machine Learning framework called BINGO (Bayesian INference for Galactic archaeOlogy) centred around a Bayesian neural network. After being trained on the APOGEE and \emph{Kepler} asteroseismic age data, BINGO is used to obtain precise relative stellar age estimates with uncertainties for the APOGEE stars. We carefully construct a training set to minimise bias and apply BINGO to a stellar population that is similar to our training set. We then select the 17,305 stars with ages from BINGO and reliable kinematic properties obtained from \textit{Gaia} DR2. By combining the age and chemo-kinematical information, we dissect the Galactic disc stars into three components, namely, the thick disc (old, high-[$\alpha$/Fe], [$\alpha$/Fe] $\gtrsim$ 0.12), the thin disc (young, low-[$\alpha$/Fe]) and the Bridge, which is a region between the thick and thin discs. Our results indicate that the thick disc formed at an early epoch only in the inner region, and the inner disc smoothly transforms to the thin disc. We found that the outer disc follows a different chemical evolution pathway from the inner disc. The outer metal-poor stars only start forming after the compact thick disc phase has completed and the star-forming gas disc extended outwardly with metal-poor gas accretion. We found that in the Bridge region the range of [Fe/H] becomes wider with decreasing age, which suggests that the Bridge region corresponds to the transition phase from the smaller chemically well-mixed thick to a larger thin disc with a metallicity gradient. ","Unveiling the Distinct Formation Pathways of the Inner and Outer Discs |
of the Milky Way with Bayesian Machine Learning",3,"['Now that the figures look nicer, check out our new paper @ <LINK>. We employ a simple ML approach on the brilliant ages of A. Miglio and the amazing group at Birmingham to get stellar age for 17,305 stars in the local neighbourhood.', 'We then compare our results with predictions from the superb, high-resolution cosmological simulations Auriga (amazing work done by Rob Grand @ MPA). We find that the thick disc forms only in the inner disc, and that the inner and outer disc follow a distinct formation pathway.', '@sadie_lb Thanks Sadie ❤️❤️ ❤️.']",20,03,543 |
142,29,1298589325203046400,561899047,Aki Vehtari,"We (Tuomas Sivula, @MansMeg and I) have another new related paper <LINK> showing that although there is no generally unbiased estimator of CV error variance (as shown by Bengio and Grandvalet (2004)) there can be unbiased estimator for a specific model. <LINK> <LINK> The unbiasedness is not a necessary property, but the example demonstrates that it is possible to derive model specific estimators that could have better calibration and smaller error than the usual naive estimator.",https://arxiv.org/abs/2008.10859,"When evaluating and comparing models using leave-one-out cross-validation (LOO-CV), the uncertainty of the estimate is typically assessed using the variance of the sampling distribution. Considering the uncertainty is important, as the variability of the estimate can be high in some cases. An important result by Bengio and Grandvalet (2004) states that no general unbiased variance estimator can be constructed, that would apply for any utility or loss measure and any model. We show that it is possible to construct an unbiased estimator considering a specific predictive performance measure and model. We demonstrate an unbiased sampling distribution variance estimator for the Bayesian normal model with fixed model variance using the expected log pointwise predictive density (elpd) utility score. This example demonstrates that it is possible to obtain improved, problem-specific, unbiased estimators for assessing the uncertainty in LOO-CV estimation. ","Unbiased estimator for the variance of the leave-one-out |
cross-validation estimator for a Bayesian normal model with fixed variance",2,"['We (Tuomas Sivula, @MansMeg and I) have another new related paper <LINK> showing that although there is no generally unbiased estimator of CV error variance (as shown by Bengio and Grandvalet (2004)) there can be unbiased estimator for a specific model. <LINK> <LINK>', 'The unbiasedness is not a necessary property, but the example demonstrates that it is possible to derive model specific estimators that could have better calibration and smaller error than the usual naive estimator.']",20,08,483 |
143,14,834121051613204481,460069521,Andrew Francis,"Thanks to visit from Vincent Moulton and Katharina Huber in November, new paper just out on arxiv: <LINK> The new paper is on the diameter of the space of #phylogenetic networks under NNI: how far apart can networks be? <LINK> We also discuss versions of the subtree prune and regraft (SPR) and TBR moves for unrooted phylogenetic networks. <LINK>",https://arxiv.org/abs/1702.05609,"Phylogenetic networks are a generalization of phylogenetic trees that allow for representation of reticulate evolution. Recently, a space of unrooted phylogenetic networks was introduced, where such a network is a connected graph in which every vertex has degree 1 or 3 and whose leaf-set is a fixed set $X$ of taxa. This space, denoted $\mathcal{N}(X)$, is defined in terms of two operations on networks -- the nearest neighbor interchange and triangle operations -- which can be used to transform any network with leaf set $X$ into any other network with that leaf set. In particular, it gives rise to a metric $d$ on $\mathcal N(X)$ which is given by the smallest number of operations required to transform one network in $\mathcal N(X)$ into another in $\mathcal N(X)$. The metric generalizes the well-known NNI-metric on phylogenetic trees which has been intensively studied in the literature. In this paper, we derive a bound for the metric $d$ as well as a related metric $d_{N\!N\!I}$ which arises when restricting $d$ to the subset of $\mathcal{N}(X)$ consisting of all networks with $2(|X|-1+i)$ vertices, $i \ge 1$. We also introduce two new metrics on networks -- the SPR and TBR metrics -- which generalize the metrics on phylogenetic trees with the same name and give bounds for these new metrics. We expect our results to eventually have applications to the development and understanding of network search algorithms. ",Bounds for phylogenetic network space metrics,3,"['Thanks to visit from Vincent Moulton and Katharina Huber in November, new paper just out on arxiv: <LINK>', 'The new paper is on the diameter of the space of #phylogenetic networks under NNI: how far apart can networks be? https://t.co/FQlxf9xb28', 'We also discuss versions of the subtree prune and regraft (SPR) and TBR moves for unrooted phylogenetic networks. https://t.co/FQlxf9xb28']",17,02,347 |
144,49,1441340184679772166,127490159,Jitka Polechova,"New paper out, with multiple #variants and #vaccines: responding to the reproduction number is significantly more efficient in preventing future outbreaks. We define a new measure rho which highlights changing relative advantage of #VOCs: <LINK> @MBeiglboeck <LINK>",https://arxiv.org/abs/2109.11156,"In light of the continuing emergence of new SARS-CoV-2 variants and vaccines, we create a simulation framework for exploring possible infection trajectories under various scenarios. The situations of primary interest involve the interaction between three components: vaccination campaigns, non-pharmaceutical interventions (NPIs), and the emergence of new SARS-CoV-2 variants. Additionally, immunity waning and vaccine boosters are modeled to account for their growing importance. New infections are generated according to a hierarchical model in which people have a random, individual infectiousness. The model thus includes super-spreading observed in the COVID-19 pandemic. Our simulation functions as a dynamic compartment model in which an individual's history of infection, vaccination, and possible reinfection all play a role in their resistance to further infections. We present a risk measure for each SARS-CoV-2 variant, $\rho^\V$, that accounts for the amount of resistance within a population and show how this risk changes as the vaccination rate increases. Furthermore, by considering different population compositions in terms of previous infection and type of vaccination, we can learn about variants which pose differential risk to different countries. Different control strategies are implemented which aim to both suppress COVID-19 outbreaks when they occur as well as relax restrictions when possible. We demonstrate that a controller that responds to the effective reproduction number in addition to case numbers is more efficient and effective in controlling new waves than monitoring case numbers alone. This is of interest as the majority of the public discussion and well-known statistics deal primarily with case numbers. ",Robust models of SARS-CoV-2 heterogeneity and control,1,"['New paper out, with multiple #variants and #vaccines: responding to the reproduction number is significantly more efficient in preventing future outbreaks. We define a new measure rho which highlights changing relative advantage of #VOCs: <LINK> @MBeiglboeck <LINK>']",21,09,265 |
145,140,1499779996239421445,20865039,Tristan Deleu,"New paper! 📄 ""Continuous-Time Meta-Learning with Forward Mode Differentiation"", with @davidkanaa, @lylbfeng, @GcKerg, Yoshua Bengio, @g_lajoie_ & @pierrelux @Mila_Quebec Accepted as a Spotlight at #ICLR2022 paper: <LINK> code: <LINK> <LINK> We introduce COMLN, a new meta-learning algorithm where adaptation follows the dynamics of a gradient vector field, instead of being based on typically a few steps of gradient-descent. Computing the adapted parameters for a new task therefore requires solving an ODE. Neural ODEs provided tools (adjoint method) to backpropagate through an ODE solver in constant memory. But in practice, this is numerically unstable when applied to gradient flows. Intuitively, it would require gradient *ascent* on the loss function during the backward pass. <LINK> We developed a memory-efficient algorithm to compute the gradients wrt. the initialization, based on forward-mode differentiation and a decomposition of the Jacobian matrices. With COMLN, we can do the equivalent of millions of gradient steps of adaptation in constant memory. Finally, another advantage of treating adaptation as a continuous-time process is that the amount of adaptation can now be viewed as a meta-parameter, on par with the initialization, that can be meta-learned with SGD instead of being a hyperparameter fixed ahead of time.",https://arxiv.org/abs/2203.01443,"Drawing inspiration from gradient-based meta-learning methods with infinitely small gradient steps, we introduce Continuous-Time Meta-Learning (COMLN), a meta-learning algorithm where adaptation follows the dynamics of a gradient vector field. Specifically, representations of the inputs are meta-learned such that a task-specific linear classifier is obtained as a solution of an ordinary differential equation (ODE). Treating the learning process as an ODE offers the notable advantage that the length of the trajectory is now continuous, as opposed to a fixed and discrete number of gradient steps. As a consequence, we can optimize the amount of adaptation necessary to solve a new task using stochastic gradient descent, in addition to learning the initial conditions as is standard practice in gradient-based meta-learning. Importantly, in order to compute the exact meta-gradients required for the outer-loop updates, we devise an efficient algorithm based on forward mode differentiation, whose memory requirements do not scale with the length of the learning trajectory, thus allowing longer adaptation in constant memory. We provide analytical guarantees for the stability of COMLN, we show empirically its efficiency in terms of runtime and memory usage, and we illustrate its effectiveness on a range of few-shot image classification problems. ",Continuous-Time Meta-Learning with Forward Mode Differentiation,5,"['New paper! 📄 ""Continuous-Time Meta-Learning with Forward Mode Differentiation"", with @davidkanaa, @lylbfeng, @GcKerg, Yoshua Bengio, @g_lajoie_ & @pierrelux @Mila_Quebec \n\nAccepted as a Spotlight at #ICLR2022\npaper: <LINK>\ncode: <LINK> <LINK>', 'We introduce COMLN, a new meta-learning algorithm where adaptation follows the dynamics of a gradient vector field, instead of being based on typically a few steps of gradient-descent. Computing the adapted parameters for a new task therefore requires solving an ODE.', 'Neural ODEs provided tools (adjoint method) to backpropagate through an ODE solver in constant memory. 
But in practice, this is numerically unstable when applied to gradient flows. Intuitively, it would require gradient *ascent* on the loss function during the backward pass. https://t.co/qLtpUyh6kR', 'We developed a memory-efficient algorithm to compute the gradients wrt. the initialization, based on forward-mode differentiation and a decomposition of the Jacobian matrices. With COMLN, we can do the equivalent of millions of gradient steps of adaptation in constant memory.', 'Finally, another advantage of treating adaptation as a continuous-time process is that the amount of adaptation can now be viewed as a meta-parameter, on par with the initialization, that can be meta-learned with SGD instead of being a hyperparameter fixed ahead of time.']",22,03,1341 |
146,105,1222576254865625088,1610691422,Patrick Schwab,"Using smartphone data from the Floodlight Open study (n=774), we trained deep learning models that identify digital biomarkers for multiple sclerosis, and showed that these digital biomarkers contain significant signal (AUC=0.88) for diagnosing MS Link: <LINK> <LINK>",https://arxiv.org/abs/2001.09748,"Multiple sclerosis (MS) affects the central nervous system with a wide range of symptoms. MS can, for example, cause pain, changes in mood and fatigue, and may impair a person's movement, speech and visual functions. Diagnosis of MS typically involves a combination of complex clinical assessments and tests to rule out other diseases with similar symptoms. New technologies, such as smartphone monitoring in free-living conditions, could potentially aid in objectively assessing the symptoms of MS by quantifying symptom presence and intensity over long periods of time. Here, we present a deep-learning approach to diagnosing MS from smartphone-derived digital biomarkers that uses a novel combination of a multilayer perceptron with neural soft attention to improve learning of patterns in long-term smartphone monitoring data. Using data from a cohort of 774 participants, we demonstrate that our deep-learning models are able to distinguish between people with and without MS with an area under the receiver operating characteristic curve of 0.88 (95% CI: 0.70, 0.88). Our experimental results indicate that digital biomarkers derived from smartphone data could in the future be used as additional diagnostic criteria for MS. ","A Deep Learning Approach to Diagnosing Multiple Sclerosis from |
Smartphone Data",1,"['Using smartphone data from the Floodlight Open study (n=774), we trained deep learning models that identify digital biomarkers for multiple sclerosis, and showed that these digital biomarkers contain significant signal (AUC=0.88) for diagnosing MS\n\nLink: <LINK> <LINK>']",20,01,267 |
147,31,1420283313797881856,3236251346,Mikel Sanz,"New quantum paper today on Noise in Digital and Digital-Analog Quantum Computing (<LINK>) with two brilliant scientists @Paula_G_Phys and @quantum_ana. We compare the performance of digital and digital-analog quantum computing under different noise sources (1/2) Additionally, we show not only that standard quantum error mitigation techniques can be applied to DAQC, but also how to extend them to suppress bang errors. @Ikerbasque @upvehu @OpenSuperQ (2/2) <LINK>",https://arxiv.org/abs/2107.12969,"Quantum computing makes use of quantum resources provided by the underlying quantum nature of matter to enhance classical computation. However, current Noisy Intermediate-Scale Quantum (NISQ) era in quantum computing is characterized by the use of quantum processors comprising from a few tens to, at most, few hundreds of physical qubits without implementing quantum error correction techniques. This limits the scalability in the implementation of quantum algorithms. Digital-analog quantum computing (DAQC) has been proposed as a more resilient alternative quantum computing paradigm to outperform digital quantum computation within the NISQ era framework. It arises from adding the flexibility provided by fast single-qubit gates to the robustness of analog quantum simulations. Here, we perform a careful comparison between digital and digital-analog paradigms under the presence of noise sources. The comparison is illustrated by comparing the performance of the quantum Fourier transform algorithm under a wide range of single- and two-qubit noise sources. Indeed, we obtain that, when the different noise channels usually present in superconducting quantum processors are considered, the fidelity of the QFT algorithm for the digital-analog paradigm outperforms the one obtained for the digital approach. Additionally, this difference grows when the size of the processor scales up, constituting consequently a sensible alternative paradigm in the NISQ era. Finally, we show how the DAQC paradigm can be adapted to quantum error mitigation techniques for canceling different noise sources, including the bang error. ",Noise in Digital and Digital-Analog Quantum Computation,2,"['New quantum paper today on Noise in Digital and Digital-Analog Quantum Computing (<LINK>) with two brilliant scientists @Paula_G_Phys and @quantum_ana. We compare the performance of digital and digital-analog quantum computing under different noise sources (1/2)', 'Additionally, we show not only that standard quantum error mitigation techniques can be applied to DAQC, but also how to extend them to suppress bang errors. @Ikerbasque @upvehu @OpenSuperQ (2/2) https://t.co/lgO7YAcRT5']",21,07,465 |
148,66,1417515887024689155,10471882,matt brehmer,"I'm pleased to introduce Diatoms, our new approach for generating #visualization design inspiration. 📃 Check out our #ieeevis '21 paper (by myself, @eagereyes, and Carmen Hull): <LINK> 🎬 explainer video: <LINK> <LINK> @JanWillemTulp @eagereyes Thanks! Several of the information design students who participated in our study did share and discuss their final glyph designs with us. We didn't have space in the paper to show these, but with their permission I'd like to show some of them in the upcoming VIS talk :) @eagereyes @antarcticdesign Thanks for the question, Eamonn! We see our approach as being applicable to the early and divergent phases of vis design, being complementary to the later process of establishing a visual hierarchy, such as in your taxonomic approach (which was an important influence on us!) @JanWillemTulp @eagereyes I'd be interested in that too. We've talked about doing more class-based activities w/ students. We also know that pro designers need more control over the initial palettes (beyond the examples used in the paper), so more work is needed before this technique is used in production @JanWillemTulp @eagereyes and since VIS is virtual again this year, the talk will be publicly available online in October",https://arxiv.org/abs/2107.09015,"We introduce Diatoms, a technique that generates design inspiration for glyphs by sampling from palettes of mark shapes, encoding channels, and glyph scaffold shapes. Diatoms allows for a degree of randomness while respecting constraints imposed by columns in a data table: their data types and domains as well as semantic associations between columns as specified by the designer. We pair this generative design process with two forms of interactive design externalization that enable comparison and critique of the design alternatives. First, we incorporate a familiar small multiples configuration in which every data point is drawn according to a single glyph design, coupled with the ability to page between alternative glyph designs. Second, we propose a small permutables design gallery, in which a single data point is drawn according to each alternative glyph design, coupled with the ability to page between data points. We demonstrate an implementation of our technique as an extension to Tableau featuring three example palettes, and to better understand how Diatoms could fit into existing design workflows, we conducted interviews and chauffeured demos with 12 designers. Finally, we reflect on our process and the designers' reactions, discussing the potential of our technique in the context of visualization authoring systems. Ultimately, our approach to glyph design and comparison can kickstart and inspire visualization design, allowing for the serendipitous discovery of shape and channel combinations that would have otherwise been overlooked. ",Generative Design Inspiration for Glyphs with Diatoms,5,"[""I'm pleased to introduce Diatoms, our new approach for generating #visualization design inspiration.\n\n📃 Check out our #ieeevis '21 paper (by myself, @eagereyes, and Carmen Hull): <LINK>\n\n🎬 explainer video: <LINK> <LINK>"", ""@JanWillemTulp @eagereyes Thanks! \n\nSeveral of the information design students who participated in our study did share and discuss their final glyph designs with us. 
\n\nWe didn't have space in the paper to show these, but with their permission I'd like to show some of them in the upcoming VIS talk :)"", '@eagereyes @antarcticdesign Thanks for the question, Eamonn! \n\nWe see our approach as being applicable to the early and divergent phases of vis design, being complementary to the later process of establishing a visual hierarchy, such as in your taxonomic approach (which was an important influence on us!)', ""@JanWillemTulp @eagereyes I'd be interested in that too. We've talked about doing more class-based activities w/ students. We also know that pro designers need more control over the initial palettes (beyond the examples used in the paper), so more work is needed before this technique is used in production"", '@JanWillemTulp @eagereyes and since VIS is virtual again this year, the talk will be publicly available online in October']",21,07,1250 |
149,136,1369110409219698691,60893773,James Bullock,Very excited! New paper led by @UCIPhysAstro PhD student Anna (Sijie) Yu @AstroBananna Uses FIRE sims of Milky-Way like galaxies to show tight connection between star formation mode (bursty to steady) & thick-to-thin disk formation. No mergers required! <LINK> <LINK>,https://arxiv.org/abs/2103.03888,"We investigate thin and thick stellar disc formation in Milky-Way-mass galaxies using twelve FIRE-2 cosmological zoom-in simulations. All simulated galaxies experience an early period of bursty star formation that transitions to a late-time steady phase of near-constant star formation. Stars formed during the late-time steady phase have more circular orbits and thin-disc-like morphology at $z=0$, whilst stars born during the bursty phase have more radial orbits and thick-disc structure. The median age of thick-disc stars at $z=0$ correlates strongly with this transition time. We also find that galaxies with an earlier transition from bursty to steady star formation have a higher thin-disc fractions at $z=0$. Three of our systems have minor mergers with LMC-size satellites during the thin-disc phase. These mergers trigger short starbursts but do not destroy the thin disc nor alter broad trends between the star formation transition time and thin/thick disc properties. If our simulations are representative of the Universe, then stellar archaeological studies of the Milky Way (or M31) provide a window into past star-formation modes in the Galaxy. Current age estimates of the Galactic thick disc would suggest that the Milky Way transitioned from bursty to steady phase $\sim$6.5 Gyr ago; prior to that time the Milky Way likely lacked a recognisable thin disc. ",The bursty origin of the Milky Way thick disc,1,['Very excited! New paper led by @UCIPhysAstro PhD student Anna (Sijie) Yu @AstroBananna Uses FIRE sims of Milky-Way like galaxies to show tight connection between star formation mode (bursty to steady) & thick-to-thin disk formation. No mergers required! <LINK> <LINK>'],21,03,264 |
150,51,1194227346838036486,23000769,Christopher Conselice,New paper out today led by Nottingham PhD student @AstroSunnyC on using an autoencoder to find gravitational lenses with unsupervised machine learning. This method will have to be used with @EC_Euclid and LSST etc. to find unique strong lens systems. <LINK>,https://arxiv.org/abs/1911.04320,"In this paper we develop a new unsupervised machine learning technique comprised of a feature extractor, a convolutional autoencoder (CAE), and a clustering algorithm consisting of a Bayesian Gaussian mixture model (BGM). We apply this technique to visual band space-based simulated imaging data from the Euclid Space Telescope using data from the Strong Gravitational Lenses Finding Challenge. Our technique promisingly captures a variety of lensing features such as Einstein rings with different radii, distorted arc structures, etc, without using predefined labels. After the clustering process, we obtain several classification clusters separated by different visual features which are seen in the images. Our method successfully picks up $\sim$63\ percent of lensing images from all lenses in the training set. With the assumed probability proposed in this study, this technique reaches an accuracy of $77.25\pm 0.48$\% in binary classification using the training set. Additionally, our unsupervised clustering process can be used as the preliminary classification for future surveys of lenses to efficiently select targets and to speed up the labelling process. As the starting point of the astronomical application using this technique, we not only explore the application to gravitationally lensed systems, but also discuss the limitations and potential future uses of this technique. ","Identifying Strong Lenses with Unsupervised Machine Learning using |
Convolutional Autoencoder",1,['New paper out today led by Nottingham PhD student @AstroSunnyC on using an autoencoder to find gravitational lenses with unsupervised machine learning. This method will have to be used with @EC_Euclid and LSST etc. to find unique strong lens systems.\n\n<LINK>'],19,11,257 |
151,35,1176660747931148289,838292815,Ofir Nachum,"new paper! <LINK> We investigate the underlying reasons for success of hierarchical RL, finding that (surprisingly) much of it is due to exploration, and that this benefit can be achieved *without* explicit hierarchies of policies @svlevine @shaneguML @honglaklee",https://arxiv.org/abs/1909.10618,"Hierarchical reinforcement learning has demonstrated significant success at solving difficult reinforcement learning (RL) tasks. Previous works have motivated the use of hierarchy by appealing to a number of intuitive benefits, including learning over temporally extended transitions, exploring over temporally extended periods, and training and exploring in a more semantically meaningful action space, among others. However, in fully observed, Markovian settings, it is not immediately clear why hierarchical RL should provide benefits over standard ""shallow"" RL architectures. In this work, we isolate and evaluate the claimed benefits of hierarchical RL on a suite of tasks encompassing locomotion, navigation, and manipulation. Surprisingly, we find that most of the observed benefits of hierarchy can be attributed to improved exploration, as opposed to easier policy learning or imposed hierarchical structures. Given this insight, we present exploration techniques inspired by hierarchy that achieve performance competitive with hierarchical RL while at the same time being much simpler to use and implement. ",Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning?,1,"['new paper! <LINK>\nWe investigate the underlying reasons for success of hierarchical RL, finding that (surprisingly) much of it is due to exploration, and that this benefit can be achieved *without* explicit hierarchies of policies @svlevine @shaneguML @honglaklee']",19,09,263 |
152,78,1360509664354516992,1288593824,Michael Felderer,"Our new paper clarifying terminology and challenges of quality assurance for AI-Enabled Systems is available online at <LINK> - it also characterizes the dimensions artifact type, quality assurance, and process for quality assurance for AI-Enabled Systems 👇 <LINK> @prof_wagnerst @WhiteAero Yes, Safety is a Quality-in-Use Property!",https://arxiv.org/abs/2102.05351,"The number and importance of AI-based systems in all domains is growing. With the pervasive use and the dependence on AI-based systems, the quality of these systems becomes essential for their practical usage. However, quality assurance for AI-based systems is an emerging area that has not been well explored and requires collaboration between the SE and AI research communities. This paper discusses terminology and challenges on quality assurance for AI-based systems to set a baseline for that purpose. Therefore, we define basic concepts and characterize AI-based systems along the three dimensions of artifact type, process, and quality characteristics. Furthermore, we elaborate on the key challenges of (1) understandability and interpretability of AI models, (2) lack of specifications and defined requirements, (3) need for validation data and test input generation, (4) defining expected outcomes as test oracles, (5) accuracy and correctness measures, (6) non-functional properties of AI-based systems, (7) self-adaptive and self-learning characteristics, and (8) dynamic and frequently changing environments. ",Quality Assurance for AI-based Systems: Overview and Challenges,2,"['Our new paper clarifying terminology and challenges of quality assurance for AI-Enabled Systems is available online at <LINK> - it also characterizes the dimensions artifact type, quality assurance, and process for quality assurance for AI-Enabled Systems 👇 <LINK>', '@prof_wagnerst @WhiteAero Yes, Safety is a Quality-in-Use Property!']",21,02,332 |
153,175,1421681752196714500,1400911697452453893,Chandar Lab,"[1/6] With regard to the results highlighting insensitivity to word order of neural models in NLU tasks, we study it further with newly proposed metrics - DND and IDC. Work by Louis Clouâtre, @prasannapartha @amalzouaq @apsarathchandar arxiv: <LINK> <LINK> [2/6] Two major takeaways: (1) Local ordering of a sequence is more important than its global ordering. We observed that the metric quantifying distortion to local ordering of tokens, DND, correlates with the (in)sensitivity to perturbation of different models ! <LINK> [3/6 ] We observed the correlation between DND and downstream performance of a model on perturbed sample is consistent across BiLSTM, ConvNet, Transformers and with different tokenization schemes (sub-word, Character level). <LINK> [4/6] (2) Our analysis on existing word-level perturbations suggests that models mostly rely on local organization of characters that are seldom perturbed by the commonly used word-level perturbations and hence explaining their insensitivity to such perturbations. <LINK> [5/6] We find that the lack of correlation between performance of Non-pretrained Transformers and IDC metric (global ordering of tokens) helps identify when models fail to make use of the positional information present in text, hence defaulting to bag-of-word models. <LINK> [6/6] We observe the effect of perturbation to be similar across the different architectures. The weaker models have smaller drop in performance but still explained by the metric. <LINK>",https://arxiv.org/abs/2107.13955,"Recent research analyzing the sensitivity of natural language understanding models to word-order perturbations has shown that neural models are surprisingly insensitive to the order of words. In this paper, we investigate this phenomenon by developing order-altering perturbations on the order of words, subwords, and characters to analyze their effect on neural models' performance on language understanding tasks. We experiment with measuring the impact of perturbations to the local neighborhood of characters and global position of characters in the perturbed texts and observe that perturbation functions found in prior literature only affect the global ordering while the local ordering remains relatively unperturbed. We empirically show that neural models, invariant of their inductive biases, pretraining scheme, or the choice of tokenization, mostly rely on the local structure of text to build understanding and make limited use of the global structure. ",Local Structure Matters Most: Perturbation Study in NLU,6,"['[1/6] With regard to the results highlighting insensitivity to word order of neural models in NLU tasks, we study it further with newly proposed metrics - DND and IDC.\nWork by Louis Clouâtre, @prasannapartha @amalzouaq @apsarathchandar\narxiv: <LINK> <LINK>', '[2/6] Two major takeaways: (1) Local ordering of a sequence is more important than its global ordering. We observed that the metric quantifying distortion to local ordering of tokens, DND, correlates with the (in)sensitivity to perturbation of different models ! https://t.co/CKJamx8EYz', '[3/6 ] We observed the correlation between DND and downstream performance of a model on perturbed sample is consistent across BiLSTM, ConvNet, Transformers and with different tokenization schemes (sub-word, Character level). 
https://t.co/JFtgGSU2Yl', '[4/6] (2) Our analysis on existing word-level perturbations suggests that models mostly rely on local organization of characters that are seldom perturbed by the commonly used word-level perturbations and hence explaining their insensitivity to such perturbations. https://t.co/20RYZAfL0B', '[5/6] We find that the lack of correlation between performance of Non-pretrained Transformers and IDC metric (global ordering of tokens) helps identify when models fail to make use of the positional information present in text, hence defaulting to bag-of-word models. https://t.co/VsA2UmcQFu', '[6/6] We observe the effect of perturbation to be similar across the different architectures. The weaker models have smaller drop in performance but still explained by the metric. https://t.co/ttgb4cor8B']",21,07,1492 |
154,36,939180153405501440,3301643341,Roger Grosse,"Natural gradient isn't just for optimization, it can improve uncertainty modeling in variational Bayesian neural nets. Train a matrix variate Gaussian posterior using noisy K-FAC! New paper by @jelly__zhang and Shengyang Sun at the BDL Workshop. <LINK>",https://arxiv.org/abs/1712.02390,"Variational Bayesian neural nets combine the flexibility of deep learning with Bayesian uncertainty estimation. Unfortunately, there is a tradeoff between cheap but simple variational families (e.g.~fully factorized) or expensive and complicated inference procedures. We show that natural gradient ascent with adaptive weight noise implicitly fits a variational posterior to maximize the evidence lower bound (ELBO). This insight allows us to train full-covariance, fully factorized, or matrix-variate Gaussian variational posteriors using noisy versions of natural gradient, Adam, and K-FAC, respectively, making it possible to scale up to modern-size ConvNets. On standard regression benchmarks, our noisy K-FAC algorithm makes better predictions and matches Hamiltonian Monte Carlo's predictive variances better than existing methods. Its improved uncertainty estimates lead to more efficient exploration in active learning, and intrinsic motivation for reinforcement learning. ",Noisy Natural Gradient as Variational Inference,1,"[""Natural gradient isn't just for optimization, it can improve uncertainty modeling in variational Bayesian neural nets. Train a matrix variate Gaussian posterior using noisy K-FAC! New paper by @jelly__zhang and Shengyang Sun at the BDL Workshop.\n\n<LINK>""]",17,12,252 |
155,158,1333329063512043521,2603024598,Ricardo Pérez-Marco,"""Notes on the historical bibliography of the Gamma function"" Many references in the classical literature are erroneous, and numerous results misattributed. We can find some in classical texts from Gauss, Weierstrass, etc We share these personal notes. <LINK> <LINK>",https://arxiv.org/abs/2011.12140,"Telegraphic notes on the historical bibliography of the Gamma function and Eulerian integrals. Correction to some classical references. Some topics of the interest of the author. We provide some extensive (but not exhaustive) bibliography. Feedback is welcome, notes will be updated and some references need completion. ",Notes on the historical bibliography of the gamma function,1,"['""Notes on the historical bibliography of the Gamma function""\n\nMany references in the classical literature are erroneous, and numerous results misattributed. We can find some in classical texts from Gauss, Weierstrass, etc\n\nWe share these personal notes.\n\n<LINK> <LINK>']",20,11,265 |
156,12,1035134747549356033,88806960,Dr. Vivienne Baldassare,Check out our new paper on arxiv today on optical variability as a tool for identifying AGNs in low-mass galaxies. Plot below shows population of low-mass galaxies with AGN-like variability and star-formation dominated narrow line ratios. <LINK> <LINK> Also see <LINK> for light curves and difference imaging videos :) @profjsb Thanks! We actually used your qsofit code to select our AGNs!,https://arxiv.org/abs/1808.09578,"We present an analysis of the nuclear variability of $\sim28,000$ nearby ($z<0.15$) galaxies with Sloan Digital Sky Survey (SDSS) spectroscopy in Stripe 82. We construct light curves using difference imaging of SDSS g-band images, which allows us to detect subtle variations in the central light output. We select variable AGN by assessing whether detected variability is well-described by a damped random walk model. We find 135 galaxies with AGN-like nuclear variability. While most of the variability-selected AGNs have narrow emission lines consistent with the presence of an AGN, a small fraction have narrow emission lines dominated by star formation. The star-forming systems with nuclear AGN-like variability tend to be low-mass ($M_{\ast}<10^{10}~M_{\odot}$), and may be AGNs missed by other selection techniques due to star formation dilution or low-metallicities. We explore the AGN fraction as a function of stellar mass, and find that the fraction of variable AGN increases with stellar mass, even after taking into account the fact that lower mass systems are fainter. There are several possible explanations for an observed decline in the fraction of variable AGN with decreasing stellar mass, including a drop in the supermassive black hole occupation fraction, a decrease in the ratio of black hole mass to galaxy stellar mass, or a change in the variability properties of lower-mass AGNs. We demonstrate that optical photometric variability is a promising avenue for detecting AGNs in low-mass, star formation-dominated galaxies, which has implications for the upcoming Large Synoptic Survey Telescope. ",Identifying AGNs in low-mass galaxies via long-term optical variability,3,"['Check out our new paper on arxiv today on optical variability as a tool for identifying AGNs in low-mass galaxies. Plot below shows population of low-mass galaxies with AGN-like variability and star-formation dominated narrow line ratios. <LINK> <LINK>', 'Also see https://t.co/gZ8UTbGEKJ for light curves and difference imaging videos :)', '@profjsb Thanks! We actually used your qsofit code to select our AGNs!']",18,08,389 |
157,142,1389748127897317377,313814795,M. Sohaib Alam,"Verifying entanglement can be hard. Our new paper with the @NASA QuAIL team shows that for QAOA-MaxCut (and perhaps other) states, you could verify N-partite entanglement with just 3 bases measurements, and poly(N) terms to estimate. <LINK> @dinunno @NASA Haha thanks, I pretty much just ever log in here to catch some conversations on the latest quantum news/papers anyway, so citation requests are totally fair game imo lol",https://arxiv.org/abs/2105.01639,"In order to assess whether quantum resources can provide an advantage over classical computation, it is necessary to characterize and benchmark the non-classical properties of quantum algorithms in a practical manner. In this paper, we show that using measurements in no more than 3 out of the possible $3^N$ bases, one can not only reconstruct the single-qubit reduced density matrices and measure the ability to create coherent superpositions, but also possibly verify entanglement across all $N$ qubits participating in the algorithm. We introduce a family of generalized Bell-type observables for which we establish an upper bound to the expectation values in fully separable states by proving a generalization of the Cauchy-Schwarz inequality, which may serve of independent interest. We demonstrate that a subset of such observables can serve as entanglement witnesses for QAOA-MaxCut states, and further argue that they are especially well tailored for this purpose by defining and computing an entanglement potency metric on witnesses. A subset of these observables also certify, in a weaker sense, the entanglement in GHZ states, which share the $\mathbb{Z}_2$ symmetry of QAOA-MaxCut. The construction of such witnesses follows directly from the cost Hamiltonian to be optimized, and not through the standard technique of using the projector of the state being certified. It may thus provide insights to construct similar witnesses for other variational algorithms prevalent in the NISQ era. We demonstrate our ideas with proof-of-concept experiments on the Rigetti Aspen-9 chip for ansatze containing up to 24 qubits. ","Practical Verification of Quantum Properties in Quantum Approximate |
Optimization Runs",2,"['Verifying entanglement can be hard. Our new paper with the @NASA QuAIL team shows that for QAOA-MaxCut (and perhaps other) states, you could verify N-partite entanglement with just 3 bases measurements, and poly(N) terms to estimate.\n\n<LINK>', '@dinunno @NASA Haha thanks, I pretty much just ever log in here to catch some conversations on the latest quantum news/papers anyway, so citation requests are totally fair game imo lol']",21,05,425 |
158,149,1484361628476866565,1238886955372470274,Xiaohui Chen,Happy to announce that the paper <LINK> will appear in AISTATS. We propose a sketch-and-lift algo that first estimates cluster centroids using a subsampled SDP and then propagates the solution to the full data in linear time. Exact recovery guarantee is provided.,https://arxiv.org/abs/2201.08226,"Semidefinite programming (SDP) is a powerful tool for tackling a wide range of computationally hard problems such as clustering. Despite the high accuracy, semidefinite programs are often too slow in practice with poor scalability on large (or even moderate) datasets. In this paper, we introduce a linear time complexity algorithm for approximating an SDP relaxed $K$-means clustering. The proposed sketch-and-lift (SL) approach solves an SDP on a subsampled dataset and then propagates the solution to all data points by a nearest-centroid rounding procedure. It is shown that the SL approach enjoys a similar exact recovery threshold as the $K$-means SDP on the full dataset, which is known to be information-theoretically tight under the Gaussian mixture model. The SL method can be made adaptive with enhanced theoretic properties when the cluster sizes are unbalanced. Our simulation experiments demonstrate that the statistical accuracy of the proposed method outperforms state-of-the-art fast clustering algorithms without sacrificing too much computational efficiency, and is comparable to the original $K$-means SDP with substantially reduced runtime. ","Sketch-and-Lift: Scalable Subsampled Semidefinite Program for $K$-means |
Clustering",1,['Happy to announce that the paper <LINK> will appear in AISTATS. We propose a sketch-and-lift algo that first estimates cluster centroids using a subsampled SDP and then propagates the solution to the full data in linear time. Exact recovery guarantee is provided.'],22,01,263 |
159,78,1492162171299631108,1309406444731731970,Alejandro Sopena,"Very happy to announce that our new paper on the Algebraic Bethe Ansatz (ABA) on quantum computers is finally on arXiv!! Read and enjoy it [THREAD] <LINK> The Bethe Ansatz (BA) is a classical method for exactly solving one-dimensional quantum models. However, there are some observables such as long-range correlation functions which are difficult to calculate. This motivates its implementation on a quantum computer. We present a deterministic quantum algorithm for the preparation of BA eigenstates. We build a quantum circuit with no ancillary qubits (Algebraic Bethe Circuit) which allows us to obtain the desired eigenstate as output. <LINK> We illustrate our method in the spin-1/2 XX and XXZ models and we find its application on the XX model is efficient. We run numerical simulations, preparing eigenstates of the XXZ model for systems of up to 24 qubits and 12 magnons. <LINK> We run small-scale error-mitigated implementations on the IBM quantum computers, including the preparation of the ground state for the XX and XXZ models in 4 sites. <LINK>",https://arxiv.org/abs/2202.04673,"The Algebraic Bethe Ansatz (ABA) is a highly successful analytical method used to exactly solve several physical models in both statistical mechanics and condensed-matter physics. Here we bring the ABA to unitary form, for its direct implementation on a quantum computer. This is achieved by distilling the non-unitary $R$ matrices that make up the ABA into unitaries using the QR decomposition. Our algorithm is deterministic and works for both real and complex roots of the Bethe equations. We illustrate our method in the spin-$\frac{1}{2}$ XX and XXZ models. We show that using this approach one can efficiently prepare eigenstates of the XX model on a quantum computer with quantum resources that match previous state-of-the-art approaches. We run numerical simulations, preparing eigenstates of the XXZ model for systems of up to 24 qubits and 12 magnons. Furthermore, we run small-scale error-mitigated implementations on the IBM quantum computers, including the preparation of the ground state for the XX and XXZ models in $4$ sites. Finally, we derive a new form of the Yang-Baxter equation using unitary matrices, and also verify it on a quantum computer. ",Algebraic Bethe Circuits,5,"['Very happy to announce that our new paper on the Algebraic Bethe Ansatz (ABA) on quantum computers is finally on arXiv!!\n\nRead and enjoy it\n\n[THREAD]\n<LINK>', 'The Bethe Ansatz (BA) is a classical method for exactly solving one-dimensional quantum models.\nHowever, there are some observables such as long-range correlation functions which are difficult to calculate.\nThis motivates its implementation on a quantum computer.', 'We present a deterministic quantum algorithm for the preparation of BA eigenstates.\nWe build a quantum circuit with no ancillary qubits (Algebraic Bethe Circuit) which allows us to obtain the desired eigenstate as output. https://t.co/nuOmtfqtxr', 'We illustrate our method in the spin-1/2 XX and XXZ models and we find its application on the XX model is efficient.\nWe run numerical simulations, preparing eigenstates of the XXZ model for systems of up to 24 qubits and 12 magnons. https://t.co/eVlr9IvBCG', 'We run small-scale error-mitigated implementations on the IBM quantum computers, including the preparation of the ground state for the XX and XXZ models in 4 sites. https://t.co/TEgplSz4qo']",22,02,1059 |
160,87,1273878952520507392,762420558,Ciaran O'Hare,"New paper on solar axions. Given the recent XENON excitement, thought it timely to push this one out too: <LINK> <LINK> We look at a new source of axions from the Sun (aside from the usual Primakoff and ABC fluxes) which are produced from the conversion of longitudinal plasmons. This has the side effect of the expected axion signal being very sensitive to the magnetic field of the Sun. So we took a look at whether the planned next-gen helioscope #IAXO which will be hosted @desy could serve a dual-purpose as an instrument to measure the solar B-field. This of course requires that solar axions are discovered first (maybe they already have?....) Nevertheless the prospects look quite good we think. The flux effectively probes the Sun from the inside out: the better the energy threshold the farther out in radius you can probe. The code is available for you to have a look at here: <LINK>",https://arxiv.org/abs/2006.10415,"Axion helioscopes search for solar axions and axion-like particles via inverse Primakoff conversion in strong laboratory magnets pointed at the Sun. Anticipating the detection of solar axions, we determine the potential for the planned next-generation helioscope, the International Axion Observatory (IAXO), to measure or constrain the solar magnetic field. To do this we consider a previously neglected component of the solar axion flux at sub-keV energies arising from the conversion of longitudinal plasmons. This flux is sensitively dependent to the magnetic field profile of the Sun, with lower energies corresponding to axions converting into photons at larger solar radii. If the detector technology eventually installed in IAXO has an energy resolution better than 200 eV, then solar axions could become an even more powerful messenger than neutrinos of the magnetic field in the core of the Sun. For energy resolutions better than 10 eV, IAXO could access the inner 70% of the Sun and begin to constrain the field at the tachocline: the boundary between the radiative and convective zones. The longitudinal plasmon flux from a toroidal magnetic field also has an additional 2% geometric modulation effect which could be used to measure the angular dependence of the magnetic field. ",Axion helioscopes as solar magnetometers,5,"['New paper on solar axions. Given the recent XENON excitement, thought it timely to push this one out too: <LINK> <LINK>', 'We look at a new source of axions from the Sun (aside from the usual Primakoff and ABC fluxes) which are produced from the conversion of longitudinal plasmons. This has the side effect of the expected axion signal being very sensitive to the magnetic field of the Sun.', 'So we took a look at whether the planned next-gen helioscope #IAXO which will be hosted @desy could serve a dual-purpose as an instrument to measure the solar B-field. This of course requires that solar axions are discovered first (maybe they already have?....)', 'Nevertheless the prospects look quite good we think. The flux effectively probes the Sun from the inside out: the better the energy threshold the farther out in radius you can probe.', 'The code is available for you to have a look at here: https://t.co/7Qb24OxEhn']",20,06,894 |
161,223,1408224623725064195,110103071,Andrej Risteski,"In the invariant feature approach to domain generalization, the goal is to identify invariant features after seeing a small # of environments. How many envs are needed? We study this q in a variant of a toy data model we introduced w @ElanRosenfeld in <LINK>. <LINK> For this variant, we show that while ERM and IRM both can fail with sublinear number of environments, an *iterative* feature matching approach we introduce in the paper succeeds using only logarithmic # of domains. (For more details see Tengyu's tweet and paper) Joint work with @cynnjjs, @elanrosenfeld, Mark Sellke, and @tengyuma .",https://arxiv.org/abs/2106.09913,"Domain generalization aims at performing well on unseen test environments with data from a limited number of training environments. Despite a proliferation of proposal algorithms for this task, assessing their performance both theoretically and empirically is still very challenging. Distributional matching algorithms such as (Conditional) Domain Adversarial Networks [Ganin et al., 2016, Long et al., 2018] are popular and enjoy empirical success, but they lack formal guarantees. Other approaches such as Invariant Risk Minimization (IRM) require a prohibitively large number of training environments -- linear in the dimension of the spurious feature space $d_s$ -- even on simple data models like the one proposed by [Rosenfeld et al., 2021]. Under a variant of this model, we show that both ERM and IRM cannot generalize with $o(d_s)$ environments. We then present an iterative feature matching algorithm that is guaranteed with high probability to yield a predictor that generalizes after seeing only $O(\log d_s)$ environments. Our results provide the first theoretical justification for a family of distribution-matching algorithms widely used in practice under a concrete nontrivial data model. ","Iterative Feature Matching: Toward Provable Domain Generalization with |
Logarithmic Environments",3,"['In the invariant feature approach to domain generalization, the goal is to identify invariant features after seeing a small # of environments. How many envs are needed? We study this q in a variant of a toy data model we introduced w @ElanRosenfeld in <LINK>. <LINK>', ""For this variant, we show that while ERM and IRM both can fail with sublinear number of environments, an *iterative* feature matching approach we introduce in the paper succeeds using only logarithmic # of domains. (For more details see Tengyu's tweet and paper)"", 'Joint work with @cynnjjs, @elanrosenfeld, Mark Sellke, and @tengyuma .']",21,06,600 |
162,26,1134140437252304896,18850305,Zachary Lipton,"Adversarial spelling mistakes are a ***real problem in the wild*** (e.g. spam filter evasion). Our new #ACL2019 paper shows that editing just 1-2 characters can cripple SoA NLP classifier (BERT). Work w CMU PhDs *Danish Pruthi* & *Bhuwan Dhingra*) <LINK> (1/4) You might think that character level models would be better able to handle spelling mistakes (vs word-level models which simply get UNK-d). However, character-level and word-piece (e.g. BERT) models expose a larger attack surface, and thus are even more vulnerable. (2/4) Our proposed solution stacks a word recognition model between the raw (possibly manipulated) inputs and the downstream classifier. Salvages performance 90% (original) -> 45.3% (attack) -> 75% (defense)—outperforming data aug. & adversarial training <LINK> (3/4) While this paper went through the cycle twice before getting in, I'm glad for the *CL peer review process. This version is considerably stronger than our earlier attempts thanks to critical feedback from NAACL reviewers (thanks R2!). Less glad for the arXiv embargo but meh (4/4) ...oh and I finally found Danish's Twitter handle which could not possibly have been harder to locate while still existing @danish037 (5/4) @Qdatalab Thanks for sharing! Looking forward to reading.",https://arxiv.org/abs/1905.11268,"To combat adversarial spelling mistakes, we propose placing a word recognition model in front of the downstream classifier. Our word recognition models build upon the RNN semi-character architecture, introducing several new backoff strategies for handling rare and unseen words. Trained to recognize words corrupted by random adds, drops, swaps, and keyboard mistakes, our method achieves 32% relative (and 3.3% absolute) error reduction over the vanilla semi-character model. Notably, our pipeline confers robustness on the downstream classifier, outperforming both adversarial training and off-the-shelf spell checkers. Against a BERT model fine-tuned for sentiment analysis, a single adversarially-chosen character attack lowers accuracy from 90.3% to 45.8%. Our defense restores accuracy to 75%. Surprisingly, better word recognition does not always entail greater robustness. Our analysis reveals that robustness also depends upon a quantity that we denote the sensitivity. ",Combating Adversarial Misspellings with Robust Word Recognition,6,"['Adversarial spelling mistakes are a ***real problem in the wild*** (e.g. spam filter evasion). Our new #ACL2019 paper shows that editing just 1-2 characters can cripple SoA NLP classifier (BERT). Work w CMU PhDs *Danish Pruthi* & *Bhuwan Dhingra*) <LINK> (1/4)', 'You might think that character level models would be better able to handle spelling mistakes (vs word-level models which simply get UNK-d). However, character-level and word-piece (e.g. BERT) models expose a larger attack surface, and thus are even more vulnerable. (2/4)', 'Our proposed solution stacks a word recognition model between the raw (possibly manipulated) inputs and the downstream classifier. Salvages performance 90% (original) -> 45.3% (attack) -> 75% (defense)—outperforming data aug. & adversarial training\n https://t.co/ZE2eOP0FpV (3/4)', ""While this paper went through the cycle twice before getting in, I'm glad for the *CL peer review process. This version is considerably stronger than our earlier attempts thanks to critical feedback from NAACL reviewers (thanks R2!). 
Less glad for the arXiv embargo but meh (4/4)"", ""...oh and I finally found Danish's Twitter handle which could not possibly have been harder to locate while still existing @danish037 (5/4)"", '@Qdatalab Thanks for sharing! Looking forward to reading.']",19,05,1278 |
163,7,722427063475408896,171674815,Mark Marley,Our new paper on exomoon detection utilizing polarization variations. <LINK> @astromarkmarley We are getting a lot of complaints in email we didn't include an intro review of all the other methods of moon detection @astromarkmarley Certainly was not our intention to neglect anyone but rather paper very narrowly focuses on one additional method. @astromarkmarley One example of earlier thinking on other moon detection methods is this paper by J. Schneider <LINK>,http://arxiv.org/abs/1604.04773,"Many of the directly imaged self-luminous gas giant exoplanets have been found to have cloudy atmospheres. Scattering of the emergent thermal radiation from these planets by the dust grains in their atmospheres should locally give rise to significant linear polarization of the emitted radiation. However, the observable disk averaged polarization should be zero if the planet is spherically symmetric. Rotation-induced oblateness may yield a net non-zero disk averaged polarization if the planets have sufficiently high spin rotation velocity. On the other hand, when a large natural satellite or exomoon transits a planet with cloudy atmosphere along the line of sight, the asymmetry induced during the transit should give rise to a net non-zero, time resolved linear polarization signal. The peak amplitude of such time dependent polarization may be detectable even for slowly rotating exoplanets. Therefore, we suggest that large exomoons around directly imaged self-luminous exoplanets may be detectable through time resolved imaging polarimetry. Adopting detailed atmospheric models for several values of effective temperature and surface gravity which are appropriate for self-luminous exoplanets, we present the polarization profiles of these objects in the infrared during transit phase and estimate the peak amplitude of polarization that occurs during the inner contacts of the transit ingress/egress phase. The peak polarization is predicted to range between 0.1 and 0.3 % in the infrared. ","Detecting Exomoons Around Self-luminous Giant Exoplanets Through |
Polarization",4,"['Our new paper on exomoon detection utilizing polarization variations. <LINK>', ""@astromarkmarley We are getting a lot of complaints in email we didn't include an intro review of all the other methods of moon detection"", '@astromarkmarley Certainly was not our intention to neglect anyone but rather paper very narrowly focuses on one additional method.', '@astromarkmarley One example of earlier thinking on other moon detection methods is this paper by J. Schneider https://t.co/Ym0jBQJ6lE']",16,04,464 |
164,42,1494770910762577922,573729628,"Steve Taylor, PhD","New paper to end the week! Together with some friends and team members, we developed a **parallelized** Bayesian pipeline for GW Background characterization in Pulsar Timing Arrays. (1/3) <LINK> This is in preparation for the torrent of new data from international combination efforts, and new data from CHIME and other upcoming radio telescopes. Modular and parallelized techniques help our searches scale much better with all the lovely new data. (2/3) In this plot, the posterior constraints on the GW Background amplitude from many separate pulsars are contrasted against what one gets from a simple combination in post-processing. It agrees super well with the regular PTA likelihood! ""Fix it in post"" as they say in showbiz 😀 <LINK>",https://arxiv.org/abs/2202.08293,"The characterization of nanohertz-frequency gravitational waves (GWs) with pulsar-timing arrays requires a continual expansion of datasets and monitored pulsars. Whereas detection of the stochastic GW background is predicated on measuring a distinctive pattern of inter-pulsar correlations, characterizing the background's spectrum is driven by information encoded in the power spectra of the individual pulsars' time series. We propose a new technique for rapid Bayesian characterization of the stochastic GW background that is fully parallelized over pulsar datasets. This Factorized Likelihood (FL) technique empowers a modular approach to parameter estimation of the GW background, multi-stage model selection of a spectrally-common stochastic process and quadrupolar inter-pulsar correlations, and statistical cross-validation of measured signals between independent pulsar sub-arrays. We demonstrate the equivalence of this technique's efficacy with the full pulsar-timing array likelihood, yet at a fraction of the required time. Our technique is fast, easily implemented, and trivially allows for new data and pulsars to be combined with legacy datasets without re-analysis of the latter. ","A Parallelized Bayesian Approach To Accelerated Gravitational-Wave |
Background Characterization",3,"['New paper to end the week! Together with some friends and team members, we developed a **parallelized** Bayesian pipeline for GW Background characterization in Pulsar Timing Arrays. (1/3) \n<LINK>', 'This is in preparation for the torrent of new data from international combination efforts, and new data from CHIME and other upcoming radio telescopes. Modular and parallelized techniques help our searches scale much better with all the lovely new data. (2/3)', 'In this plot, the posterior constraints on the GW Background amplitude from many separate pulsars are contrasted against what one gets from a simple combination in post-processing. It agrees super well with the regular PTA likelihood! \n\n""Fix it in post"" as they say in showbiz 😀 https://t.co/irSD0f0W4P']",22,02,740 |
165,101,1291539867890049026,1138255660750127108,Berthold Jaeck,Check out our new paper on quantum spin liquids: we report that tunneling spectroscopy could be well suited to find evidence for this elusive phase of matter. Great collaboration @PrincetonPhys with Mallika @MIT and my high school friend @ElioKonig @mpifkf <LINK>,https://arxiv.org/abs/2008.02278,"We examine the spectroscopic signatures of tunneling through a Kitaev quantum spin liquid (QSL) barrier in a number of experimentally relevant geometries. We combine contributions from elastic and inelastic tunneling processes and find that spin-flip scattering at the itinerant spinon modes gives rise to a gaped contribution to the tunneling conductance spectrum. We address the spectral modifications that arise in a magnetic field necessary to drive the candidate material $\alpha$-RuCl$_3$ into a QSL phase, and we propose a lateral 1D tunnel junction as a viable setup in this regime. The characteristic spin gap is an unambiguous signature of the fractionalized QSL excitations, distinguishing it from magnons or phonons. The results of our analysis are generically applicable to a wide variety of topological QSL systems. ",Tunneling spectroscopy of quantum spin liquids,1,['Check out our new paper on quantum spin liquids: we report that tunneling spectroscopy could be well suited to find evidence for this elusive phase of matter. Great collaboration @PrincetonPhys with Mallika @MIT and my high school friend @ElioKonig @mpifkf <LINK>'],20,08,263 |
166,0,1503330542510612485,757009671366606848,Luigi Acerbi,"Ever wanted to perform distributed Bayesian inference on large datasets, e.g. via parallel MCMC? Watch out for *embarrassing failures*! In our new paper at @aistats_conf, we explore what can go wrong with popular ""embarrassingly parallel"" methods <LINK> 1/ Embarrassingly parallel MCMC works by splitting the data into partitions, sent to different computing nodes. Each node performs MCMC separately and the results are sent back to the main node to be combined. So far so good, right? 2/ <LINK> Unfortunately, many things can go wrong! We show how mode collapse, model mismatch, and underrepresented tails *in any of the nodes* can independently produce a catastrophic output in the final combination step, for many commonly used parallel MCMC algorithms. 3/ <LINK> As a solution, we propose Parallel Active Inference (PAI), which uses a mix of (1) surrogate modeling via Gaussian processes; (2) sharing key information across nodes (a single extra step); and (3) active learning to smartly refine the local models. 4/ <LINK> We demonstrate how our proposed steps avoid catastrophic failures on several examples - and show why all these steps are necessary via ablation studies. 5/ <LINK> Check out the paper for more details (<LINK>), and the code is available here: <LINK> Work fantastically led by @spectraldani, with @wkly_infrmtive and @samikaski and the support of @FCAI_fi. 6/6",https://arxiv.org/abs/2202.11154,"Embarrassingly parallel Markov Chain Monte Carlo (MCMC) exploits parallel computing to scale Bayesian inference to large datasets by using a two-step approach. First, MCMC is run in parallel on (sub)posteriors defined on data partitions. Then, a server combines local results. While efficient, this framework is very sensitive to the quality of subposterior sampling. Common sampling problems such as missing modes or misrepresentation of low-density regions are amplified -- instead of being corrected -- in the combination phase, leading to catastrophic failures. In this work, we propose a novel combination strategy to mitigate this issue. Our strategy, Parallel Active Inference (PAI), leverages Gaussian Process (GP) surrogate modeling and active learning. After fitting GPs to subposteriors, PAI (i) shares information between GP surrogates to cover missing modes; and (ii) uses active sampling to individually refine subposterior approximations. We validate PAI in challenging benchmarks, including heavy-tailed and multi-modal posteriors and a real-world application to computational neuroscience. Empirical results show that PAI succeeds where previous methods catastrophically fail, with a small communication overhead. ",Parallel MCMC Without Embarrassing Failures,6,"['Ever wanted to perform distributed Bayesian inference on large datasets, e.g. via parallel MCMC? Watch out for *embarrassing failures*!\n\nIn our new paper at @aistats_conf, we explore what can go wrong with popular ""embarrassingly parallel"" methods <LINK> 1/', 'Embarrassingly parallel MCMC works by splitting the data into partitions, sent to different computing nodes. Each node performs MCMC separately and the results are sent back to the main node to be combined. So far so good, right? 2/ https://t.co/NDYwlaeLxS', 'Unfortunately, many things can go wrong!\nWe show how mode collapse, model mismatch, and underrepresented tails *in any of the nodes* can independently produce a catastrophic output in the final combination step, for many commonly used parallel MCMC algorithms. 
3/ https://t.co/D5kza84uBV', 'As a solution, we propose Parallel Active Inference (PAI), which uses a mix of (1) surrogate modeling via Gaussian processes; (2) sharing key information across nodes (a single extra step); and (3) active learning to smartly refine the local models. 4/ https://t.co/pEUucFZ0xi', 'We demonstrate how our proposed steps avoid catastrophic failures on several examples - and show why all these steps are necessary via ablation studies. 5/ https://t.co/ldhHza0r8s', 'Check out the paper for more details (https://t.co/pbVcFF8Xta), and the code is available here: https://t.co/QVdqgwJ7x9\n\nWork fantastically led by @spectraldani, with @wkly_infrmtive and @samikaski and the support of @FCAI_fi. 6/6']",22,02,1386 |
167,175,1286487690783780869,4666231375,Konstantin Batygin,"Even ""semi-active"" particles whose direct gravitational coupling is turned off, can still interact with one-another within standard N-body simulations by perturbing the central body. For details, check out our new paper, led by Shirui Peng: <LINK> @Caltech <LINK>",https://arxiv.org/abs/2007.11758,"Over the course of the recent decades, $N$-body simulations have become a standard tool for quantifying the gravitational perturbations that ensue in planet-forming disks. Within the context of such simulations, massive non-central bodies are routinely classified into ""big"" and ""small"" particles, where big objects interact with all other objects self-consistently, while small bodies interact with big bodies but not with each other. Importantly, this grouping translates to an approximation scheme where the orbital evolution of small bodies is dictated entirely by the dynamics of the big bodies, yielding considerable computational advantages with little added cost in terms of astrophysical accuracy. Here we point out, however, that this scheme can also yield spurious dynamical behaviour, where even in absence of big bodies within a simulation, indirect coupling among small bodies can lead to excitation of the constituent ""non-interacting"" orbits. We demonstrate this self-stirring by carrying out a sequence of numerical experiments, and confirm that this effect is largely independent of the time-step or the employed integration algorithm. Furthermore, adopting the growth of angular momentum deficit as a proxy for dynamical excitation, we explore its dependence on time, the cumulative mass of the system, as well as the total number of particles present in the simulation. Finally, we examine the degree of such indirect excitation within the context of conventional terrestrial planet formation calculations, and conclude that although some level of caution may be warranted, this effect plays a negligible role in driving the simulated dynamical evolution. ","Interactions Among Non-Interacting Particles in Planet Formation |
Simulations",1,"['Even ""semi-active"" particles whose direct gravitational coupling is turned off, can still interact with one-another within standard N-body simulations by perturbing the central body. For details, check out our new paper, led by Shirui Peng: <LINK> @Caltech <LINK>']",20,07,263 |
168,120,1125303748367126528,487990723,Gianfranco Bertone,"New paper on the arXiv today! Key result: discovering primordial #BlackHoles with @LIGO, @ego_virgo, Einstein Telescope, or @SKA_telescope, would rule out almost completely #supersymmetry and other theories predicting stable particles at the weak scale <LINK> <LINK> Work done in collaboration with terrific @GRAPPAInstitute team: Adam Coogan, @DanieleGaggero, @BradleyKavanagh and @C_Weniger Note that discovering primordial black holes would rule out weak-scale extensions of the standard model even in the case where the neutralino (or any other stable relic) contribute a negligible fraction of the #darkmatter in the universe Adam, @DanieleGaggero and @BradleyKavanagh made analysis code available on @github and @ZENODO_ORG. Click on links in captions to get python code to generate them #OpenScience",https://arxiv.org/abs/1905.01238,"Observational constraints on gamma rays produced by the annihilation of weakly interacting massive particles around primordial black holes (PBHs) imply that these two classes of Dark Matter candidates cannot coexist. We show here that the successful detection of one or more PBHs by radio searches (with the Square Kilometer Array) and gravitational waves searches (with LIGO/Virgo and the upcoming Einstein Telescope) would set extraordinarily stringent constraints on virtually all weak-scale extensions of the Standard Model with stable relics, including those predicting a WIMP abundance much smaller than that of Dark Matter. Upcoming PBHs searches have in particular the potential to rule out almost the entire parameter space of popular theories such as the minimal supersymmetric standard model and scalar singlet Dark Matter. ","Primordial Black Holes as Silver Bullets for New Physics at the Weak |
Scale",4,"['New paper on the arXiv today! Key result: discovering primordial #BlackHoles with @LIGO, @ego_virgo, Einstein Telescope, or @SKA_telescope, would rule out almost completely #supersymmetry and other theories predicting stable particles at the weak scale <LINK> <LINK>', 'Work done in collaboration with terrific @GRAPPAInstitute team: Adam Coogan, @DanieleGaggero, @BradleyKavanagh and @C_Weniger', 'Note that discovering primordial black holes would rule out weak-scale extensions of the standard model even in the case where the neutralino (or any other stable relic) contribute a negligible fraction of the #darkmatter in the universe', 'Adam, @DanieleGaggero and @BradleyKavanagh made analysis code available on @github and @ZENODO_ORG. Click on links in captions to get python code to generate them #OpenScience']",19,05,806 |
169,38,1376553367980285955,705950701986275328,Tzanio Kolev,"🤔 Need to couple or transfer fields between high- and low-order simulations? Check out our new paper with Will Pazner: 📄 ""Conservative and accurate solution transfer between high-order and low-order refined finite element spaces"" 👉 <LINK> #PoweredByMFEM <LINK>",https://arxiv.org/abs/2103.05283,"In this paper we introduce general transfer operators between high-order and low-order refined finite element spaces that can be used to couple high-order and low-order simulations. Under natural restrictions on the low-order refined space we prove that both the high-to-low-order and low-to-high-order linear mappings are conservative, constant preserving and high-order accurate. While the proofs apply to affine geometries, numerical experiments indicate that the results hold for more general curved and mixed meshes. These operators also have applications in the context of coarsening solution fields defined on meshes with nonconforming refinement. The transfer operators for $H^1$ finite element spaces require a globally coupled solve, for which robust and efficient preconditioners are developed. We present several numerical results confirming our analysis and demonstrate the utility of the new mappings in the context of adaptive mesh refinement and conservative multi-discretization coupling. ","Conservative and accurate solution transfer between high-order and |
low-order refined finite element spaces",1,"['🤔 Need to couple or transfer fields between high- and low-order simulations? Check out our new paper with Will Pazner: \n\n📄 ""Conservative and accurate solution transfer between high-order and low-order refined finite element spaces"" \n\n👉 <LINK>\n\n#PoweredByMFEM <LINK>']",21,03,262 |
170,35,1277468429323251713,2819715191,Antonella Palmese,"My new paper on a statistical standard siren measurement of the Hubble constant using gravitational wave events GW190814 GW170814 from @LIGO @ego_virgo and @theDESurvey galaxies, with improved photoz treatment <LINK> <LINK>",https://arxiv.org/abs/2006.14961v1,"We present a measurement of the Hubble constant $H_0$ using the gravitational wave (GW) event GW190814, which resulted from the coalescence of a 23 $M_\odot$ black hole with a 2.6 $M_\odot$ compact object, as a standard siren. No compelling electromagnetic counterpart with associated host galaxy has been identified for this event, thus our analysis accounts for $\sim$ 2,700 potential host galaxies within a statistical framework. The redshift information is obtained from the photometric redshift (photo-$z$) catalog from the Dark Energy Survey. The luminosity distance is provided by the gravitational wave sky map published by the LIGO/Virgo Collaboration. Since this GW event has the second-smallest sky localization area after GW170817, GW190814 is likely to provide the best constraint on cosmology from a single standard siren without identifying an electromagnetic counterpart. Our analysis uses photo-$z$ probability distribution functions and corrects for photo-$z$ biases. We also reanalyze the binary-black hole GW170814 within this updated framework. We explore how our findings impact the $H_0$ constraints from GW170817, the only GW merger associated with a unique host galaxy, and therefore the most powerful standard siren to date. From a combination of GW190814, GW170814 and GW170817, our analysis yields $H_0 = 69.0^{+ 14}_{- 7.5 }~{\rm km~s^{-1}~Mpc^{-1}}$ (68% Highest Density Interval, HDI) for a prior in $H_0$ uniform between $[20,140]~{\rm km~s^{-1}~Mpc^{-1}}$. The addition of GW190814 and GW170814 to GW170817 improves the 68% HDI from GW170817 alone by $\sim 12\%$, showing how well-localized mergers without counterparts can provide a marginal contribution to standard siren measurements, provided that a complete galaxy catalog is available at the location of the event. ","A statistical standard siren measurement of the Hubble constant from the |
LIGO/Virgo gravitational wave compact object merger GW190814 and Dark Energy |
Survey galaxies",3,"['My new paper on a statistical standard siren measurement of the Hubble constant using gravitational wave events GW190814 GW170814 from @LIGO @ego_virgo and @theDESurvey galaxies, with improved photoz treatment \n<LINK> <LINK>', 'Well localized GW events without counterpart (dark standard sirens) can provide marginal improvement to an H0 measurement from those with a counterpart, provided that a complete galaxy catalog exists in the localization area.', 'Things will get very interesting as we combine hundreds of dark sirens!']",20,06,521 |
171,41,966126604756819968,939498802767044608,Stephan,New #MachineLearning paper on multi-resolution tensor learning for large-scale spatial data with @yuqirose and @yisongyue! How do you learn basketball shot prediction models quickly? Use multi-resolution gradient descent with gradient entropy control! <LINK>,https://arxiv.org/abs/1802.06825,"High-dimensional tensor models are notoriously computationally expensive to train. We present a meta-learning algorithm, MMT, that can significantly speed up the process for spatial tensor models. MMT leverages the property that spatial data can be viewed at multiple resolutions, which are related by coarsening and finegraining from one resolution to another. Using this property, MMT learns a tensor model by starting from a coarse resolution and iteratively increasing the model complexity. In order to not ""over-train"" on coarse resolution models, we investigate an information-theoretic fine-graining criterion to decide when to transition into higher-resolution models. We provide both theoretical and empirical evidence for the advantages of this approach. When applied to two real-world large-scale spatial datasets for basketball player and animal behavior modeling, our approach demonstrate 3 key benefits: 1) it efficiently captures higher-order interactions (i.e., tensor latent factors), 2) it is orders of magnitude faster than fixed resolution learning and scales to very fine-grained spatial resolutions, and 3) it reliably yields accurate and interpretable models. ",Multi-resolution Tensor Learning for Large-Scale Spatial Data,1,['New #MachineLearning paper on multi-resolution tensor learning for large-scale spatial data with @yuqirose and @yisongyue! How do you learn basketball shot prediction models quickly? Use multi-resolution gradient descent with gradient entropy control! <LINK>'],18,02,258 |
172,69,1060938543546187778,712975316587819008,Pieter Roelfsema,A new paper on how deep learning could be implemented in the brain. We trained a network with a biologically plausible learning rule to recognize hand-written digits by trial-and-error: 2x slower than the non-biological error back propagation rule: <LINK>,https://arxiv.org/abs/1811.01768,"Researchers have proposed that deep learning, which is providing important progress in a wide range of high complexity tasks, might inspire new insights into learning in the brain. However, the methods used for deep learning by artificial neural networks are biologically unrealistic and would need to be replaced by biologically realistic counterparts. Previous biologically plausible reinforcement learning rules, like AGREL and AuGMEnT, showed promising results but focused on shallow networks with three layers. Will these learning rules also generalize to networks with more layers and can they handle tasks of higher complexity? We demonstrate the learning scheme on classical and hard image-classification benchmarks, namely MNIST, CIFAR10 and CIFAR100, cast as direct reward tasks, both for fully connected, convolutional and locally connected architectures. We show that our learning rule - Q-AGREL - performs comparably to supervised learning via error-backpropagation, with this type of trial-and-error reinforcement learning requiring only 1.5-2.5 times more epochs, even when classifying 100 different classes as in CIFAR100. Our results provide new insights into how deep learning may be implemented in the brain. ",A Biologically Plausible Learning Rule for Deep Learning in the Brain,1,['A new paper on how deep learning could be implemented in the brain. We trained a network with a biologically plausible learning rule to recognize hand-written digits by trial-and-error: 2x slower than the non-biological error back propagation rule: <LINK>'],18,11,255 |
173,101,1514186163917053956,106843613,Jacob Haqq Misra,"Alien farming could be detectable by searching for ammonia and nitrous oxide in exoplanet atmospheres as evidence of nitrogen cycle management. Read more about this #technosignature in our new paper! @ThomasFauchez, @nogreenstars, @ravi_kopparapu <LINK> @ThomasFauchez @nogreenstars @ravi_kopparapu We also summarize our ""ExoFarm"" study in this @sciworthy article <LINK>",https://arxiv.org/abs/2204.05360,"Agriculture is one of the oldest forms of technology on Earth. The cultivation of plants requires a terrestrial planet with active hydrological and carbon cycles and depends on the availability of nitrogen in soil. The technological innovation of agriculture is the active management of this nitrogen cycle by applying fertilizer to soil, at first through the production of manure excesses but later by the Haber-Bosch industrial process. The use of such fertilizers has increased the atmospheric abundance of nitrogen-containing species such as NH$_3$ and N$_2$O as agricultural productivity intensifies in many parts of the world. Both NH$_3$ and N$_2$O are effective greenhouse gases, and the combined presence of these gases in the atmosphere of a habitable planet could serve as a remotely detectable spectral signature of technology. Here we use a synthetic spectral generator to assess the detectability of NH$_3$ and N$_2$O that would arise from present-day and future global-scale agriculture. We show that present-day Earth abundances of NH$_3$ and N$_2$O would be difficult to detect but hypothetical scenarios involving a planet with 30-100 billion people could show a change in transmittance of about 50-70% compared to pre-agricultural Earth. These calculations suggest the possibility of considering the simultaneous detection of NH$_3$ and N$_2$O in an atmosphere that also contains H$_2$O, O$_2$, and CO$_2$ as a technosignature for extraterrestrial agriculture. The technology of agriculture is one that could be sustainable across geologic timescales, so the spectral signature of such an ""ExoFarm"" is worth considering in the search for technosignatures. ","Disruption of a Planetary Nitrogen Cycle as Evidence of Extraterrestrial |
Agriculture",2,"['Alien farming could be detectable by searching for ammonia and nitrous oxide in exoplanet atmospheres as evidence of nitrogen cycle management. \n\nRead more about this #technosignature in our new paper!\n\n@ThomasFauchez, @nogreenstars, @ravi_kopparapu\n\n<LINK>', '@ThomasFauchez @nogreenstars @ravi_kopparapu We also summarize our ""ExoFarm"" study in this @sciworthy article\n\nhttps://t.co/b0ZD02gewU']",22,04,371 |
174,30,808601641150717952,1069045039,daniele marinazzo,"We use public data (thanks!). We find partitions characterised by different levels of @russpoldrack's fatigue scores <LINK> <LINK> @ChrisFiloG @russpoldrack that's why he scored mostly low on those! But when he scored high, the connectome ended up in a separate community",https://arxiv.org/abs/1612.03760,"A novel approach rooted on the notion of consensus clustering, a strategy developed for community detection in complex networks, is proposed to cope with the heterogeneity that characterizes connectivity matrices in health and disease. The method can be summarized as follows: (i) define, for each node, a distance matrix for the set of subjects by comparing the connectivity pattern of that node in all pairs of subjects; (ii) cluster the distance matrix for each node; (iii) build the consensus network from the corresponding partitions; (iv) extract groups of subjects by finding the communities of the consensus network thus obtained. Differently from the previous implementations of consensus clustering, we thus propose to use the consensus strategy to combine the information arising from the connectivity patterns of each node. The proposed approach may be seen either as an exploratory technique or as an unsupervised pre-training step to help the subsequent construction of a supervised classifier. Applications on a toy model and two real data sets, show the effectiveness of the proposed methodology, which represents heterogeneity of a set of subjects in terms of a weighted network, the consensus matrix. ",Consensus clustering approach to group brain connectivity matrices,2,"[""We use public data (thanks!). We find partitions characterised by different levels of @russpoldrack's fatigue scores\n<LINK> <LINK>"", ""@ChrisFiloG @russpoldrack that's why he scored mostly low on those! But when he scored high, the connectome ended up in a separate community""]",16,12,271 |
175,106,1392318699282079744,18850305,Zachary Lipton,"Hey @UCSDJacobs, @ucsd_cse, @HDSIUCSD friends, I'll be ""at"" UCSD Thursday to talk about label shift & label noise, inc new paper on spiking training sets w unlabeled data (with random labels assigned) to guarantee generalization (<LINK>). <LINK> @limufar @UCSDJacobs @ucsd_cse @HDSIUCSD There's a ""website"" link on the linked page.",https://arxiv.org/abs/2105.00303,"To assess generalization, machine learning scientists typically either (i) bound the generalization gap and then (after training) plug in the empirical risk to obtain a bound on the true risk; or (ii) validate empirically on holdout data. However, (i) typically yields vacuous guarantees for overparameterized models. Furthermore, (ii) shrinks the training set and its guarantee erodes with each re-use of the holdout set. In this paper, we introduce a method that leverages unlabeled data to produce generalization bounds. After augmenting our (labeled) training set with randomly labeled fresh examples, we train in the standard fashion. Whenever classifiers achieve low error on clean data and high error on noisy data, our bound provides a tight upper bound on the true risk. We prove that our bound is valid for 0-1 empirical risk minimization and with linear classifiers trained by gradient descent. Our approach is especially useful in conjunction with deep learning due to the early learning phenomenon whereby networks fit true labels before noisy labels but requires one intuitive assumption. Empirically, on canonical computer vision and NLP tasks, our bound provides non-vacuous generalization guarantees that track actual performance closely. This work provides practitioners with an option for certifying the generalization of deep nets even when unseen labeled data is unavailable and provides theoretical insights into the relationship between random label noise and generalization. ",RATT: Leveraging Unlabeled Data to Guarantee Generalization,2,"['Hey @UCSDJacobs, @ucsd_cse, @HDSIUCSD friends, I\'ll be ""at"" UCSD Thursday to talk about label shift & label noise, inc new paper on spiking training sets w unlabeled data (with random labels assigned) to guarantee generalization (<LINK>).\n\n<LINK>', '@limufar @UCSDJacobs @ucsd_cse @HDSIUCSD There\'s a ""website"" link on the linked page.']",21,05,331 |
176,226,1313283901851344896,4902145390,Gordan Krnjaic,"New paper out tonight with @DanHooperAstro. We study how light primordial black holes (PBH) affect GUT baryogenesis. Hawking evaporation can produce very heavy particles even if the SM temperature is low, which changes the predictions for the baryon yield <LINK> @bvlehmann @jazzwhiz @DanHooperAstro Ben got it exactly right. The surrounding radiation temperature can be low, but as any BH evaporates, its personal temperature (distinct from the bath) increases",https://arxiv.org/abs/2010.01134,"In models of baryogenesis based on Grand Unified Theories (GUTs), the baryon asymmetry of the universe is generated through the CP and baryon number violating, out-of-equilibrium decays of very massive gauge or Higgs bosons in the very early universe. Recent constraints on the scale of inflation and the subsequent temperature of reheating, however, have put pressure on many such models. In this paper, we consider the role that primordial black holes may have played in the process of GUT baryogenesis. Through Hawking evaporation, black holes can efficiently generate GUT Higgs or gauge bosons, regardless of the masses of these particles or the temperature of the early universe. Furthermore, in significant regions of parameter space, the black holes evaporate after the electroweak phase transition, naturally evading the problem of sphaleron washout that is normally encountered in GUT models based on $SU(5)$. We identify a wide range of scenarios in which black holes could facilitate the generation of the baryon asymmetry through the production and decays of GUT bosons. ",GUT Baryogenesis With Primordial Black Holes,2,"['New paper out tonight with @DanHooperAstro. We study how light primordial black holes (PBH) affect GUT baryogenesis. Hawking evaporation can produce very heavy particles even if the SM temperature is low, which changes the predictions for the baryon yield\n\n<LINK>', '@bvlehmann @jazzwhiz @DanHooperAstro Ben got it exactly right. The surrounding radiation temperature can be low, but as any BH evaporates, its personal temperature (distinct from the bath) increases']",20,10,461 |
177,195,1380076559470628883,1242075170279493632,Matteo A. C. Rossi,"Excited to share our new results on VQE obtained in a nice collaboration with @IBMResearch Zurich team, @quantum_of_me , @PBarkoutsos , @GuglielmoMazzo3 and Ivano Tavernelli. We propose a new variational approach: learning to measure on the fly! <LINK>",https://arxiv.org/abs/2104.00569,"Many prominent quantum computing algorithms with applications in fields such as chemistry and materials science require a large number of measurements, which represents an important roadblock for future real-world use cases. We introduce a novel approach to tackle this problem through an adaptive measurement scheme. We present an algorithm that optimizes informationally complete positive operator-valued measurements (POVMs) on the fly in order to minimize the statistical fluctuations in the estimation of relevant cost functions. We show its advantage by improving the efficiency of the variational quantum eigensolver in calculating ground-state energies of molecular Hamiltonians with extensive numerical simulations. Our results indicate that the proposed method is competitive with state-of-the-art measurement-reduction approaches in terms of efficiency. In addition, the informational completeness of the approach offers a crucial advantage, as the measurement data can be reused to infer other quantities of interest. We demonstrate the feasibility of this prospect by reusing ground-state energy-estimation data to perform high-fidelity reduced state tomography. ","Learning to Measure: Adaptive Informationally Complete Generalized |
Measurements for Quantum Algorithms",1,"['Excited to share our new results on VQE obtained in a nice collaboration with @IBMResearch Zurich team, @quantum_of_me , @PBarkoutsos , @GuglielmoMazzo3 and Ivano Tavernelli. We propose a new variational approach: learning to measure on the fly! <LINK>']",21,04,252 |
178,141,1389775359747448832,1348851291313889280,Rajdeep Dasgupta,New paper accepted in ApJ: Led by @izidorocosta @CLEVER_Planets - tests the effects of a pressure bump in the outer disk on the inner solar system formation. Highlight: terrestrial embryos formed by accreting planetesimals than by accreting pebbles. <LINK> @CrustalEvo @izidorocosta @CLEVER_Planets The planetesimals form from the pebbles themselves. But the exact composition of pebble population can vary as a function of time and locally. I will let @izidorocosta answer in more detail.,https://arxiv.org/abs/2105.01101,"Mass-independent isotopic anomalies of carbonaceous and non-carbonaceous meteorites show a clear dichotomy suggesting an efficient separation of the inner and outer solar system. Observations show that ring-like structures in the distribution of mm-sized pebbles in protoplanetary disks are common. These structures are often associated with drifting pebbles being trapped by local pressure maxima in the gas disk. Similar structures may also have existed in the sun's natal disk, which could naturally explain the meteorite/planetary isotopic dichotomy. Here, we test the effects of a strong pressure bump in the outer disk (e.g. $\sim$5~au) on the formation of the inner solar system. We model dust coagulation and evolution, planetesimal formation, as well as embryo's growth via planetesimal and pebble accretion. Our results show that terrestrial embryos formed via planetesimal accretion rather than pebble accretion. In our model, the radial drift of pebbles foster planetesimal formation. However, once a pressure bump forms, pebbles in the inner disk are lost via drift before they can be efficiently accreted by embryos growing at $\gtrapprox$1~au. Embryos inside $\sim$0.5-1.0au grow relatively faster and can accrete pebbles more efficiently. However, these same embryos grow to larger masses so they should migrate inwards substantially, which is inconsistent with the current solar system. Therefore, terrestrial planets most likely accreted from giant impacts of Moon to roughly Mars-mass planetary embryos formed around $\gtrapprox$1.0~au. Finally, our simulations produce a steep radial mass distribution of planetesimals in the terrestrial region which is qualitatively aligned with formation models suggesting that the asteroid belt was born low-mass. ","The effect of a strong pressure bump in the Sun's natal disk: |
Terrestrial planet formation via planetesimal accretion rather than pebble |
accretion",2,"['New paper accepted in ApJ: Led by @izidorocosta @CLEVER_Planets - tests the effects of a pressure bump in the outer disk on the inner solar system formation. \n\nHighlight: terrestrial embryos formed by accreting planetesimals than by accreting pebbles.\n\n<LINK>', '@CrustalEvo @izidorocosta @CLEVER_Planets The planetesimals form from the pebbles themselves. But the exact composition of pebble population can vary as a function of time and locally. I will let @izidorocosta answer in more detail.']",21,05,490 |
179,34,893524518035234820,251927957,sorelle,New workshop paper on Interpretable Active Learning! Undergrad first author @rlanasphillips will present at #WHI2017 <LINK> @rlanasphillips Preview of one cool finding: can use new measure for uncertainty bias to see diff in uncertainty by race on @ProPublica recidivism data. <LINK> @krvarshney @rlanasphillips Thanks for organizing the workshop! Wish I could be there.,https://arxiv.org/abs/1708.00049,"Active learning has long been a topic of study in machine learning. However, as increasingly complex and opaque models have become standard practice, the process of active learning, too, has become more opaque. There has been little investigation into interpreting what specific trends and patterns an active learning strategy may be exploring. This work expands on the Local Interpretable Model-agnostic Explanations framework (LIME) to provide explanations for active learning recommendations. We demonstrate how LIME can be used to generate locally faithful explanations for an active learning strategy, and how these explanations can be used to understand how different models and datasets explore a problem space over time. In order to quantify the per-subgroup differences in how an active learning strategy queries spatial regions, we introduce a notion of uncertainty bias (based on disparate impact) to measure the discrepancy in the confidence for a model's predictions between one subgroup and another. Using the uncertainty bias measure, we show that our query explanations accurately reflect the subgroup focus of the active learning queries, allowing for an interpretable explanation of what is being learned as points with similar sources of uncertainty have their uncertainty bias resolved. We demonstrate that this technique can be applied to track uncertainty bias over user-defined clusters or automatically generated clusters based on the source of uncertainty. ",Interpretable Active Learning,3,"['New workshop paper on Interpretable Active Learning! Undergrad first author @rlanasphillips will present at #WHI2017 <LINK>', '@rlanasphillips Preview of one cool finding: can use new measure for uncertainty bias to see diff in uncertainty by race on @ProPublica recidivism data. https://t.co/g71oSAVWRu', '@krvarshney @rlanasphillips Thanks for organizing the workshop! Wish I could be there.']",17,08,370 |
180,76,1092706483920424961,892059194240532480,Mikel Artetxe,"Check out our new paper on ""An Effective Approach to Unsupervised Machine Translation"" (w/ @glabaka & @eagirre). We propose a more principled unsupervised SMT approach and hybridize it with NMT, improving previous SOTA by 5-7 BLEU points. <LINK> 5 years later, we outperform the WMT14 English-German winner using monolingual corpora only!",https://arxiv.org/abs/1902.01313,"While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through on-the-fly back-translation. Together, we obtain large improvements over the previous state-of-the-art in unsupervised machine translation. For instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014. ",An Effective Approach to Unsupervised Machine Translation,2,"['Check out our new paper on ""An Effective Approach to Unsupervised Machine Translation"" (w/ @glabaka & @eagirre). We propose a more principled unsupervised SMT approach and hybridize it with NMT, improving previous SOTA by 5-7 BLEU points.\n<LINK>', '5 years later, we outperform the WMT14 English-German winner using monolingual corpora only!']",19,02,338 |
181,77,1006308638540132352,2800204849,Andrew Gordon Wilson,"Our new paper, Probabilistic FastText for Multi-Sense Word Embeddings, is appearing as an oral at #ACL2018, with code! <LINK> We learn density embeddings that account for sub-word structure and multiple senses. Joint with Ben Athiwaratkun and @AnimaAnandkumar! <LINK>",https://arxiv.org/abs/1806.02901,"We introduce Probabilistic FastText, a new model for word embeddings that can capture multiple word senses, sub-word structure, and uncertainty information. In particular, we represent each word with a Gaussian mixture density, where the mean of a mixture component is given by the sum of n-grams. This representation allows the model to share statistical strength across sub-word structures (e.g. Latin roots), producing accurate representations of rare, misspelt, or even unseen words. Moreover, each component of the mixture can capture a different word sense. Probabilistic FastText outperforms both FastText, which has no probabilistic model, and dictionary-level probabilistic embeddings, which do not incorporate subword structures, on several word-similarity benchmarks, including English RareWord and foreign language datasets. We also achieve state-of-art performance on benchmarks that measure ability to discern different meanings. Thus, the proposed model is the first to achieve multi-sense representations while having enriched semantics on rare words. ",Probabilistic FastText for Multi-Sense Word Embeddings,1,"['Our new paper, Probabilistic FastText for Multi-Sense Word Embeddings, is appearing as an oral at #ACL2018, with code! <LINK>\nWe learn density embeddings that account for sub-word structure and multiple senses. Joint with Ben Athiwaratkun and @AnimaAnandkumar! <LINK>']",18,06,267 |
182,60,1186462785317654529,19089454,Dr. Teddy Kareta,"In other news, our new paper on the active Centaur 174P/Echeclus has been accepted for publication in the Astronomical Journal! We detail observations of its huge December 2017 outburst and find some strange stuff. <LINK> <LINK> We combined near-infrared spectra with visible imaging to infer a debris-ejection/mini-fragmentation event smaller than Echeclus's massive 2005 outburst but with many of the same properties. If Echeclus is typical of the active centaurs, other centaurs should be doing this too. (Thankfully, they totally are -- after we submitted the paper, Astronomer's Telegram 13179 was submitted finding something similar-ish at 29P/SW1. Phew.) <LINK> We also did some fun (and new for me) dynamical work to show that Echeclus's orbit hasn't changed nearly as much as many other active Centaurs, so the origin of Echeclus's modern strong activity is, for now, unclear. <LINK> The co-authors include the (by now) normal cast of characters of : @benjaminsharkey @J_Noons @kat_volk @moonyguy @WaltHarris2 and Dr. Richard Miles.",https://arxiv.org/abs/1910.09490,"The Centaurs are the small solar system bodies intermediate between the active inner solar system Jupiter Family Comets and their inactive progenitors in the trans-Neptunian region. Among the fraction of Centaurs which show comet-like activity, 174P/Echeclus is best known for its massive 2005 outburst in which a large apparently active fragment was ejected above the escape velocity from the primary nucleus. We present visible imaging and near-infrared spectroscopy of Echeclus during the first week after its December 2017 outburst taken at the Faulkes North & South Telescopes and the NASA IRTF, the largest outburst since 2005. The coma was seen to be highly asymmetric. A secondary peak was seen in the near-infrared 2D spectra, which is strongly hinted at in the visible images, moving hyperbolically with respect to the nucleus. The retrieved reflectance spectrum of Echelcus is consistent with the unobscured nucleus but becomes bluer when a wider extraction aperture is used. We find that Echeclus's coma is best explained as dominated by large blue dust grains, which agrees with previous work. We also conducted a high-resolution orbital integration of Echeclus's recent evolution and found no large orbital changes that could drive its modern evolution. We interpret the second peak in the visible and near-infrared datasets as a large cloud of larger-than-dust debris ejected at the time of outburst. If Echeclus is typical of the Centaurs, there may be several debris ejection or fragmentation events per year on other Centaurs that are going unnoticed. ","Physical Characterization of the December 2017 Outburst of the Centaur |
174P/Echeclus",5,"['In other news, our new paper on the active Centaur 174P/Echeclus has been accepted for publication in the Astronomical Journal! We detail observations of its huge December 2017 outburst and find some strange stuff.\n<LINK> <LINK>', ""We combined near-infrared spectra with visible imaging to infer a debris-ejection/mini-fragmentation event smaller than Echeclus's massive 2005 outburst but with many of the same properties. If Echeclus is typical of the active centaurs, other centaurs should be doing this too."", ""(Thankfully, they totally are -- after we submitted the paper, Astronomer's Telegram 13179 was submitted finding something similar-ish at 29P/SW1. Phew.) https://t.co/t7hcVkFln4"", ""We also did some fun (and new for me) dynamical work to show that Echeclus's orbit hasn't changed nearly as much as many other active Centaurs, so the origin of Echeclus's modern strong activity is, for now, unclear. https://t.co/ijsJgxDjli"", 'The co-authors include the (by now) normal cast of characters of : @benjaminsharkey @J_Noons @kat_volk @moonyguy @WaltHarris2 and Dr. Richard Miles.']",19,10,1041 |
183,57,1296058358114525185,5850692,Aaron Roth,"I'm excited about our new paper ""Moment Multicalibration for Uncertainty Estimation"" with @crispy_jung, @ChanghwaLee3, @malleshpai, and Ricky. <LINK> It gives a way to estimate the uncertainty of predictions that are simultaneously valid over many groups. 1/ Marginal prediction intervals (what the ""conformal prediction"" literature aims for) quantify uncertainty -on average- over everyone in the population. But a 95% conformal prediction interval might be completely wrong for people -like you- if your demographics are not typical. 2/ A similar problem arises for expectation estimation: the standard performance metric of calibration is an average over the whole population. Hebert-Johnson, @mikekimbackward, Reingold, and Rothblum proposed a way to do better called ""multicalibration"". <LINK> 3/ Multicalibration asks for calibration not just overall, but over an enormous number of intersecting subgroups. It turns out this is achievable even from a small sample! We show how to do something similar not just for means, but for variances and other higher moments. 4/ You can use multicalibrated moment estimates to compute prediction intervals that are valid not just averaged over the whole population, but simultaneously over each subgroup. Its also technically interesting that you can do this, because higher moments are nonlinear. 5/ Short blog post here: <LINK> 6/6",https://arxiv.org/abs/2008.08037,"We show how to achieve the notion of ""multicalibration"" from H\'ebert-Johnson et al. [2018] not just for means, but also for variances and other higher moments. Informally, it means that we can find regression functions which, given a data point, can make point predictions not just for the expectation of its label, but for higher moments of its label distribution as well-and those predictions match the true distribution quantities when averaged not just over the population as a whole, but also when averaged over an enormous number of finely defined subgroups. It yields a principled way to estimate the uncertainty of predictions on many different subgroups-and to diagnose potential sources of unfairness in the predictive power of features across subgroups. As an application, we show that our moment estimates can be used to derive marginal prediction intervals that are simultaneously valid as averaged over all of the (sufficiently large) subgroups for which moment multicalibration has been obtained. ",Moment Multicalibration for Uncertainty Estimation,6,"['I\'m excited about our new paper ""Moment Multicalibration for Uncertainty Estimation"" with @crispy_jung, @ChanghwaLee3, @malleshpai, and Ricky. <LINK> It gives a way to estimate the uncertainty of predictions that are simultaneously valid over many groups. 1/', 'Marginal prediction intervals (what the ""conformal prediction"" literature aims for) quantify uncertainty -on average- over everyone in the population. But a 95% conformal prediction interval might be completely wrong for people -like you- if your demographics are not typical. 2/', 'A similar problem arises for expectation estimation: the standard performance metric of calibration is an average over the whole population. Hebert-Johnson, @mikekimbackward, Reingold, and Rothblum proposed a way to do better called ""multicalibration"". https://t.co/9zzHfl4oSe 3/', 'Multicalibration asks for calibration not just overall, but over an enormous number of intersecting subgroups. It turns out this is achievable even from a small sample! 
We show how to do something similar not just for means, but for variances and other higher moments. 4/', 'You can use multicalibrated moment estimates to compute prediction intervals that are valid not just averaged over the whole population, but simultaneously over each subgroup. Its also technically interesting that you can do this, because higher moments are nonlinear. 5/', 'Short blog post here: https://t.co/mSmXDCEZVI 6/6']",20,08,1378 |
184,113,1216739447447810051,1216454926349418496,Isaiah Santistevan,"Just got my first, first-author paper on the arxiv! We determined formation times and studied the build-up of MW/M31-mass galaxies in the FIRE sims across cosmic time: <LINK> By examining when galaxies transition from mostly merger/accretion dominated growth, to mostly in-situ growth, the main progenitor of the host galaxy “forms/emerges” around z ~ 3-4 (11.6-12.2 Gyr ago) <LINK> About 100 galaxies with stellar mass over 1e5 solar masses formed a typical MW/M31-mass galaxy and its satellite population; this is ~ 5x more galaxies than the surviving population! <LINK> Finally, galaxies that are in LG-like environments (i.e., with a massive companion; in dotted lines) form earlier than isolated MW/M31-mass galaxies (solid lines)! <LINK> Many more details are in the paper if you're interested. Special thanks to @AndrewWetzel @kjb_astro @JossBlandHawtho @MBKplus and everyone else who helped out!",https://arxiv.org/abs/2001.03178,"Surveys of the Milky Way (MW) and M31 enable detailed studies of stellar populations across ages and metallicities, with the goal of reconstructing formation histories across cosmic time. These surveys motivate key questions for galactic archaeology in a cosmological context: when did the main progenitor of a MW/M31-mass galaxy form, and what were the galactic building blocks that formed it? We investigate the formation times and progenitor galaxies of MW/M31-mass galaxies using the FIRE-2 cosmological simulations, including 6 isolated MW/M31-mass galaxies and 6 galaxies in Local Group (LG)-like pairs at z = 0. We examine main progenitor ""formation"" based on two metrics: (1) transition from primarily ex-situ to in-situ stellar mass growth and (2) mass dominance compared to other progenitors. We find that the main progenitor of a MW/M31-mass galaxy emerged typically at z ~ 3-4 (11.6-12.2 Gyr ago), while stars in the bulge region (inner 2 kpc) at z = 0 formed primarily in a single main progenitor at z < 5 (< 12.6 Gyr ago). Compared with isolated hosts, the main progenitors of LG-like paired hosts emerged significantly earlier (\Delta z ~ 2, \Delta t ~ 1.6 Gyr), with ~ 4x higher stellar mass at all z > 4 (> 12.2 Gyr ago). This highlights the importance of environment in MW/M31-mass galaxy formation, especially at early times. Overall, about 100 galaxies with M_star > 10^5 M_sun formed a typical MW/M31-mass system. Thus, surviving satellites represent a highly incomplete census (by ~ 5x) of the progenitor population. ","The Formation Times and Building Blocks of Milky Way-mass Galaxies in |
the FIRE Simulations",5,"['Just got my first, first-author paper on the arxiv! We determined formation times and studied the build-up of MW/M31-mass galaxies in the FIRE sims across cosmic time: <LINK>', 'By examining when galaxies transition from mostly merger/accretion dominated growth, to mostly in-situ growth, the main progenitor of the host galaxy “forms/emerges” around z ~ 3-4 (11.6-12.2 Gyr ago) https://t.co/lApvqNdyAN', 'About 100 galaxies with stellar mass over 1e5 solar masses formed a typical MW/M31-mass galaxy and its satellite population; this is ~ 5x more galaxies than the surviving population! https://t.co/LQM5XqCtsY', 'Finally, galaxies that are in LG-like environments (i.e., with a massive companion; in dotted lines) form earlier than isolated MW/M31-mass galaxies (solid lines)! https://t.co/lbFg9BMIEv', ""Many more details are in the paper if you're interested. Special thanks to @AndrewWetzel @kjb_astro @JossBlandHawtho @MBKplus and everyone else who helped out!""]",20,01,903 |
185,63,1351130958020440068,3131128329,Dr Heidi Thiemann,"New paper day (and my second first author paper)! We analysed the first 1 million classifications from our Zooniverse project, SuperWASP Variable Stars, finding 301 brand stellar variables, and a bunch of other exciting stars. 🌟 Check it out: <LINK> <LINK> SuperWASP 🔭 observed the entire night sky for over a decade, originally hunting for exoplanets, but it's also fantastic for stellar variability studies. @ajnorton3 reanalysed the archive of ~30 million light curves, finding ~1.6 million candidate light curves of variable stars. <LINK> To classify those ~1.6 million light curves, we used @the_zooniverse and asked citizen scientists to take part in a pattern matching task designed to find broad variable types. Here's an example of a lovely contact eclipsing binary! <LINK> Fancy classifying a few variable stars? You can check out the project here: <LINK> Anyway, we found out loads, including how accurate the classifications are for different variable types and the discovery of 301 brand new stellar variables, including contact binaries near the short period cut off and a new binary configuration (but that's a future paper!). @chrislintott @ajnorton3 @AstroAdamMc @the_zooniverse @OU_SPS @SuperWASP_stars Thanks Chris! @ScienceSocks @ajnorton3 @AstroAdamMc @the_zooniverse @OU_SPS @SuperWASP_stars Thank you! We're now working on some exciting additions to the @SuperWASP_stars project, including bringing in machine learning, adding new workflows, and the fantastic @AstroAdamMc is building a new user interface so the variable star classifications and data will be available to all. So, watch this space for more variable star news. 🌟 I meant ""brand new"" but I apparently can't type. Oh well. 🤦♀️ @LauraForczyk @ajnorton3 @AstroAdamMc @the_zooniverse @OU_SPS @SuperWASP_stars Thanks Laura!",http://arxiv.org/abs/2101.06216,"We present the first analysis of results from the SuperWASP Variable Stars Zooniverse project, which is aiming to classify 1.6 million phase-folded light curves of candidate stellar variables observed by the SuperWASP all sky survey with periods detected in the SuperWASP periodicity catalogue. The resultant data set currently contains $>$1 million classifications corresponding to $>$500,000 object-period combinations, provided by citizen scientist volunteers. Volunteer-classified light curves have $\sim$89 per cent accuracy for detached and semi-detached eclipsing binaries, but only $\sim$9 per cent accuracy for rotationally modulated variables, based on known objects. We demonstrate that this Zooniverse project will be valuable for both population studies of individual variable types and the identification of stellar variables for follow up. We present preliminary findings on various unique and extreme variables in this analysis, including long period contact binaries and binaries near the short-period cutoff, and we identify 301 previously unknown binaries and pulsators. We are now in the process of developing a web portal to enable other researchers to access the outputs of the SuperWASP Variable Stars project. ",SuperWASP Variable Stars: Classifying Light Curves Using Citizen Science,11,"['New paper day (and my second first author paper)! \n\nWe analysed the first 1 million classifications from our Zooniverse project, SuperWASP Variable Stars, finding 301 brand stellar variables, and a bunch of other exciting stars. 
🌟\n\nCheck it out: <LINK> <LINK>', ""SuperWASP 🔭 observed the entire night sky for over a decade, originally hunting for exoplanets, but it's also fantastic for stellar variability studies. @ajnorton3 reanalysed the archive of ~30 million light curves, finding ~1.6 million candidate light curves of variable stars. https://t.co/wAjFCYjgcC"", ""To classify those ~1.6 million light curves, we used @the_zooniverse and asked citizen scientists to take part in a pattern matching task designed to find broad variable types. Here's an example of a lovely contact eclipsing binary! https://t.co/XKiKRHN3lS"", 'Fancy classifying a few variable stars? You can check out the project here: https://t.co/fIMXadpKop', ""Anyway, we found out loads, including how accurate the classifications are for different variable types and the discovery of 301 brand new stellar variables, including contact binaries near the short period cut off and a new binary configuration (but that's a future paper!)."", '@chrislintott @ajnorton3 @AstroAdamMc @the_zooniverse @OU_SPS @SuperWASP_stars Thanks Chris!', '@ScienceSocks @ajnorton3 @AstroAdamMc @the_zooniverse @OU_SPS @SuperWASP_stars Thank you!', ""We're now working on some exciting additions to the @SuperWASP_stars project, including bringing in machine learning, adding new workflows, and the fantastic @AstroAdamMc is building a new user interface so the variable star classifications and data will be available to all."", 'So, watch this space for more variable star news. 🌟', 'I meant ""brand new"" but I apparently can\'t type. Oh well. 🤦\u200d♀️', '@LauraForczyk @ajnorton3 @AstroAdamMc @the_zooniverse @OU_SPS @SuperWASP_stars Thanks Laura!']",21,01,1809 |
186,27,1322081311721480192,21902101,Jim Geach,New paper on arXiv today @MattJDoherty et al. “[NII] fine-structure emission at 122 and 205um in a galaxy at z=2.6: a globally dense star-forming interstellar medium” accepted in ApJ (@almaobs follow-up of the @SpaceWarps ‘red radio ring’ @chrislintott) <LINK> @chrislintott @MattJDoherty @almaobs @SpaceWarps @the_zooniverse Sure!,https://arxiv.org/abs/2010.15128,"We present new observations with the Atacama Large Millimeter/sub-millimeter Array of the 122um and 205um fine-structure line emission of singly-ionised nitrogen in a strongly lensed starburst galaxy at z=2.6. The 122/205um [NII] line ratio is sensitive to electron density, n_e, in the ionised interstellar medium, and we use this to measure n_e~300cm^-3 averaged across the galaxy. This is over an order of magnitude higher than the Milky Way average, but comparable to localised Galactic star-forming regions. Combined with observations of the atomic carbon (CI(1-0)) and carbon monoxide (CO(4-3)) in the same system, we reveal the conditions in this intensely star-forming system. The majority of the molecular interstellar medium has been driven to high density, and the resultant conflagration of star formation produces a correspondingly dense ionised phase, presumably co-located with myriad HII regions that litter the gas-rich disk. ","[NII] fine-structure emission at 122 and 205um in a galaxy at z=2.6: a |
globally dense star-forming interstellar medium",2,"['New paper on arXiv today @MattJDoherty et al. “[NII] fine-structure emission at 122 and 205um in a galaxy at z=2.6: a globally dense star-forming interstellar medium” accepted in ApJ\n\n(@almaobs follow-up of the @SpaceWarps ‘red radio ring’ @chrislintott) \n\n<LINK>', '@chrislintott @MattJDoherty @almaobs @SpaceWarps @the_zooniverse Sure!']",20,10,332 |
187,44,1119347697822130177,846041727232331776,Roei Herzig,"Excited to share our new CVPR19 paper, <LINK>, on 𝐩𝐫𝐞𝐜𝐢𝐬𝐞 𝐨𝐛𝐣𝐞𝐜𝐭 𝐝𝐞𝐭𝐞𝐜𝐭𝐢𝐨𝐧. Code & dataset are on <LINK>! #CVPR19 #ObjectDetection #TraxRetail #BIU <LINK> We collected a new SKU-110K dataset which takes detection challenges to unexplored territories: millions of possible facets; hundreds of heavily crowded objects per image. We propose a novel mechanism to learn deep overlap rates for each detection, and use an accurate clustering algorithm to resolve duplicates. Research done in collaboration with @TraxRetail, Prof. Tal Hassner, Prof. Jacob Goldberger, Eran Goldman, and Aviv Eisenschtat. We wish to express our gratitude for Dr. Ziv Mhabary and Dr. Yair Adato from Trax Research Group for their essential support in this work.",https://arxiv.org/abs/1904.00853,"Man-made scenes can be densely packed, containing numerous objects, often identical, positioned in close proximity. We show that precise object detection in such scenes remains a challenging frontier even for state-of-the-art object detectors. We propose a novel, deep-learning based method for precise object detection, designed for such challenging settings. Our contributions include: (1) A layer for estimating the Jaccard index as a detection quality score; (2) a novel EM merging unit, which uses our quality scores to resolve detection overlap ambiguities; finally, (3) an extensive, annotated data set, SKU-110K, representing packed retail environments, released for training and testing under such extreme settings. Detection tests on SKU-110K and counting tests on the CARPK and PUCPR+ show our method to outperform existing state-of-the-art with substantial margins. The code and data will be made available on \url{www.github.com/eg4000/SKU110K_CVPR19}. ",Precise Detection in Densely Packed Scenes,4,"['Excited to share our new CVPR19 paper, <LINK>, on 𝐩𝐫𝐞𝐜𝐢𝐬𝐞 𝐨𝐛𝐣𝐞𝐜𝐭 𝐝𝐞𝐭𝐞𝐜𝐭𝐢𝐨𝐧. \n\nCode & dataset are on <LINK>!\n\n#CVPR19 #ObjectDetection #TraxRetail #BIU <LINK>', 'We collected a new SKU-110K dataset which takes detection challenges to unexplored territories: millions of possible facets; hundreds of heavily crowded objects per image.', 'We propose a novel mechanism to learn deep overlap rates for each detection, and use an accurate clustering algorithm to resolve duplicates.', 'Research done in collaboration with @TraxRetail, Prof. Tal Hassner, Prof. Jacob Goldberger, Eran Goldman, and Aviv Eisenschtat. We wish to express our gratitude for Dr. Ziv Mhabary and Dr. Yair Adato from Trax Research Group for their essential support in this work.']",19,04,735 |
188,120,1006425985393315840,738769492122214400,Johannes Lischner,"X-ray photoemission is a useful technique for studying #catalysis, but analyzing spectra is challenging. In our new paper, we use DFT to calculate core-electron binding energies to make life easier for experimentalists: <LINK>. <LINK> Big thanks to @bluebananna and @photoelectrons for motivating this work!",https://arxiv.org/abs/1806.03895,"Core-level X-ray Photoelectron Spectroscopy (XPS) is often used to study the surfaces of heterogeneous copper-based catalysts, but the interpretation of measured spectra, in particular the assignment of peaks to adsorbed species, can be extremely challenging. In this study we demonstrate that first principles calculations using the delta Self Consistent Field (delta-SCF) method can be used to guide the analysis of experimental core-level spectra of complex surfaces relevant to heterogeneous catalysis. Specifically, we calculate core-level binding energy shifts for a series of adsorbates on Cu(111) and show that the resulting C1s and O1s binding energy shifts for adsorbed CO, CO2, C2H4, HCOO, CH3O, H2O, OH and a surface oxide on Cu(111) are in good overall agreement with the experimental literature. In the few cases where the agreement is less good, the theoretical results may indicate the need to re-examine experimental peak assignments. ","Core electron binding energies of adsorbates on Cu(111) from |
first-principles calculations",2,"['X-ray photoemission is a useful technique for studying #catalysis, but analyzing spectra is challenging. In our new paper, we use DFT to calculate core-electron binding energies to make life easier for experimentalists: <LINK>. <LINK>', 'Big thanks to @bluebananna and @photoelectrons for motivating this work!']",18,06,307 |
189,120,1437388857415188489,13800042,Lukas Heinrich,"Our new paper on publishing statistical models is out! This has been a long time coming with lots of progress in the last years to make it a reality. This is *one of the best* data products we have and making them public is the right thing to do. <LINK> <LINK> @TJGershon an important point is ""we are!"" <LINK> . It's not a pure hypothetical but a reality already. The paper a plea to make it more standard practice. Admittedly, it happened only recently. We discuss reasons a bit (closed/open world) but it also is a lot of sociology @TJGershon it might not have been as obvious to everyone (often heard weariness that information is too detailed, difficult to interpret, etc) but shared tooling and documentation imho are the way to go. The community can handle the truth.",https://arxiv.org/abs/2109.04981,"The statistical models used to derive the results of experimental analyses are of incredible scientific value and are essential information for analysis preservation and reuse. In this paper, we make the scientific case for systematically publishing the full statistical models and discuss the technical developments that make this practical. By means of a variety of physics cases -- including parton distribution functions, Higgs boson measurements, effective field theory interpretations, direct searches for new physics, heavy flavor physics, direct dark matter detection, world averages, and beyond the Standard Model global fits -- we illustrate how detailed information on the statistical modelling can enhance the short- and long-term impact of experimental results. ","Publishing statistical models: Getting the most out of particle physics |
experiments",3,"['Our new paper on publishing statistical models is out! This has been a long time coming with lots of progress in the last years to make it a reality. This is *one of the best* data products we have and making them public is the right thing to do.\n<LINK> <LINK>', '@TJGershon an important point is ""we are!"" https://t.co/PXkXGjR1cQ . It\'s not a pure hypothetical but a reality already. The paper a plea to make it more standard practice. Admittedly, it happened only recently. We discuss reasons a bit (closed/open world) but it also is a lot of sociology', '@TJGershon it might not have been as obvious to everyone (often heard weariness that information is too detailed, difficult to interpret, etc) but shared tooling and documentation imho are the way to go. The community can handle the truth.']",21,09,774 |
190,168,1433399816050987010,565140816,K-G Lee,"Paper Day! In which I again say: ""Give me ALL your multi-object spectroscopy"" and introduce the FLIMFLAM Survey to implement the new technique of FRB foreground mapping. This will yield, over the next few yrs, a comprehensive census of cosmic baryons. <LINK>",https://arxiv.org/abs/2109.00386,"The dispersion measures (DM) of fast radio bursts (FRBs) encode the integrated electron density along the line-of-sight, which is typically dominated by the intergalactic medium (IGM) contribution in the case of extragalactic FRBs. In this paper, we show that incorporating wide-field spectroscopic galaxy survey data in the foreground of localized FRBs can significantly improve constraints on the partition of diffuse cosmic baryons. Using mock DMs and realistic lightcone galaxy catalogs derived from the Millennium simulation, we define spectroscopic surveys that can be carried out with 4m and 8m-class wide field spectroscopic facilities. On these simulated surveys, we carry out Bayesian density reconstructions in order to estimate the foreground matter density field. In comparison with the `true' matter density field, we show that these can help reduce the uncertainties in the foreground structures by $\sim 2-3\times$ compared to cosmic variance. We calculate the Fisher matrix to forecast that $N=30\: (96)$ localized FRBs should be able to constrain the diffuse cosmic baryon fraction to $\sim 10\%\: (\sim 5\%) $, and parameters governing the size and baryon fraction of galaxy circumgalactic halos to within $\sim 20-25\%\: (\sim 8-12\%)$. From the Fisher analysis, we show that the foreground data increases the sensitivity of localized FRBs toward our parameters of interest by $\sim 25\times$. We briefly introduce FLIMFLAM, an ongoing galaxy redshift survey that aims to obtain foreground data on $\sim 30$ localized FRB fields. ","Constraining the Cosmic Baryon Distribution with Fast Radio Burst |
Foreground Mapping",1,"['Paper Day! In which I again say: ""Give me ALL your multi-object spectroscopy"" and introduce the FLIMFLAM Survey to implement the new technique of FRB foreground mapping. \n\nThis will yield, over the next few yrs, a comprehensive census of cosmic baryons.\n\n<LINK>']",21,09,259 |
191,40,1055309605645897729,869862586610851840,Jeannette Bohg,"Happy to share our new paper on Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks. Featuring RL on a real robot. <LINK> <LINK> <LINK> Kudos to Michelle Lee, @yukez, Krishnan Srinivasan, Parth Shah, @silviocinguetta, @drfeifei and @animesh_garg",https://arxiv.org/abs/1810.10191,"Contact-rich manipulation tasks in unstructured environments often require both haptic and visual feedback. However, it is non-trivial to manually design a robot controller that combines modalities with very different characteristics. While deep reinforcement learning has shown success in learning control policies for high-dimensional inputs, these algorithms are generally intractable to deploy on real robots due to sample complexity. We use self-supervision to learn a compact and multimodal representation of our sensory inputs, which can then be used to improve the sample efficiency of our policy learning. We evaluate our method on a peg insertion task, generalizing over different geometry, configurations, and clearances, while being robust to external perturbations. Results for simulated and real robot experiments are presented. ","Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal |
Representations for Contact-Rich Tasks",2,"['Happy to share our new paper on Making Sense of Vision and Touch: \nSelf-Supervised Learning of Multimodal Representations for Contact-Rich Tasks. Featuring RL on a real robot. <LINK> <LINK> <LINK>', 'Kudos to Michelle Lee, @yukez, Krishnan Srinivasan, Parth Shah, @silviocinguetta, @drfeifei and @animesh_garg']",18,10,305 |
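The abstract above describes compressing vision and touch into one compact representation with self-supervision before policy learning. The PyTorch sketch below shows only the generic pattern — a per-modality encoder for images and for force/torque readings, fused into a shared latent and trained with a self-supervised head (here a binary "are these streams time-aligned?" label). Layer sizes, the choice of auxiliary task, and all names are assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch (PyTorch): fuse camera and force/torque encoders into a
# shared latent trained with a self-supervised alignment-prediction head.
# Shapes, the auxiliary task, and all names are assumptions, not the paper's model.
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.rgb_encoder = nn.Sequential(        # 64x64 RGB frame -> feature vector
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 13 * 13, 256), nn.ReLU(),
        )
        self.ft_encoder = nn.Sequential(         # window of 32 six-axis F/T readings
            nn.Flatten(), nn.Linear(32 * 6, 64), nn.ReLU(),
        )
        self.fusion = nn.Linear(256 + 64, latent_dim)
        self.aligned_head = nn.Linear(latent_dim, 1)   # self-supervised target

    def forward(self, rgb, ft):
        z = self.fusion(torch.cat([self.rgb_encoder(rgb), self.ft_encoder(ft)], dim=-1))
        return z, self.aligned_head(z)

enc = MultimodalEncoder()
rgb = torch.randn(8, 3, 64, 64)
ft = torch.randn(8, 32, 6)
aligned = torch.randint(0, 2, (8, 1)).float()          # 1 = modalities time-aligned
z, logits = enc(rgb, ft)
loss = nn.functional.binary_cross_entropy_with_logits(logits, aligned)
loss.backward()
```

After this self-supervised stage, the compact latent z — rather than raw pixels and force traces — is what a downstream policy would consume, which is where the sample-efficiency gain described above comes from.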
192,141,1248049713149906945,759894532649545732,Aravind Srinivas,"New paper - CURL: Contrastive Unsupervised Representations for RL! We use the simplest form of contrastive learning (instance-based) as an auxiliary task in model-free RL. SoTA by *significant* margin on DMControl and Atari for data-efficiency. <LINK> <LINK> Highlights: Solves most of DMControl envs from pixels within 100K timesteps. Learning from pixels nearly matches learning from physical state for the first time SoTA on every single DMControl environment and 10x more data-efficient than previous SoTA by Dreamer <LINK> Highlights (cont): Atari100K timesteps: Competitive with SimPLE without learning any world model SoTA median human normalized score for 100K timesteps SoTA on 14/26 games for 100k timesteps <LINK> Method: Take any model-free RL algorithm (example, SAC or Rainbow). Create two augmentations (views) of your input. Add instance contrastive loss in the latent space. Optimize both the RL and contrastive loss. That’s it. <LINK> Contrastive Learning (CPCv2, MoCo, SimCLR) helps in data-efficiency on downstream vision tasks. Data-efficiency is particularly useful in RL. @maxjaderberg proposed UNREAL to improve sample-efficiency with reconstruction tasks. Turns out Contrastive >> Reconstruction. Implementation is extremely simple (follows MoCo / SimCLR style instance contrastive learning with InfoNCE loss) <LINK> We use random crop as data-augmentation for contrastive learning (highly effective) <LINK> We outperform model-based variants in both continuous and discrete settings by a significant margin. Code has been released by @MishaLaskin (joint first author) here: <LINK> Instance Contrastive Self-Supervised Learning (Siamese Networks with InfoNCE loss) is simple yet powerful and this result in RL adds on to previous successes in computer vision by MoCo and SimCLR. Thanks to my collaborators @MishaLaskin and @pabbeel @pabbeel @MishaLaskin @ylecun We have an exact LeCake ablation where encoder only learns from Siamese and no RL gradient (green) (It almost works as well as using both RL and contrastive gradients (red)): <LINK>",https://arxiv.org/abs/2004.04136,"We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games showing 1.9x and 1.2x performance gains at the 100K environment and interaction steps benchmarks respectively. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features. Our code is open-sourced and available at this https URL ","CURL: Contrastive Unsupervised Representations for Reinforcement |
Learning",11,"['New paper - CURL: Contrastive Unsupervised Representations for RL! We use the simplest form of contrastive learning (instance-based) as an auxiliary task in model-free RL. SoTA by *significant* margin on DMControl and Atari for data-efficiency. <LINK> <LINK>', 'Highlights:\nSolves most of DMControl envs from pixels within 100K timesteps.\nLearning from pixels nearly matches learning from physical state for the first time\nSoTA on every single DMControl environment and 10x more data-efficient than previous SoTA by Dreamer https://t.co/4M9f9XJrbS', 'Highlights (cont):\nAtari100K timesteps: Competitive with SimPLE without learning any world model\nSoTA median human normalized score for 100K timesteps \nSoTA on 14/26 games for 100k timesteps https://t.co/5HQyqrdLhI', 'Method:\nTake any model-free RL algorithm (example, SAC or Rainbow). Create two augmentations (views) of your input. Add instance contrastive loss in the latent space. Optimize both the RL and contrastive loss. That’s it. https://t.co/X9pN0F6eEr', 'Contrastive Learning (CPCv2, MoCo, SimCLR) helps in data-efficiency on downstream vision tasks. Data-efficiency is particularly useful in RL. @maxjaderberg proposed UNREAL to improve sample-efficiency with reconstruction tasks. Turns out Contrastive >> Reconstruction.', 'Implementation is extremely simple (follows MoCo / SimCLR style instance contrastive learning with InfoNCE loss) https://t.co/JPLMlOsgUc', 'We use random crop as data-augmentation for contrastive learning (highly effective) https://t.co/ObpCTHjxU8', 'We outperform model-based variants in both continuous and discrete settings by a significant margin. Code has been released by @MishaLaskin (joint first author) here: https://t.co/i0hGmvgnjJ', 'Instance Contrastive Self-Supervised Learning (Siamese Networks with InfoNCE loss) is simple yet powerful and this result in RL adds on to previous successes in computer vision by MoCo and SimCLR.', 'Thanks to my collaborators @MishaLaskin and @pabbeel', '@pabbeel @MishaLaskin @ylecun We have an exact LeCake ablation where encoder only learns from Siamese and no RL gradient (green) (It almost works as well as using both RL and contrastive gradients (red)): https://t.co/SI8aque2O5']",20,04,2074 |
193,12,1377615748462415878,3407899930,Freek Roelofs,"We have a new paper out showing how well we could measure black holes in the future! Just with more EHT observations, we could get several times better mass measurements. With space telescopes, we could get 0.5% precision and constrain black hole spin. ➡️<LINK> <LINK> The left plot shows the precision we recover the M87 black hole mass with for different simulated observations. We especially see a large improvement when we go from a single observation to 10 observations, where we average out the variable structure. This variable structure makes it difficult to fit to the data, but when we average multiple images we see the black hole shadow and lensed photon ring more clearly as illustrated here, allowing for a better mass measurement. <LINK> We also see a large improvement when considering only the input model as compared to a full simulation library (blue vs orange). This tells us that it will be very useful to have data at e.g. other wavelengths and polarizations to narrow down the model parameter space. The right image above shows the distribution over black hole spin of the models that are acceptable after fitting to the simulated data from different arrays. When we observe with submillimeter telescopes in orbit around Earth (EHI), the black hole spin starts to be constrained. With many developments in other parameter extraction techniques and instrument upgrades, this tells us that the future of black hole parameter measurements is bright! With a strongly constrained mass and spin, we can test general relativity to high precision.",http://arxiv.org/abs/2103.16736,"The Event Horizon Telescope (EHT) has imaged the shadow of the supermassive black hole in M87. A library of general relativistic magnetohydrodynamics (GMRHD) models was fit to the observational data, providing constraints on black hole parameters. We investigate how much better future experiments can realistically constrain these parameters and test theories of gravity. We generate realistic synthetic 230 GHz data from representative input models taken from a GRMHD image library for M87, using the 2017, 2021, and an expanded EHT array. The synthetic data are run through a data reduction pipeline used by the EHT. Additionally, we simulate observations at 230, 557, and 690 GHz with the Event Horizon Imager (EHI) Space VLBI concept. Using one of the EHT parameter estimation pipelines, we fit the GRMHD library images to the synthetic data and investigate how the black hole parameter estimations are affected by different arrays and repeated observations. Repeated observations play an important role in constraining black hole and accretion parameters as the varying source structure is averaged out. A modest expansion of the EHT already leads to stronger parameter constraints. High-frequency observations from space rule out all but ~15% of the GRMHD models in our library, strongly constraining the magnetic flux and black hole spin. The 1$\sigma$ constraints on the black hole mass improve by a factor of five with repeated high-frequency space array observations as compared to observations with the current ground array. If the black hole spin, magnetization, and electron temperature distribution can be independently constrained, the shadow size for a given black hole mass can be tested to ~0.5% with the EHI, which allows tests of deviations from general relativity. 
High-precision tests of the Kerr metric become within reach from observations of the Galactic Center black hole Sagittarius A*. ","Black hole parameter estimation with synthetic Very Long Baseline |
Interferometry data from the ground and from space",6,"['We have a new paper out showing how well we could measure black holes in the future! Just with more EHT observations, we could get several times better mass measurements. With space telescopes, we could get 0.5% precision and constrain black hole spin.\n\n➡️<LINK> <LINK>', 'The left plot shows the precision we recover the M87 black hole mass with for different simulated observations. We especially see a large improvement when we go from a single observation to 10 observations, where we average out the variable structure.', 'This variable structure makes it difficult to fit to the data, but when we average multiple images we see the black hole shadow and lensed photon ring more clearly as illustrated here, allowing for a better mass measurement. https://t.co/4pca5tv6pW', 'We also see a large improvement when considering only the input model as compared to a full simulation library (blue vs orange). This tells us that it will be very useful to have data at e.g. other wavelengths and polarizations to narrow down the model parameter space.', 'The right image above shows the distribution over black hole spin of the models that are acceptable after fitting to the simulated data from different arrays. When we observe with submillimeter telescopes in orbit around Earth (EHI), the black hole spin starts to be constrained.', 'With many developments in other parameter extraction techniques and instrument upgrades, this tells us that the future of black hole parameter measurements is bright! With a strongly constrained mass and spin, we can test general relativity to high precision.']",21,03,1562 |
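The study above fits a library of model images to synthetic VLBI data and finds that averaging repeated observations washes out the variable source structure, tightening the parameter constraints. The NumPy toy below illustrates only that generic logic — average several noisy, time-variable epochs and then score a model library with a chi-square statistic — with every array, shape, and noise level invented for illustration; it is not the EHT parameter-estimation pipeline.

```python
# Toy sketch (NumPy): epoch-averaging followed by a chi-square scan over a
# model library. All shapes and noise levels are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_vis = 10, 200
sigma_thermal, sigma_var = 0.05, 0.2          # assumed noise and source variability

truth = rng.normal(size=n_vis)                # stand-in for the time-averaged signal
epochs = (truth
          + sigma_thermal * rng.normal(size=(n_epochs, n_vis))
          + sigma_var * rng.normal(size=(n_epochs, n_vis)))   # per-epoch structure

avg_data = epochs.mean(axis=0)                # averaging suppresses the variable part
avg_sigma = np.sqrt(sigma_thermal**2 + sigma_var**2) / np.sqrt(n_epochs)

library = truth + rng.normal(scale=0.1, size=(50, n_vis))     # 50 candidate models
chi2 = (((library - avg_data) / avg_sigma) ** 2).sum(axis=1)

best = int(np.argmin(chi2))
acceptable = np.flatnonzero(chi2 < chi2.min() + 25.0)          # crude acceptance cut
print(best, acceptable.size)
```

The qualitative point carried over from the paper is simply that avg_sigma shrinks with the number of epochs, so more repeated observations let fewer library models pass the cut.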
194,116,1125657425456041985,526115229,Kevin Heng,"Very proud of my postdoc Jens Hoeijmakers, who followed up his Nature paper on discovering iron and titanium in KELT-9b with this new A&A study. First discovery of chromium, scandium and yttrium in an exoplanetary atmosphere at high spectral resolution. <LINK> It is necessary to point out massive contributions from two of my other (senior) postdocs: Simon Grimm basically converted the entire Kurucz database of atomic line lists into opacities; Daniel Kitzmann converted these opacities into transmission spectra (with variable gravity).",https://arxiv.org/abs/1905.02096,"Context: KELT-9 b exemplifies a newly emerging class of short-period gaseous exoplanets that tend to orbit hot, early type stars - termed ultra-hot Jupiters. The severe stellar irradiation heats their atmospheres to temperatures of $\sim 4,000$ K, similar to the photospheres of dwarf stars. Due to the absence of aerosols and complex molecular chemistry at such temperatures, these planets offer the potential of detailed chemical characterisation through transit and day-side spectroscopy. Studies of their chemical inventories may provide crucial constraints on their formation process and evolution history. Aims: To search the optical transmission spectrum of KELT-9 b for absorption lines by metals using the cross-correlation technique. Methods: We analyse 2 transits observed with the HARPS-N spectrograph. We use an isothermal equilibrium chemistry model to predict the transmission spectrum for each of the neutral and singly-ionized atoms with atomic numbers between 3 and 78. Of these, we identify the elements that are expected to have spectral lines in the visible wavelength range and use those as cross-correlation templates. Results: We detect absorption of Na I, Cr II, Sc II and Y II, and confirm previous detections of Mg I, Fe I, Fe II and Ti II. In addition, we find evidence of Ca I, Cr I, Co I, and Sr II that will require further observations to verify. The detected absorption lines are significantly deeper than model predictions, suggesting that material is transported to higher altitudes where the density is enhanced compared to a hydrostatic profile. There appears to be no significant blue-shift of the absorption spectrum due to a net day-to-night side wind. In particular, the strong Fe II feature is shifted by $0.18 \pm 0.27$ km~s$^{-1}$, consistent with zero. Using the orbital velocity of the planet we revise the steller and planetary masses and radii. ","A spectral survey of an ultra-hot Jupiter: Detection of metals in the |